How to Write a Text Adventure in Python Appendix A: Saving A Game

Merry Christmas! This has been one of the most requested features for me to add to the tutorial, so here goes. While it modifies the free tutorial code, the concepts here can also apply to the book.

Saving and loading data from code to a binary or text format is referred to as “serialization” and “deserialization”. When different computer systems need to share data, this is often done with XML or JSON. If that’s not a concern, Python can use the built-in pickle library.
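As a quick standalone illustration (not part of the game code), pickle can round-trip an ordinary Python object through bytes:

```python
import pickle

# Any picklable object works: plain data structures, or instances of
# importable classes (like the tutorial's Player).
player_state = {"name": "Hero", "hp": 42, "inventory": ["sword", "bread"]}

data = pickle.dumps(player_state)   # serialize to bytes
restored = pickle.loads(data)       # deserialize back to an equal object

print(restored == player_state)  # True
```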

Saving the game

First, we’ll need to create the action for the player. Start by adding import pickle to the top of the player file, then add this method to the Player class:

def save_and_exit(self):
    pickle.dump(self, open("saved_player.p", "wb"))
    pickle.dump(world._world, open("saved_world.p", "wb"))
    print("Game saved!")

The pickle.dump function serializes an object to Python’s binary pickle format. The first parameter is the object to save, and the second is a writable file to save it to. We need to save both the world itself (the tiles) and the player.
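As an aside, pickle.dump accepts any writable binary file object. If you prefer not to leave the file handles from open() unclosed, the same save can be written with a context manager (a stylistic variant, not required by the tutorial; save_object is a hypothetical helper name):

```python
import pickle

def save_object(obj, filename):
    # "wb" because pickle writes bytes, not text; the with-block
    # guarantees the file handle is closed after the dump.
    with open(filename, "wb") as f:
        pickle.dump(obj, f)

save_object({"hp": 42}, "saved_player.p")
```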

Next, go to the actions file and create an action for this method:

class SaveAndExit(Action):
    def __init__(self):
        super().__init__(method=Player.save_and_exit, name="Save and Exit", hotkey='x')

Finally, we have to make sure this shows up as an option for the player, so add moves.append(actions.SaveAndExit()) to the end of the list of actions in available_actions:

def available_actions(self):
    """Returns all of the available actions in this room."""
    moves = self.adjacent_moves()
    moves.append(actions.SaveAndExit())

    return moves

If you run the game now, you can save the game and you should notice two files appear in the directory of your game code.

Loading the game

To load the game, we basically have to do the same process in reverse. So we’ll check to see if the save files exist, and if they do, transform the data into the game objects.

Since we need to handle new games and saved games, I renamed the play method to game_loop and created a new play method:

def play(saved_world=None, saved_player=None):
    if saved_world and saved_player:
        # Restore the saved objects
        world._world = saved_world
        player = saved_player
    else:
        # No save data: build a fresh world and player as before
        world.load_tiles()
        player = Player()
    game_loop(player)

def game_loop(player):
    # same code that used to be in "play"

The new play method has optional parameters for the saved objects. If they are present, we manually set the world and player to those parameters. Otherwise, we default to creating a new world and player.

Next add these imports:

from pathlib import Path
import pickle

Now we’ll create a function that checks whether the files are present and, if so, loads them and passes them into our new play method. Here we’ll use pickle.load, which reads a file and unpacks it into an object.

def check_for_save():
    if Path("saved_player.p").is_file() and Path("saved_world.p").is_file():
        saved_world = pickle.load(open("saved_world.p", "rb"))
        saved_player = pickle.load(open("saved_player.p", "rb"))
        save_exists = True
    else:
        save_exists = False

I wanted to give the player the option of loading the saved game or starting a new one, so I added this code too:

    if save_exists:
        valid_input = False
        while not valid_input:
            load = input("Saved game found! Do you want to load the game? Y/N ")
            if load in ['Y', 'y']:
                play(saved_world, saved_player)
                valid_input = True
            elif load in ['N', 'n']:
                play()
                valid_input = True
            else:
                print("Invalid choice.")
    else:
        play()

Notice how we use the two variants of play, sometimes with parameters and sometimes without.
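That default-argument dispatch can be reduced to a tiny standalone sketch (the return values here are hypothetical, just to show the shape):

```python
def play(saved_world=None, saved_player=None):
    # With both arguments supplied, resume the saved game;
    # with no arguments, fall back to starting a new one.
    if saved_world and saved_player:
        return "loaded game"
    return "new game"

print(play())                          # new game
print(play({"tiles": []}, "hero"))     # loaded game
```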

Last, we have to change the entry point of our code:

if __name__ == "__main__":
    check_for_save()

Now when you play the game, you should see it pick up your save files and load the game at the spot where you saved it.

As always, the full code is on GitHub!

Click here for Part 1 of the abridged tutorial.

Java for the Real World Video Course

Since I published my Java for the Real World eBook, I’ve received several inquiries about providing the learning material via video. I will admit I am still generally skeptical about video, but I finally decided to bite the bullet and make a video course. I’ve published the course at Udemy, a site I’ve used and enjoyed in the past.


The content of the video course is very similar to the book. We cover:

  • The JVM
  • Build tools
  • Testing
  • Spring
  • Web Application Frameworks
  • Web Application Deployment
  • Data Access
  • Logging

I walk through the code example projects that are on GitHub and describe how to use the various tools in the application.

I hope that the combination of an eBook and a video course will help get the knowledge out to more people. The world of Java is intimidating, so get started today!

Click here to learn Java for the Real World!

Vaadin Flow Trepidation

I started using Vaadin about a year ago and was really happy with the results. I got a nice looking web application written almost entirely in Java. I also found being able to work around the traditional MVC pattern refreshing. So, I was naturally interested to see what they would do for their version 10 release. (Side note: What’s the fear about the number nine? Windows 9, iPhone 9, Vaadin 9…all missing.)

I got notice a few months ago that the new version--branded Vaadin Flow--was released along with a new website. But when I clicked through, I was a bit confused to see Vaadin Components being proffered alongside the Java framework. As it turns out, their entire lineup pivoted to the Google Polymer JavaScript framework. It certainly feels like the company is trying to position itself as a competitor in the already-dense JavaScript world, with the Java framework playing second fiddle.

Now of course the party line is that Vaadin Flow is still a first-rate product, but peel back the marketing and you’ll see that it is severely lacking in features. The product was released without support for a ComboBox (it has since been added) or checkbox groups, and the suggested workaround was to use HTML tags manually.

Several more components are listed as “Not Planned” with the most passive aggressive one being MenuBar: “A menu bar is a desktop pattern that is typically not suitable in a world of mobile first applications.”

The new framework isn’t without improvements. In particular, I liked the simplicity of using a @Route annotation on a component to expose it as a view. It’s also much easier to include custom CSS, eliminating the annoying cycle of making a CSS change, compiling the CSS, and relaunching the application.

// Simple! The @Route annotation exposes this component as a view.
@Route("my-view")
public class MyView extends Div {
    // component content goes here
}
I’m hoping that this direction is just a temporary misjudgement on the part of the Vaadin team. I have no problems with Vaadin moving from GWT to Polymer, but worry that developing the JavaScript components at the expense of dropping features from Vaadin Flow is going to make the product much less appealing. The conspiracy theorist in me fears that this was planned to make Vaadin Flow so unused that they can justify dropping it completely. But hopefully there will be enough community feedback to make them reconsider their decision to drop features and make Vaadin Flow just as usable as Vaadin 8.

Java for the Real World Updated for Java 11

With Java 11 being released in just over a week, I have decided to push out a significant update to Java for the Real World.

For a limited time, you can get the book for only $11.11 in celebration of JDK 11!

The major updates include:

  • Updated syntax for all code examples to take advantage of the improvements in recent JDK releases
  • A new project and section about Vaadin Flow
  • Updated dependency versions
  • Additional discussion about the various JDKs available
  • New guidance about running Java applications in Docker containers

OpenJDK 11 will be a long term support version, so it is definitely worth taking a look at the new features and upgrading when possible.

Java Build Tools: Ant vs. Maven vs. Gradle

This is an abbreviated chapter from my book Java for the Real World. Want more content like this? Click here to get the book!

For anything but the most trivial applications, compiling Java from the command line is an exercise in masochism. The difficulty of including dependencies and making executable .jar files is why build tools were created.

For this example, we will be compiling this trivial application:

package com.example.iscream;

import com.example.iscream.service.DailySpecialService;
import java.util.List;

public class Application {
    public static void main(String[] args) {
        System.out.println("Starting store!\n\n==============\n");

        DailySpecialService dailySpecialService = new DailySpecialService();
        List<String> dailySpecials = dailySpecialService.getSpecials();

        System.out.println("Today's specials are:");
        dailySpecials.forEach(s -> System.out.println(" - " + s));
    }
}

The application depends on a second class, which uses Guava’s Lists helper (note the extra import):

package com.example.iscream.service;

import com.google.common.collect.Lists;

import java.util.List;

public class DailySpecialService {

    public List<String> getSpecials() {
        return Lists.newArrayList("Salty Caramel", "Coconut Chip", "Maui Mango");
    }
}
Ant

The program make has been used for over forty years to compile source code into applications. As such, it was the natural choice in Java’s early years. Unfortunately, many of the assumptions and conventions of C programs don’t translate well to the Java ecosystem. To make (har) building the Java Tomcat application easier, James Duncan Davidson wrote Ant. Soon, other open source projects started using Ant, and from there it quickly spread throughout the community.

Build files

Ant build files are written in XML and are called build.xml by convention. I know even the word “XML” makes some people shudder, but in small doses it isn’t too painful. I promise. Ant calls the different phases of the build process “targets”. Targets that are defined in the build file can then be invoked using the ant TARGET command where TARGET is the name of the target.

Here’s the complete build file with the defined targets:


<project name="iscream" basedir="." default="run">

    <path id="classpath">
        <fileset dir="lib" includes="**/*.jar"/>
    </path>

    <target name="clean">
        <delete dir="build"/>
    </target>

    <target name="compile">
        <mkdir dir="build/classes"/>
        <javac srcdir="src/main/java" destdir="build/classes"
               classpathref="classpath" includeantruntime="false"/>
    </target>

    <target name="jar" depends="compile">
        <mkdir dir="build/jar"/>
        <jar destfile="build/jar/IScream.jar" basedir="build/classes"/>
    </target>

    <target name="run" depends="jar">
        <java fork="true" classname="com.example.iscream.Application">
            <classpath>
                <path refid="classpath"/>
                <path location="build/jar/IScream.jar"/>
            </classpath>
        </java>
    </target>
</project>
With these targets defined, you can run ant clean, ant compile, ant jar, or ant run to clean, compile, package, and run the application.

Of course, the build file you’re likely to encounter in a real project is going to be much more complex than this example. Ant has dozens of built-in tasks, and it’s possible to define custom tasks too. A typical build might move around files, assemble documentation, run tests, publish build artifacts, etc. If you are lucky and are working on a well-maintained project, the build file should “just work”. If not, you may have to make tweaks for your specific computer. Keep an eye out for .properties files referenced by the build file that may contain configurable filepaths, environments, etc.


While setting up a build script takes some time up front, hopefully you can see the benefit of using one over passing commands manually to Java. Of course, Ant isn’t without its own problems. First, there are few enforced standards in an Ant script. This provides flexibility, but at the cost of every build file being entirely different. In the same way that knowing Java doesn’t mean you can jump into any codebase, knowing Ant doesn’t mean you can jump into any Ant file--you need to take time to understand it. Second, the imperative nature of Ant means build scripts can get very, very long. One example I found is over 2,000 lines long! Finally, Ant has no built-in capability for dependency management, although it can be supplemented with Ivy. These limitations, along with some other build script annoyances, led to the creation of Maven in the early 2000s.


Maven

Maven is really two tools in one: a dependency manager and a build tool. Like Ant it is XML-based, but unlike Ant, it outlines fairly rigid standards. Furthermore, Maven is declarative: you define what your build should do rather than spelling out how to do it. These advantages make Maven appealing; build files are much more standard across projects, and developers spend less time tailoring them. As such, Maven has become somewhat of a de facto standard in the Java world.

Maven Phases

The most common build phases are included in Maven and can be executed by running mvn PHASE (where PHASE is the phase name). The phase you will invoke most often is install, because it fully builds and tests the project and then creates a build artifact.

Although it isn’t actually a phase, the command mvn clean deserves a mention. Running that command will “clean” your local build directory (i.e. /target), and remove compiled classes, resources, packages, etc. In theory, you should just be able to run mvn install and your build directory will be updated automatically. However, it seems that enough developers (including myself) have been burned by this not working that we habitually run mvn clean install to force the project to build from scratch.

Project Object Model (POM) Files

Maven’s build files are called Project Object Model files, usually just abbreviated to POM, and are saved as pom.xml in the root directory of a project. In order for Maven to work out of the box, it’s important to follow this directory structure:

├── pom.xml
└── src
    ├── main
    │   ├── java
    │   │    <-- Your Java code goes here
    │   ├── resources
    │   │    <-- Non-code files that your app/library needs
    └── test
        ├── java
        │    <-- Java tests
        ├── resources
        │    <-- Non-code files that your tests need

As mentioned previously, Maven has dependency management built in. The easiest way to find the correct values is from the project's website or the MVNRepository site. For our build, we also need one of Apache's official plugins--the Shade plugin--which is used to build fat .jar files.

Here's the complete POM file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>iscream</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>21.0</version>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>3.1.0</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>com.example.iscream.Application</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

At this point you can run mvn package and you will see the iscream-0.0.1-SNAPSHOT.jar file inside the target folder. Running java -jar target/iscream-0.0.1-SNAPSHOT.jar starts the application.


Although Maven has made considerable strides in making builds easier, all Maven users have found themselves banging their head against the wall with a tricky Maven problem at one time or another. I've already mentioned some usability problems with plugins, but there's also the problem of "The Maven Way". Anytime a build deviates from what Maven expects, it can be difficult to put in a work-around. Many projects are "normal...except for that one weird thing we have to do". And the more "weird things" in the build, the harder it can be to bend Maven to your will. Wouldn't it be great if we could combine the flexibility of Ant with the features of Maven? That's exactly what Gradle is trying to do.


Gradle

The first thing you will notice about a Gradle build script is that it is not XML! In fact, Gradle uses a domain-specific language (DSL) based on Groovy, another programming language that runs on the JVM.

The DSL defines both the core parts of the build file and specific build steps called "tasks". It is also extensible making it very easy to define your own tasks. And of course, Gradle also has a rich third-party plugin library. Let's dive in.

Build files

Gradle build files are appropriately named build.gradle and start out by configuring the build. For our project we need to take advantage of a fat jar plugin, so we will add the Shadow plugin to the build script configuration.

In order for Gradle to download the plugin, it has to look in a repository, which is an index for artifacts. Some repositories are known to Gradle and can be referred to simply as mavenCentral() or jcenter(). The Gradle team decided to not reinvent the wheel when it comes to repositories and instead relies on the existing Maven and Ivy dependency ecosystems.


Finally, after Ant's obscure "targets" and Maven's confusing "phases", Gradle gives a reasonable name to its build steps: "tasks". We use Gradle's apply to give our build access to certain tasks. (The java plugin is built in to Gradle, which is why we did not need to declare it in the build script's dependencies.)

The java plugin will give you common tasks such as clean, compileJava, test, etc. The shadow plugin will give you the shadowJar task which builds a fat jar. To see a complete list of the available tasks, you can run gradle -q tasks.

Dependency Management

We've already discussed how a build script can rely on a plugin dependency, likewise the build script can define the dependencies for your project. Here's the complete build file:

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.4'
    }
}

apply plugin: 'java'
apply plugin: 'com.github.johnrengelman.shadow'

group = 'com.example'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = 1.8
targetCompatibility = 1.8

repositories {
    mavenCentral()
}

dependencies {
    compile group: 'com.google.guava', name: 'guava', version: '21.0'
}

shadowJar {
    baseName = 'iscream'
    manifest {
        attributes 'Main-Class': 'com.example.iscream.Application'
    }
}
Now that the build knows how to find the project's dependencies, we can run gradle shadowJar to create a fat jar that includes the Guava dependency. After it completes, you should see build/libs/iscream-0.0.1-SNAPSHOT-all.jar, which can be run in the usual way (java -jar ...).


Gradle brings a lot of flexibility and power to the Java build ecosystem. Of course, there is always some danger with highly customizable tools--suddenly you have to be aware of code quality in your build file. This is not necessarily bad, but it is worth considering when evaluating how your team will use the tool. Furthermore, much of Gradle's power comes from third-party plugins. And since Gradle is relatively new, it still sometimes feels like you are using a bunch of plugins developed by SomeRandomPerson. You may find yourself comparing three plugins that ostensibly do the same thing, each with a few dozen GitHub stars and little documentation to boot. Despite these downsides, Gradle is gaining popularity and is particularly appealing to developers who like to have more control over their builds.

For a more in-depth comparison and other practical advice about the Java ecosystem, check out my book Java for the Real World.

Click here to get Java for the Real World!