
Digging into the Database

Paolo "Nusco" Perrotta edited this page Sep 19, 2015 · 2 revisions

If you run Narjillos with the -s switch, the program saves the experiment to an .exp file. The file is actually a SQLite database.

You probably don't need to look inside the database, but if you wish, you can do so with any SQLite client. Here is how the data is structured and saved.
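For example, Python's built-in sqlite3 module can open an .exp file directly. A minimal sketch (the file name in the usage note is made up):

```python
import sqlite3

def list_tables(path):
    """Open a Narjillos .exp file (a plain SQLite database) and
    return the names of the tables it contains."""
    db = sqlite3.connect(path)
    try:
        # sqlite_master is SQLite's built-in schema catalog.
        rows = db.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
    finally:
        db.close()
    return [name for (name,) in rows]
```

On a saved experiment, something like `list_tables("my_experiment.exp")` should include the three tables described below.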

  1. The DNA table contains all the DNA strands generated in the pool. They are saved as soon as they're generated, so this table is always up to date with the very latest event in your experiment.

  2. The HISTORY table contains statistics for the whole ecosystem - such as the number of creatures, the amount of food, and the like. It's updated with a new record every 1000 ticks (which means every handful of seconds).

  3. The EXPERIMENT table contains a single row holding the current state of the experiment, serialized to a large JSON blob. It's saved every ten minutes.
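A quick way to see these save policies in action is to count the rows in each table. A sketch along the same lines (the table names come from the description above; everything else is an assumption):

```python
import sqlite3

def table_counts(path):
    """Return the number of rows in each of the three tables.
    DNA grows fastest, HISTORY gains one row per 1000 ticks,
    and EXPERIMENT normally holds exactly one row."""
    db = sqlite3.connect(path)
    try:
        return {
            table: db.execute(f"SELECT count(*) FROM {table}").fetchone()[0]
            for table in ("DNA", "HISTORY", "EXPERIMENT")
        }
    finally:
        db.close()
```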

When you pick up an existing experiment, the program de-serializes it from the JSON blob in EXPERIMENT and then continues the experiment, generating updates for the DNA and HISTORY tables. Because of the different save policies for the three tables, the content of EXPERIMENT will generally lag up to 10 minutes behind the content of HISTORY, which in turn will lag up to 999 ticks behind the content of DNA. This is not a problem, because experiments are strictly deterministic: even if an experiment generates a HISTORY or DNA update that is already in the database, the update will contain the exact same data (with a few irrelevant differences, like the RUNNING_TIME column in the HISTORY table), and it will be quietly skipped.

There is one last wrinkle in the EXPERIMENT table. If you ever see two rows in this table, it's because the experiment was interrupted right in the middle of a periodic save. Narjillos uses this temporary redundancy to avoid losing data if the program is interrupted during the long blob-saving update. Don't worry: the program automatically cleans up the potentially half-saved second row when you load the experiment. Just pick up the experiment again, and everything will be clean.
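If you're curious whether a save was interrupted, you can count the rows yourself. A sketch (the function name is made up, and loading the experiment remains the proper way to clean up):

```python
import sqlite3

def interrupted_mid_save(path):
    """True when the EXPERIMENT table holds the temporary second row
    left behind by a periodic save that never finished."""
    db = sqlite3.connect(path)
    try:
        count = db.execute("SELECT count(*) FROM EXPERIMENT").fetchone()[0]
    finally:
        db.close()
    return count > 1
```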
