Why Trial and why not Cloud Cafe

Samuel A. Falvo II edited this page Apr 21, 2015 · 2 revisions

We, the QE engineers working with Otter, have dropped CloudCafe in favor of more direct integration with Trial for the following reasons. Note that this is a brain-dump and is not exhaustive, as I'm sure there are details I'm missing or forgetting.

Code Navigation Issues

With CloudCafe, you had several places to distribute code to implement any given feature. Besides the test itself, you also had clients, behaviors, and other artifacts. Code didn't always fit cleanly into just one class, so functionality that I thought was client behavior would actually be found in a Behavior class, or vice versa, or, worse, in both places concurrently.

While working on tests for Otter, we often had more than 7 editor windows open at once, looking at different parts of the overall framework (some in CloudCafe/OpenCafe itself, some in CloudRoast, and similarly for Otter-specific files). It felt like coding Java in Eclipse; I would routinely get lost.

With Trial, you have no more than two windows open for any given feature you're interested in: one is the test you're writing, and the other is the class definition containing your supported feature. So if you ever do have 7 windows open when working with Trial-based tests, you can be assured that six of those windows are relevant to six different aspects of your test. You don't have to trace what's relevant throughout a large collection of undocumented source.

Inadequate to Non-Existent Logging

When I was attempting to maintain Otter's CloudRoast tests, we needed to write diagnostics to a log. However, every attempt resulted in no output at all. Every. Single. One.

Trial-based tests can use the logging support that comes standard with Trial, which lets you generate diagnostics that appear in the log file Trial creates when it runs. I've tested this -- it works.

Long Delays in Completing Code

Everything had to be done the CloudCafe way. Everything. Combined with the complete lack of documentation and my seeming inability to get good answers from its core committers, I just couldn't work productively. Most of my time went to R&D just to figure out how to use CloudCafe: data serialization, the constant need to hand-craft mappers from JSON to Python objects, and so on.

With Trial, we use library classes that hook directly into the Trial run-time, which has ample documentation on the web. For web I/O, we use the treq package, which provides convenient access to JSON serialization. There's no complex logic for XML representation, a format that has definitively fallen out of favor in every Rackspace product I'm aware of. (Also, I'm fairly certain Repose offers JSON-to-XML mapping, or vice versa, yes? I seem to recall hearing about this at RAX.IO when I attended back in 2013.)
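To make the contrast with hand-crafted mappers concrete, here's a minimal sketch (the payload shape is illustrative, loosely modeled on a Nova "show server" response). JSON deserializes straight into plain Python dicts and lists -- the same decode step `treq.json_content` performs on a response body -- so no mapper class is needed:

```python
import json

# A server record as it might come off the wire (illustrative data).
payload = (
    '{"server": {"id": "abc123", "status": "ACTIVE",'
    ' "addresses": {"private": ["10.0.0.4"]}}}'
)

# One call replaces an entire hand-written JSON-to-Python mapper class:
# the result is ordinary dicts and lists, ready to assert against.
server = json.loads(payload)["server"]
print(server["status"])                  # ACTIVE
print(server["addresses"]["private"])    # ['10.0.0.4']
```

With treq, the equivalent is `body = yield treq.json_content(response)`, which does the read-and-decode for you.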

Parallelization

We never could figure out a good way to run tests in parallel with CloudCafe. At best, you could run different test classes in parallel, but the tests inside those classes all ran sequentially. This doesn't work in practice; we actually need individual tests to run in parallel.

Trial supports test-level concurrency directly, out of the box. To attain the same effect with CloudCafe, we'd have to rewrite all the tests we wanted to run in parallel into separate, distinct classes, so that there'd be no more than one or two tests per class. That kind of defeats the value of a "test suite" as a concept.
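For reference, the out-of-the-box mechanism is Trial's `--jobs` flag (the disttrial runner, available since Twisted 12.3), which fans tests out across worker processes; the module path below is illustrative:

```shell
# Run the suite across 4 worker processes; each worker writes its own
# log under _trial_temp. "otter.test" stands in for the real package.
trial --jobs 4 otter.test
```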

Summary

To summarize, Trial is just plain simpler. Even counting the time to hand-write my own libraries for Identity, Cloud Load Balancer, and Autoscale, with partial support for Nova, plus the tests themselves, it took me less time than getting even one test running successfully (let alone correctly) under CloudCafe. I'm now at a point where I can write up to two tests per day with Trial (amortized to one test per day, given the need to respond to code-review feedback). With CloudCafe, I'd be lucky to get a test written in three weeks.