
Configuration strategies and their consequences


This page collects a couple of strategies you might use to set up Abort-Mission. There is most probably no one-size-fits-all strategy we could offer: each of these has valid use cases where it can be considered advantageous. Please feel free to recommend additional strategy ideas by sharing them in the issues section!

Fail-fast(er)

Configuration

Use a single evaluator with a matcher broad enough to match all of your tests (a regular expression matching any class is ideal), combined with a low burn-in threshold and a low failure rate threshold.
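The sketch below shows roughly what such a configuration class could look like. It assumes Abort-Mission's MissionControl/AbortMissionCommandOps style of setup; the package, builder and threshold method names used here (anyClass, percentageBasedEvaluator, burnInTestCount, abortThreshold, registerHealthCheck) are assumptions for illustration, so please double-check them against the library's documentation.

```java
import io.github.nagyesta.abortmission.core.AbortMissionCommandOps;
import io.github.nagyesta.abortmission.core.MissionControl;
import io.github.nagyesta.abortmission.core.matcher.MissionHealthCheckMatcher;

public class FailFastAbortMissionConfig {

    // Entry point called during Abort-Mission setup (name and signature assumed).
    public static void configureAbortMission(final AbortMissionCommandOps ops) {
        // One broad matcher catching every test class (the "any class" regexp style match).
        final MissionHealthCheckMatcher anyClass = MissionControl.matcher()
                .anyClass()
                .build();
        // A single evaluator with a low burn-in count and a low failure threshold,
        // so the run starts aborting soon after the first few failures.
        ops.registerHealthCheck(MissionControl.percentageBasedEvaluator(anyClass)
                .burnInTestCount(5)   // assumed builder method: arm abort decisions after only a few tests
                .abortThreshold(10)   // assumed builder method: abort once roughly 10% of matched tests failed
                .build());
    }
}
```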

Use case

Use this when you want to stop testing after the first few failures and your tests are very similar. For example, when you include all of your integration tests because they depend on the exact same thing, e.g. they all load the same Spring context and initialize the same database or web service.

Note: Test frameworks and build tools tend to have an option to fail after the first failed test. If that is your only goal, setting that parameter is a lot simpler.

Pros

✔️ Can detect test initialization failures (which can be very time-consuming if the same failing context startup is repeated for a lot of tests)
✔️ Easy to do
✔️ Shows results early
✔️ Configurable thresholds, e.g. stop the test run only after a 5% or 10% failure rate

Cons

❌ Prediction is flawed as it cannot find out why the tests are failing (each test is just one increment on a counter)
❌ Missed opportunities: tests that could still pass are aborted together with the failing group

All-in

Configuration

Augment your tests with as many very specific matchers as you can think of. Define all your dependencies at least at the test class level, but feel free to refine them on individual methods if needed. If a test can use a dependency that might fail, we need to know that dependency by name!

Prefer compound matchers that match only the tests which really form the same category, and avoid the unnecessary abort decisions caused by overly generic matcher definitions.
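As a rough sketch of this strategy, the configuration class below registers a separate evaluator for each named dependency. The dependency names are hypothetical, and the matcher and builder method names (dependency, percentageBasedEvaluator, abortThreshold, registerHealthCheck) are assumptions used for illustration rather than confirmed API.

```java
import io.github.nagyesta.abortmission.core.AbortMissionCommandOps;
import io.github.nagyesta.abortmission.core.MissionControl;
import io.github.nagyesta.abortmission.core.matcher.MissionHealthCheckMatcher;

public class AllInAbortMissionConfig {

    public static void configureAbortMission(final AbortMissionCommandOps ops) {
        // One very specific matcher per named dependency (dependency names are hypothetical).
        final MissionHealthCheckMatcher springContext = MissionControl.matcher()
                .dependency("spring-context").build();
        final MissionHealthCheckMatcher database = MissionControl.matcher()
                .dependency("database").build();
        final MissionHealthCheckMatcher paymentApi = MissionControl.matcher()
                .dependency("payment-api").build();

        // Registering a separate evaluator per group lets each group use its own thresholds.
        ops.registerHealthCheck(MissionControl.percentageBasedEvaluator(springContext)
                .abortThreshold(5).build());
        ops.registerHealthCheck(MissionControl.percentageBasedEvaluator(database)
                .abortThreshold(10).build());
        ops.registerHealthCheck(MissionControl.percentageBasedEvaluator(paymentApi)
                .abortThreshold(25).build());
    }
}
```

On the test side, the same dependency names would then be referenced on the test classes (and refined on individual methods where needed) using Abort-Mission's annotations, so that each test counts towards the right evaluator.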

Use case

This setup is for those who have lots of dependencies and lots of tests, most of which show different behavior in smaller groups.

Pros

✔️ Test decisions will have a lot of data
✔️ You can define different thresholds for each test group
✔️ No missed opportunities
✔️ All test groups will have an opportunity to pass

Cons

❌ Needs quite a lot of effort
❌ ROI is questionable
❌ Time spent on failing tests remains high

Balanced

Configuration

Identify the tests with the most potential:

  • The ones that take a long time to set up and are repeated many times
  • The tests which are parameterized and fail slowly
  • The cases which tend to use unreliable external dependencies (e.g. a database, Selenium Grid, anything used over the network, etc.)

Mark the dependencies which can cause the most issues, the most pointless repetition or the largest amount of throw-away effort spent on test setup.

Configure one evaluator per dependency (preferably only matching when that dependency is used).
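A minimal sketch of this balanced setup is shown below, with the same caveat as before: the dependency names are hypothetical and the matcher and builder method names are assumptions for illustration, not confirmed API.

```java
import io.github.nagyesta.abortmission.core.AbortMissionCommandOps;
import io.github.nagyesta.abortmission.core.MissionControl;
import io.github.nagyesta.abortmission.core.matcher.MissionHealthCheckMatcher;

public class BalancedAbortMissionConfig {

    public static void configureAbortMission(final AbortMissionCommandOps ops) {
        // One evaluator per unreliable external dependency, matched only where it is used.
        final MissionHealthCheckMatcher seleniumGrid = MissionControl.matcher()
                .dependency("selenium-grid").build();
        final MissionHealthCheckMatcher database = MissionControl.matcher()
                .dependency("database").build();

        // The flaky UI grid tolerates a higher failure rate before aborting...
        ops.registerHealthCheck(MissionControl.percentageBasedEvaluator(seleniumGrid)
                .abortThreshold(25)
                .build());
        // ...while database failures are usually systemic, so those tests abort early.
        ops.registerHealthCheck(MissionControl.percentageBasedEvaluator(database)
                .abortThreshold(5)
                .build());
    }
}
```

Keeping only a handful of evaluators like this is what keeps the maintenance effort low while still letting each dependency fail independently.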

Use case

Can be ideal when the number of dependencies is low, for example:

  1. Testing a microservice depending on 1-3 others
  2. End-to-end UI tests running on a small number of well separated features/pages

Pros

✔️ The most impact for minimal effort
✔️ Easy to configure and maintain
✔️ You might be able to reuse already existing test categories, as the separation follows the features

Cons

❌ Needs the right kind of service/app to work well
❌ Can leave some performance on the table