Boost Test Adapter Design
The Boost Unit Test Adapter allows users to discover and execute Boost.Test test cases from within the Visual Studio IDE. The adapter abides by Visual Studio's Microsoft.VisualStudio.TestPlatform.ObjectModel specifications which, in brief, expect an implementation to list the tests available within a compiled module and to execute tests individually given either a module or a test case specification as input. For more information regarding Visual Studio's test platform, refer to the Visual Studio Test Platform Primer article.
The following is a high-level description of how the Boost Unit Test Adapter handles test discovery and execution for native C++ modules containing Boost.Test test cases.
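For context, the kind of native module the adapter targets is an executable (or dll) that registers its tests through the Boost.Test macros. The following is a minimal, illustrative example of such a module; the module, suite and test names are invented for the sake of the example:

```cpp
// Illustrative only: a minimal Boost.Test module of the kind the adapter discovers.
#define BOOST_TEST_MODULE SampleModule
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_SUITE(ArithmeticSuite)

BOOST_AUTO_TEST_CASE(AdditionWorks)
{
    BOOST_TEST(1 + 1 == 2);
}

BOOST_AUTO_TEST_CASE(SubtractionWorks)
{
    BOOST_TEST(3 - 1 == 2);
}

BOOST_AUTO_TEST_SUITE_END()
```

Compiled as an exe, such a module is what the discovery steps described below inspect.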
Once an exe or dll is successfully compiled and filtered by path, the Boost Unit Test Adapter needs to verify whether the module contains Boost.Test test cases. Test discovery is not limited to a single approach and multiple approaches should be available. Should one approach fail to discover tests, the adapter should fall back to other discovery strategies.
An appropriate discovery strategy is first identified for each module in a given set; these strategies are then applied.
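Conceptually, this selection amounts to trying each discoverer in order of preference until one accepts the module and yields tests. The sketch below illustrates that fallback loop only; the adapter itself is a managed Visual Studio extension, and the IDiscoverer interface, DiscoverTests function and ListContentDiscoverer type shown here are hypothetical names used purely for illustration:

```cpp
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Hypothetical interface: each discovery strategy decides whether it can handle
// a module and, if so, attempts to list the test cases contained within it.
struct IDiscoverer {
    virtual ~IDiscoverer() = default;
    virtual bool Supports(const std::string& module) const = 0;
    virtual std::vector<std::string> Discover(const std::string& module) = 0;
};

// Try the discoverers in order of preference; fall back to the next strategy
// whenever one does not support the module or fails to produce any tests.
std::vector<std::string> DiscoverTests(
    const std::string& module,
    const std::vector<std::unique_ptr<IDiscoverer>>& discoverers)
{
    for (const auto& discoverer : discoverers) {
        if (!discoverer->Supports(module)) {
            continue;
        }
        auto tests = discoverer->Discover(module);
        if (!tests.empty()) {
            return tests;  // first successful strategy wins
        }
    }
    return {};  // no strategy could discover tests in this module
}

// Dummy strategy used only to exercise the loop above.
struct ListContentDiscoverer : IDiscoverer {
    bool Supports(const std::string&) const override { return true; }
    std::vector<std::string> Discover(const std::string&) override {
        return {"ArithmeticSuite/AdditionWorks"};
    }
};

int main()
{
    std::vector<std::unique_ptr<IDiscoverer>> discoverers;
    discoverers.push_back(std::make_unique<ListContentDiscoverer>());
    for (const auto& test : DiscoverTests("SampleModule.exe", discoverers)) {
        std::cout << test << '\n';
    }
}
```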
Currently, two main approaches to discovery (hereafter referred to simply as discoverers) are defined.
The Boost.Test 3 test runner allows its test cases to be listed via the --list_content command line argument. A module is first tested to see whether this functionality is available and, if so, the module is registered with this discovery approach. Only modules which are deemed safe, that is, modules which contain the --list_content markers (help text, environment variable identifier, etc.), are taken into consideration, but this behaviour can be overridden via the <ForceBoostVersion> configuration value, which is ideal for cases where Boost.Test is dynamically linked. DOT test listings are then parsed via an ANTLR4-generated grammar and later reported to the Visual Studio APIs.
As from the Boost.Test version available in Boost 1.63, the module is also tested for its Boost version. This allows the adapter to enable or disable certain features based on the stated Boost version.
Via the test adapter configuration, users can specify an external Boost.Test runner which is responsible for listing and executing tests. If an external discoverer is configured, it takes precedence over the 'internal' ones. External test runners also make use of the list content discovery mechanism.
All tests discovered by any of the above approaches are then reported to the Visual Studio APIs. Test suite information, label information and enabled status are also provided to Visual Studio so that test cases can be categorized via said traits. Additional properties, such as the Boost version, are serialized as Visual Studio test case properties (which are transparent to the end user) to allow for interaction between the discovery and execution phases.
Once tests are discovered from a module, the Boost Test Adapter attempts to execute the test cases. Unless specified otherwise via the <TestBatchStrategy> configuration value, each test case is executed individually in a separate process so that test results can be communicated back to Visual Studio immediately. Results generated by the process are later parsed and sent to Visual Studio. Note that, as from Visual Studio 2015 Update 1, tests may also be executed in parallel.
Test modules are executed as a separate process or, possibly, within a Visual Studio debug context should the user choose to debug a test. On completion, test results (located within the user's temporary folder) are parsed and communicated to Visual Studio. This process normally runs within the directory where the module is located, i.e. the working directory is the directory in which the executable resides, and uses the general environment variables. These two properties can be modified via Visual Studio's Debugging property page or, in the case of the working directory, via the .runsettings configuration.
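As a rough illustration of what such an invocation involves, the sketch below assembles a Boost.Test command line that runs a single test and directs XML log and report output to files that can later be parsed. The helper name and file paths are invented, and the availability of some arguments (notably --log_sink and --report_sink) depends on the Boost version in use:

```cpp
#include <iostream>
#include <string>

// Assemble an illustrative command line that runs a single Boost.Test case and
// emits XML results for later parsing. Treat the argument set as a sketch:
// --log_sink/--report_sink support varies with the Boost version.
std::string BuildRunCommand(const std::string& modulePath,
                            const std::string& testFullName,  // e.g. "ArithmeticSuite/AdditionWorks"
                            const std::string& logFile,
                            const std::string& reportFile)
{
    return "\"" + modulePath + "\""
        + " --run_test=" + testFullName
        + " --log_format=XML --log_level=all --log_sink=\"" + logFile + "\""
        + " --report_format=XML --report_level=detailed --report_sink=\"" + reportFile + "\"";
}

int main()
{
    std::cout << BuildRunCommand("./SampleModule",
                                 "ArithmeticSuite/AdditionWorks",
                                 "/tmp/SampleModule.log.xml",
                                 "/tmp/SampleModule.report.xml")
              << '\n';
}
```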
Executing test cases individually can be slow, especially for code coverage, since symbols need to be reloaded for every process spawn. In an attempt to minimize this cost, tests are grouped using the most suitable test batching strategy, meaning that, ideally, more than one test case is executed in one go. The rationale is that users choosing to analyze code coverage are more interested in the coverage results than in the individual test results.
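As one possible batching scheme, the sketch below groups fully-qualified test names by their top-level suite so that each suite can be executed in a single process via a --run_test filter. The helper and the test names are illustrative and do not correspond to the adapter's actual batching implementations:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Group fully-qualified test names (e.g. "ArithmeticSuite/AdditionWorks") by
// their top-level suite so that each suite can be run in a single process.
std::map<std::string, std::vector<std::string>>
BatchBySuite(const std::vector<std::string>& tests)
{
    std::map<std::string, std::vector<std::string>> batches;
    for (const auto& test : tests) {
        const auto slash = test.find('/');
        const std::string suite =
            (slash == std::string::npos) ? test : test.substr(0, slash);
        batches[suite].push_back(test);
    }
    return batches;
}

int main()
{
    const std::vector<std::string> tests = {
        "ArithmeticSuite/AdditionWorks",
        "ArithmeticSuite/SubtractionWorks",
        "StringSuite/ConcatWorks",
    };

    // Each batch becomes one process invocation, e.g. --run_test=ArithmeticSuite
    for (const auto& [suite, members] : BatchBySuite(tests)) {
        std::cout << "--run_test=" << suite
                  << "  (" << members.size() << " tests)\n";
    }
}
```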