Add abstractions for end-to-end tests. #273
Thanks for looking into this, @Lukas-Stuehrk. Apologies for not responding sooner.
I think your idea is well-motivated, but I'm not sure this is the right solution. You're absolutely correct that we would benefit from better testability around generation. We have decent unit test coverage of which symbols we should expect to be emitted, but little to no coverage of how they're rendered.
The concern I have with this particular abstraction is that it introduces enough complexity of its own that it becomes difficult to tell whether a failing test is a problem with the test or the functionality being tested. My personal philosophy is that test code should value clarity at the expense of repetition. A lot of folks follow the "Rule of three" for refactoring; when it comes to tests, I'd bump that up to the 7 – 10 range.
I think the duplication of logic, such as the various test-specific `Page` types, is a code smell that suggests that we could improve the test boundaries / testability of the `Page` types in the `SwiftDoc` module.
Here's what I'm thinking as far as a medium- / long-term end-to-end testing strategy:
- Make `SwiftDoc` page and generator types more testable, so that we can reason about what symbols they contain and how they'll be rendered
- In addition, use `CommonMark` and `Markup` parsers to test the actual generated `.md` and `.html` output, treating `swift-doc` as a black box
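The second bullet could be sketched roughly like this. This is a hypothetical test, not part of swift-doc's test suite: the binary path, the fixture location, and the output file name are all assumptions for illustration.

```swift
import XCTest
import Foundation

// Sketch of the black-box approach: run the swift-doc binary against a
// fixture package and inspect the generated Markdown. All paths below
// are assumptions, not actual locations in this repository.
final class EndToEndTests: XCTestCase {
    func testClassIsDocumented() throws {
        let outputDirectory = FileManager.default.temporaryDirectory
            .appendingPathComponent(UUID().uuidString)

        let process = Process()
        process.executableURL = URL(fileURLWithPath: ".build/debug/swift-doc") // assumed path
        process.arguments = [
            "generate", "Fixtures/Example/Sources", // assumed fixture
            "--module-name", "Example",
            "--output", outputDirectory.path
        ]
        try process.run()
        process.waitUntilExit()
        XCTAssertEqual(process.terminationStatus, 0)

        // Instead of substring matching, a CommonMark parser could be used
        // here to assert on document structure rather than exact text.
        let markdown = try String(
            contentsOf: outputDirectory.appendingPathComponent("SomeClass.md")
        )
        XCTAssertTrue(markdown.contains("SomeClass"))
    }
}
```

Because the test only touches the command-line interface and the written files, it would keep working even if the internal `Page` and generator types are reworked.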
Does that sound like a plan?
Well...I think I made up for it with my response time 😬. I'm very sorry.
I slightly disagree. We actually don't have decent unit test coverage of which symbols we should expect to be emitted. We have good unit tests for parsing and extracting the symbols, but recently we introduced a lot of changes which separate the symbols that are found from the symbols that are emitted in the output, making them diverge more and more. By doing this, we lost our unit test coverage of the emitted symbols. Before, every symbol which was found and kept was also emitted. This has already led to the first bugs, e.g. #264.
I'm not sure I agree that it's a duplication of logic. I think it's a valid approach to build abstractions for end-to-end tests which make it easier to reason about the inputs and outputs, and then it's kind of necessary to use the same names / domain language as the program itself. There are some examples of this in swift-argument-parser. However, I understand and agree that it adds complexity to the tests and that it will be more difficult to understand why a test failed. Maybe that's a price worth paying, but it's hard to assess without data.
I think this will be a challenge, because the page and generator types are part of the executable. Here's how I imagine this could look (in a far-away future): we would have an "abstract" representation of the pages, i.e. of what needs to be generated, without anything related to the Markdown or HTML rendering, and this would be fully unit tested. The generators would then take this abstract representation and create the output, and this again would be fully unit tested.
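A minimal sketch of that split, with hypothetical names (these are not swift-doc's actual types): an abstract page model that knows nothing about rendering, and a generator that is tested separately against it.

```swift
// Hypothetical abstract representation of a page: which symbols it
// contains, with no knowledge of Markdown or HTML.
struct PageModel {
    let title: String
    let symbolNames: [String]
}

// Generators consume the abstract model and produce output.
protocol PageGenerator {
    func render(_ page: PageModel) -> String
}

// Example generator: renders the model as Markdown.
struct MarkdownGenerator: PageGenerator {
    func render(_ page: PageModel) -> String {
        (["# \(page.title)"] + page.symbolNames.map { "- `\($0)`" })
            .joined(separator: "\n")
    }
}
```

Both halves can then be unit tested in isolation: the model for which symbols end up in a page, and each generator for how it renders a given model.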
But isn't this – more or less – what is achieved with this pull request? It tests the actual generated output of the `generate` subcommand.
This adds abstractions to remove much of the boilerplate code needed for writing end-to-end tests for the `generate` subcommand. It will make it easy to add regression tests in the future, which will most likely be required when we need to support new Swift versions. The tests don't rely on the internal data structures of `swift-doc`, which will be subject to change; therefore they will also come in handy when we rework the generators.

This PR also contains examples of how to use the abstractions. There's an example of a passing test and an example of a failing test. The failing test is only there to illustrate how test failures look in the GitHub Actions CI, and it needs to be removed before the PR is merged.
For now, the provided APIs cover only the most basic functionality: finding out whether symbols are included in the output. I will add more functionality if we decide to continue with this approach.
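For illustration only, here is a self-contained sketch of the kind of assertion surface such abstractions might expose. These names are hypothetical, not the PR's actual API:

```swift
// Hypothetical result type for a documentation-generation run. It only
// illustrates the idea of asserting on generated output without touching
// swift-doc's internal data structures; the PR's real API may differ.
struct GeneratedDocumentation {
    let pages: [String: String]  // file name -> rendered Markdown

    // Checks whether any generated page documents the given symbol,
    // approximated here by looking for a top-level heading.
    func includesSymbol(named name: String) -> Bool {
        pages.values.contains { $0.contains("# \(name)") }
    }
}

let docs = GeneratedDocumentation(pages: ["Foo.md": "# Foo\n\nA class."])
assert(docs.includesSymbol(named: "Foo"))
assert(!docs.includesSymbol(named: "Bar"))
```

A test written against such a type only depends on the rendered output, so reworking the generators would not require rewriting the tests.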