Add hover and completion tests #45
Conversation
…ure, specialize tests based on the token kind
Looks good! 😄
One change I would like to see is a short description of how you run these tests and how you might add a new test. Preferably, under `tests/README.md`.
That makes sense. Agreed.
… types for better discoverability and type validation
Thanks for checking out the code! I had to rewrite the … Sorry, I'm a bit busy this week, so I didn't have time to make a `tests/README.md` yet.
Alright, it might be a little overkill, but there you go: `tests/README.md`.
Haha, that's awesome work! Should be a huge help!
…union of expected_* module types
I can't seem to see which tests are failing. If I add the … flag I get a bit more output, but that doesn't really explain why a test is failing (what did the test expect and what was found?). What is the proper way to invoke the tests?
xfail/xfailed tests are the ones that are "expected to fail", so they don't trigger a report, since that is the expected behavior. I don't think there's a native way to capture the expected/result values from an xfail, but we can embed our own string alongside the xfail marker and have it show up in the summary when you run `pytest -rx`. You can also always find them marked as `x`/`XFAIL` in the output.

You should be able to see expected vs result in most tests if they actually fail, but I'm going to think about how to communicate that better in cases where we don't just compare values, but instead test for a property like "expected should be a subset of result". It's actually pretty dumb of me that I didn't think about it, since I'd just get that info by dumping locals with `--showlocals`.
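To illustrate the above, here is a minimal, hypothetical example of that pattern; the test body and the reason string are made up and not taken from this PR:

```python
import pytest


# Mark a known-broken case as "expected to fail" and embed our own string as
# the reason, so it is shown in pytest's xfail summary.
@pytest.mark.xfail(reason="hover on builtins does not include the overload list yet")
def test_hover_on_builtin():
    expected = "vec4 texture(sampler2D, vec2)"
    result = ""  # stand-in for the actual LSP hover response
    assert expected in result


# Invoke with `pytest -rx` to get a summary of xfailed tests and their reasons;
# they also appear as `x` / `XFAIL` in the regular progress output.
```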
I see, that makes sense. I guess it's fine in that case 😄
Yes, that's reasonable to add at some point! That said, I'm willing to merge this in its current state; feel free to open a new PR if you have any extensions! 🐱
Thanks, yeah, I have a few ideas, but I've been too busy to think them through, sorry. It's not as bad right now, though. Currently, if we test these properties of expected/result and the test fails, the traceback reports the failing assert itself, and thanks to the aptly named boolean values you can tell which condition failed. Unfortunately, the values of expected/result will not be printed by default, but I think for now it's good enough to run pytest with `--showlocals` (`-l`).
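A minimal sketch of what that assert pattern could look like (the names are hypothetical, not the PR's actual code):

```python
def check_hover(expected: set, result: set) -> None:
    # Name each condition so a failing assert's traceback already tells you
    # which property was violated, even before you look at the values.
    expected_is_subset_of_result = expected <= result
    result_is_not_empty = len(result) > 0

    assert expected_is_subset_of_result and result_is_not_empty
```

Running `pytest -l` (short for `--showlocals`) then prints the local variables of the failing frame, which recovers the actual `expected`/`result` values.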
This PR implements a mini testing framework for hover and autocomplete for glsl_analyzer. It is based on the `pytest` and `pytest-lsp` packages. Some points about the design are outlined below. Should help resolve #26.
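For context, a test client for a server like glsl_analyzer is typically wired up with a pytest-lsp fixture along these lines. This is a generic sketch based on pytest-lsp's documented API, not the code in this PR; the server command, sample file URI, and position are assumptions, and the exact fixture API can vary between pytest-lsp versions:

```python
import pytest
import pytest_lsp
from lsprotocol.types import (
    ClientCapabilities,
    HoverParams,
    InitializeParams,
    Position,
    TextDocumentIdentifier,
)
from pytest_lsp import ClientServerConfig, LanguageClient


@pytest_lsp.fixture(
    # Assumption: this is how glsl_analyzer is launched as an LSP server.
    config=ClientServerConfig(server_command=["glsl_analyzer"]),
)
async def client(lsp_client: LanguageClient):
    # A real suite would advertise proper client capabilities here.
    await lsp_client.initialize_session(
        InitializeParams(capabilities=ClientCapabilities())
    )
    yield
    await lsp_client.shutdown_session()


@pytest.mark.asyncio
async def test_hover(client: LanguageClient):
    # Hypothetical sample file and position; not taken from the PR.
    result = await client.text_document_hover_async(
        params=HoverParams(
            text_document=TextDocumentIdentifier(uri="file:///path/to/sample.glsl"),
            position=Position(line=4, character=2),
        )
    )
    assert result is not None
```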
@nolanderc, please review this one.
Start with `lsp_testing_input.py`; it shows how you define tests in terms of expectations. Take note of how `ExpectFail` and `TokenKind` are used to control the flow of tests. Then move on to `testing_utils.py` to see the primitives used for wrapping data and communicating testing conditions. Lastly, `test_lsp.py` contains the testing logic itself, arguably the ugliest part of this whole thing, but it only has to be written (at most) once per request type/token combination. A hypothetical sketch of such an expectation table is included at the end of this description.

I also added the GLSL samples that I wrote for these tests into this repo as part of the PR. I realized that submodules are probably not the best idea for the kind of workflow these tests are aimed at. Updating the tests with submodules happens in the following steps:

1. Modify or add samples in glsl-samples.
2. Open a PR against glsl-samples.
3. Wait for it to be reviewed and merged.
4. Bump the glsl-samples submodule pointer in glsl_analyzer.
5. Open a PR against glsl_analyzer with the updated tests.
Now imagine that in the process of this glsl_analyzer PR a bug was found; we fixed it, and we now want to add a test for that bug as part of the same PR. We have to do steps 1-5 again. This is cumbersome, and it will come up every time we want to update the tests. Also, if the CI is set up to run the tests, you are always forced to update them; even for a bugfix, you would have to remove the corresponding `ExpectFail`.

I suggest that we keep most of the tests we write in this repo and reserve glsl-samples for tests that we dump with no intention to modify or extend them frequently.
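To make the walkthrough above a bit more concrete, here is a purely hypothetical sketch of the kind of declarative expectation table `lsp_testing_input.py` could contain; the actual names, fields, and structure in this PR will differ:

```python
from dataclasses import dataclass
from enum import Enum, auto


class TokenKind(Enum):
    # Illustrative token categories; the PR's TokenKind is used to pick which
    # checks apply to a given test case.
    identifier = auto()
    builtin = auto()
    keyword = auto()


@dataclass
class HoverExpectation:
    # One hover test case expressed as data rather than code.
    path: str                   # GLSL sample file the request targets
    line: int
    column: int
    token_kind: TokenKind
    expected_substring: str     # text expected somewhere in the hover response
    expect_fail: bool = False   # plays a role analogous to ExpectFail in the PR


hover_expectations = [
    HoverExpectation("samples/basic.glsl", 10, 4, TokenKind.identifier, "vec3 normal"),
    HoverExpectation("samples/builtins.glsl", 3, 8, TokenKind.builtin, "texture",
                     expect_fail=True),
]
```

A module like `test_lsp.py` would then iterate over such a table, issue the corresponding LSP requests, and specialize its checks based on `token_kind`.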