Lessons learned from v1.0.0 release #279

Open · 7 of 10 tasks
marc-white opened this issue Dec 2, 2024 · 4 comments
Comments

@marc-white (Collaborator) commented Dec 2, 2024

This issue is designed to record lessons learned from the v1.0.0 release process, and to track actions against those lessons.

@rbeucher (Member) commented Dec 2, 2024

Thanks @marc-white

@charles-turner-1 (Collaborator)

> There appear to be no post-build tests suggested for the main catalog, beyond "Run some random notebooks and see what breaks". Consideration should be given to defining some tests that a developer can run to check the health of the new catalog (perhaps even just a Jupyter notebook of assorted commands to check).

Could we use/modify/extend the end-to-end tests for this?

@marc-white (Collaborator, Author) commented Dec 2, 2024

> There appear to be no post-build tests suggested for the main catalog, beyond "Run some random notebooks and see what breaks". Consideration should be given to defining some tests that a developer can run to check the health of the new catalog (perhaps even just a Jupyter notebook of assorted commands to check).
>
> Could we use/modify/extend the end-to-end tests for this?

That's one possibility, although I was wondering how to set up a test suite that interrogates a 'live' catalog, rather than some minimal test case/mock.

What I actually had in mind was a simple Jupyter notebook that runs a lot of really basic sanity-check commands and lets you look over the outputs for anything odd. For example, comparing the new catalog against a previous version (that's how I noticed all of the missing experiments in PR #281). Given that each catalog update will have something unique to it, I'm not sure there's a way you can realistically automate testing of that.
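
For concreteness, here is a minimal sketch of the kind of comparison cell such a notebook could contain. It assumes both builds are available on disk as intake-dataframe-catalog files (so the `intake.open_df_catalog` driver is installed) and that experiments are listed under a `name` column; the file paths are placeholders, not real locations.

```python
# Hedged sketch: the paths, the "name" column, and the open_df_catalog driver
# are assumptions about how the two catalog builds are stored and read.
import intake

old_cat = intake.open_df_catalog("/path/to/previous/catalog.csv")   # last released build
new_cat = intake.open_df_catalog("/path/to/candidate/catalog.csv")  # new candidate build

old_experiments = set(old_cat.df["name"])
new_experiments = set(new_cat.df["name"])

# Experiments that vanished between builds are the first thing to eyeball
print("Missing from new catalog:", sorted(old_experiments - new_experiments))
print("New in this catalog:", sorted(new_experiments - old_experiments))
```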

@charles-turner-1 (Collaborator)

Yeah, good call.

Elastic have a tool called nbtest which would be perfect for this situation: we build a notebook with a bunch of expected outputs, and we can use nbtest to ensure the outputs stay consistent.
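
Whichever runner ends up executing it, the notebook body could largely be cheap assertions against the live catalog. A rough sketch, assuming the catalog is registered with intake as `intake.cat.access_nri` and exposes a `.df` table with a `name` column plus a `search()` method; the experiment name and row-count threshold below are purely illustrative.

```python
import intake

cat = intake.cat.access_nri  # live catalog (assumed registration name)

# The catalog should not shrink unexpectedly between releases
assert len(cat.df) > 1000, f"catalog looks suspiciously small: {len(cat.df)} rows"

# An experiment we know should always be present (placeholder name)
expected = "some_known_experiment"
assert expected in set(cat.df["name"]), f"missing expected experiment: {expected}"

# Searching should still return results for a known experiment
assert len(cat.search(name=expected).df) > 0, "search returned no results"
```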
