Add docs for per key inference #28243
Conversation
R: @tvalentyn

Stopping reviewer notifications for this pull request: review requested by someone other than the bot, ceding control
Very nice, thanks, @damccorm! Consider getting a review from TW folks before merging as well.
```
keyed_model_handler = KeyedModelHandler(mhs, max_models_per_worker_hint=2)
```

The previous example will load at most 2 models per worker at any given time, and will unload models that aren't …
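For context, a minimal sketch of how `mhs` could be assembled for per-key inference; the sklearn handlers, key names, and model paths below are illustrative assumptions, not taken from this PR:

```python
# Sketch only: handler types, keys, and model URIs are hypothetical.
from apache_beam.ml.inference.base import KeyedModelHandler, KeyModelMapping
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Map groups of keys to the model handler that should serve them.
mhs = [
    KeyModelMapping(['en_1', 'en_2'],
                    SklearnModelHandlerNumpy(model_uri='english_model.pkl')),
    KeyModelMapping(['de_1'],
                    SklearnModelHandlerNumpy(model_uri='german_model.pkl')),
]

# Hint that at most 2 distinct models should be kept in memory per worker.
keyed_model_handler = KeyedModelHandler(mhs, max_models_per_worker_hint=2)
```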
> The previous example will load at most 2 models per worker at any given time

Have we tried running multi-key inference under load, for example with 10 models and many examples but only 1 model fitting in memory? We could try that with and without GBK.
I haven't yet, but it's a good idea I can follow up with.
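For illustration, such a load test might look roughly like the sketch below; the model URIs, key names, example counts, and the sklearn handler choice are all hypothetical, and the commented-out GroupByKey shows the with-GBK variant:

```python
# Hypothetical load test: 10 per-key models, many examples, and a low
# max_models_per_worker_hint so models must be swapped in and out of memory.
import numpy as np
import apache_beam as beam
from apache_beam.ml.inference.base import (KeyedModelHandler, KeyModelMapping,
                                            RunInference)
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

mhs = [
    KeyModelMapping(
        [f'key_{i}'],
        SklearnModelHandlerNumpy(model_uri=f'gs://my-bucket/model_{i}.pkl'))
    for i in range(10)
]
keyed_handler = KeyedModelHandler(mhs, max_models_per_worker_hint=1)

with beam.Pipeline() as p:
    examples = p | 'MakeExamples' >> beam.Create(
        [(f'key_{i % 10}', np.array([float(i)])) for i in range(100_000)])
    # With-GBK variant: group examples by key first, then fan back out to
    # (key, example) pairs so each key's examples arrive together.
    # examples = (examples
    #             | 'GBK' >> beam.GroupByKey()
    #             | 'Ungroup' >> beam.FlatMap(
    #                 lambda kv: [(kv[0], x) for x in kv[1]]))
    _ = (examples
         | 'RunInference' >> RunInference(keyed_handler)
         | 'Log' >> beam.Map(print))
```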
> load at most 2 models per worker

Users might perceive it as a guarantee and come to us if/when they see a single OOM error. If this cannot be guaranteed, we could phrase it as an upper ceiling that is enforced on a best-effort basis, or mention that there may be some delay between when a model is unloaded and when the memory is freed.
I mentioned that there's some delay, let me know if you think the wording is ok
…damccorm/keyedMhDocs
…orm/beam into users/damccorm/keyedMhDocs
retest this please

Run Portable_Python PreCommit
Run Python PreCommit
Run PythonDocker PreCommit
Run PythonDocs PreCommit
Run Python_Coverage PreCommit
Run Python_Dataframes PreCommit
Run Python_Examples PreCommit
Run Python_Integration PreCommit
Run Python_PVR_Flink PreCommit
Run Python_Runners PreCommit
Run Python_Transforms PreCommit
Run Website PreCommit
Run Website_Stage_GCS PreCommit
Codecov Report

| Coverage Diff | master  | #28243  | +/- |
|---------------|---------|---------|-----|
| Coverage      | 72.34%  | 72.35%  |     |
| Files         | 682     | 682     |     |
| Lines         | 100536  | 100541  | +5  |
| Hits          | 72737   | 72747   | +10 |
| Misses        | 26221   | 26216   | -5  |
| Partials      | 1578    | 1578    |     |

Flags with carried forward coverage won't be shown.

... and 6 files with indirect coverage changes.
R: @rszper
Co-authored-by: Rebecca Szper <[email protected]>
* Update KeyMhMapping to KeyModelMapping
* Add docs for per key inference
* Add piece on memory thrashing
* Whitespace
* Update wording based on feedback
* Add references to website in pydoc
* Apply suggestions from code review

  Co-authored-by: Rebecca Szper <[email protected]>

* Remove ordering implied by wording
* Lint fixes

Co-authored-by: Rebecca Szper <[email protected]>
This should not be merged until release 2.51 is finalized.
✨ RENDERED
Part of #27628
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

* Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
* Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI, or the workflows README for a list of phrases that trigger workflows.