
Try out other models. #23

Open
thepushkarp opened this issue Sep 19, 2021 · 0 comments
Labels: help wanted (Extra attention is needed)

Comments

@thepushkarp (Owner) commented Sep 19, 2021

Currently, we are using multi-qa-MiniLM-L6-cos-v1, which encodes about 14,200 sentences/sec on a single V100 GPU and has a model size of 80 MB. We should try other models to see whether any of them give us better performance or speed.
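For concreteness, here is a minimal benchmarking sketch using the sentence-transformers API; the candidate model names and the synthetic workload are illustrative assumptions, not decisions:

```python
# Minimal sketch for comparing encoding throughput across candidate
# sentence-transformers models. Candidate names and the dummy workload
# below are assumptions for illustration only.
import time
from sentence_transformers import SentenceTransformer

CANDIDATES = [
    "multi-qa-MiniLM-L6-cos-v1",   # current model
    "multi-qa-mpnet-base-dot-v1",  # larger, typically higher quality
    "all-MiniLM-L6-v2",            # general-purpose alternative
]

sentences = ["How do I reset my password?"] * 1000  # dummy workload

for name in CANDIDATES:
    model = SentenceTransformer(name)
    start = time.perf_counter()
    model.encode(sentences, batch_size=64, show_progress_bar=False)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(sentences) / elapsed:.0f} sentences/sec")
```

A proper comparison would also measure retrieval quality on our own queries, not just throughput, since the faster models tend to trade off accuracy.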

We could also experiment with other types of tokenizers; see the sketch below.
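A hedged sketch for inspecting tokenizer behavior follows (the checkpoint names are assumptions). Note that in sentence-transformers the tokenizer is tied to the underlying transformer checkpoint, so trying a different tokenizer in practice usually means trying a different base model:

```python
# Sketch: compare how different tokenizers split the same text.
# Checkpoint names are assumptions for illustration.
from transformers import AutoTokenizer

text = "We should try out other models to see if we can get better performance."

for checkpoint in [
    "sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
    "bert-base-uncased",
]:
    tok = AutoTokenizer.from_pretrained(checkpoint)
    ids = tok(text)["input_ids"]
    print(f"{checkpoint}: {len(ids)} tokens")
    print(tok.convert_ids_to_tokens(ids))
```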

Further reading:

@thepushkarp added the help wanted label on Oct 9, 2021