
Add nvidia provider #579

Merged: 19 commits merged into jupyterlab:main on Feb 1, 2024

Conversation

@stevie-35 (Contributor) commented Jan 12, 2024

This PR closes #568.

Notes/questions for reviewers:

  • Will notebooks like commands.ipynb be rerun so that nvidia shows up in the %ai list and other outputs?
  • Would you like me to edit the changelog with this PR or will someone else take care of that?

Notes for NVIDIAns

  • @jdye64 could you take a quick look at this to review the naming conventions and docs (e.g., nvidia-chat vs. just nvidia), or tag someone else who can?

Thanks!


welcome bot commented Jan 12, 2024

Thanks for submitting your first pull request! You are awesome! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please make sure you followed the pull request template, as this will help us review your contribution more quickly.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say hi! 👋

Welcome to the Jupyter community! 🎉

@JasonWeill added the enhancement (New feature or request) label on Jan 12, 2024
@JasonWeill (Collaborator) commented

@stevie-35 Thank you so much for opening this pull request! Our team will take a look at it as soon as we can. The changelog is generated by our release workflows, so you don't have to modify it. Our sample notebooks like commands.ipynb are not automatically updated; when I make a change to a command, I typically rerun the notebook myself and add it to my branch.
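
For reference, re-executing a sample notebook in place can be done with nbconvert; the path below is an assumption about where commands.ipynb lives in this repo:

    jupyter nbconvert --to notebook --execute --inplace examples/commands.ipynb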

@JasonWeill (Collaborator) left a review comment

Thank you so much for opening this request! I signed up for an API key and I was able to make a request against a couple of different models. I was a little confused about the sign-up process: I generated a key for the Llama 13B model in the NVIDIA catalog, but I was able to make requests against other models with both the magic commands and the chat UI. Is this intended behavior? If so, it might be good to note this in the docs.

Review comment on docs/source/users/index.md (outdated, resolved)
@stevie-35 (Contributor, Author) commented

@JasonWeill Thanks for reviewing! RE: "I was able to make requests against other models with both the magic commands and the chat UI. Is this intended behavior? If so, it might be good to note this in the docs." Yes, this seems to be the expected behavior - I appreciate your note here and I will happily bring it to our internal teams as feedback! Perhaps this flow will be changed in the future, and if so, we can make a new PR to update these docs.

@stevie-35 (Contributor, Author) commented

I added a note about the API key behavior in the above commit.
One question I have for you @JasonWeill - I tried using the embedding model in the Jupyternaut chat interface window and found it to be a little buggy. Are you all aware of this or does this have to do with the NVIDIA providers I added?

@JasonWeill (Collaborator) commented

Hi @stevie-35 ! Thanks for your patience — I was out of office earlier this week. With an NVIDIA API key set, I get the following error when I use an NVIDIA embedding model:

Traceback (most recent call last):
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/langchain_nvidia_ai_endpoints/_common.py", line 203, in _try_raise
    response.raise_for_status()
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://api.nvcf.nvidia.com/v2/nvcf/pexec/functions/091a03bb-7364-4087-8090-bd71e9277520

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/jweill/git/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/base.py", line 113, in on_message
    await self.process_message(message)
  File "/Users/jweill/git/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/learn.py", line 121, in process_message
    await self.learn_dir(
  File "/Users/jweill/git/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/learn.py", line 161, in learn_dir
    embedding_records = await dask_client.compute(delayed)
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/distributed/client.py", line 339, in _result
    raise exc.with_traceback(tb)
  File "/Users/jweill/git/jupyter-ai/packages/jupyter-ai/jupyter_ai/document_loaders/directory.py", line 89, in embed_chunk
    embedding = em.embed_query(content)
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/langchain_nvidia_ai_endpoints/embeddings.py", line 56, in embed_query
    return self._embed([text], model_type=self.model_type or "query")[0]
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/langchain_nvidia_ai_endpoints/embeddings.py", line 38, in _embed
    response = self.client.get_req(
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/langchain_nvidia_ai_endpoints/_common.py", line 288, in get_req
    response, session = self._post(invoke_url, payload)
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/langchain_nvidia_ai_endpoints/_common.py", line 165, in _post
    self._try_raise(response)
  File "/opt/miniconda3/envs/jupyter-ai-jl4/lib/python3.11/site-packages/langchain_nvidia_ai_endpoints/_common.py", line 218, in _try_raise
    raise Exception(f"{title}\n{body}") from e
Exception: [###] Unknown Error
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
</body>
</html>

I don't see errors when using an OpenAI model with a valid API key, so this might be isolated to the NVIDIA key. I do see an error with the Cohere embedding model, but this is likely unrelated to your change.
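
For anyone trying to isolate this, here is a minimal sketch (not part of this PR) that exercises the same embed_query call directly through langchain_nvidia_ai_endpoints; a 403 here would suggest the key or its entitlement rather than the Jupyter AI wiring. It assumes the NVIDIA_API_KEY environment variable is set, and the model name is a placeholder:

    # Standalone repro sketch. Assumptions: langchain_nvidia_ai_endpoints is
    # installed, NVIDIA_API_KEY is exported, and the model name is a placeholder
    # for whichever embedding model the key was generated for.
    import os

    from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

    assert os.environ.get("NVIDIA_API_KEY"), "Set NVIDIA_API_KEY first"

    embedder = NVIDIAEmbeddings(model="nvolveqa_40k")  # placeholder model name

    # Mirrors em.embed_query(content) in jupyter_ai/document_loaders/directory.py;
    # a 403 raised here points at the key rather than the extension.
    vector = embedder.embed_query("hello world")
    print(len(vector))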

@stevie-35 (Contributor, Author) commented

Hey @JasonWeill, I'm having trouble debugging this because of a separate issue, so I'm going to pull the embedding model out of this PR and open a new PR for it later, so we can get this one merged. (The separate issue I'm seeing is that the Jupyternaut application is sending a copy of the API key to the LLM whenever I write a message, so I can't reproduce your error. I thought I'd share that since I'm not sure what's causing it.)

I'll do this now, and then this should hopefully be ready!

@JasonWeill (Collaborator) commented

@stevie-35 Thanks for revising this!

I have been trying the different models that your change adds. With nvidia-chat:playground_mixtral_8x7b, I get an error when I send a prompt using %%ai:

Exception: [500] Internal Server Error
string_value:"[StatusCode.DEADLINE_EXCEEDED] Deadline Exceeded"

This doesn't occur with any of the other models in the nvidia-chat provider.
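
For reference, after loading the magics with %load_ext jupyter_ai_magics, the failing case is a cell of roughly this shape (the model ID is from this PR's nvidia-chat provider; the prompt itself is only illustrative):

    %%ai nvidia-chat:playground_mixtral_8x7b
    Write a short poem about GPUs.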

@stevie-35 (Contributor, Author) commented Jan 26, 2024

Hi @JasonWeill, I just ran the same prompt (with the single API key I have been using throughout development) and I'm seeing the expected output:
[screenshot of the successful response]

Perhaps the issue has been resolved. Would you like to try again?

I can report this error to our engineering teams to help prevent future problems like this, but any fix would happen outside this repo. Let me know if you are OK with still including the model.

Thanks for testing them out and continuing to work on this!

@JasonWeill (Collaborator) commented

I still see a "deadline exceeded" error on the Mixtral model, but since this seems to be a problem on the remote end, that shouldn't block this PR.

@stevie-35 I see one incomplete todo on the original description. Did you get clarity about whether to use nvidia or nvidia-chat as the model provider name?

@JasonWeill (Collaborator) commented

@stevie-35 Any update about the nvidia vs. nvidia-chat naming convention for the model providers? Aside from that, once this change is rebased, it's looking good to merge. Thanks again!

@stevie-35 (Contributor, Author) commented

@JasonWeill we are good to go with the current naming convention. Thank you for checking on that and for your attention on this.

@JasonWeill (Collaborator) commented

@stevie-35 Looks like I can't rebase this branch due to conflicts. If you can rebase your branch against main, and resolve any merge conflicts, I can merge this in. Thanks again for your good work on this!

@dlqqq (Member) commented Jan 31, 2024

@stevie-35 I can help with the rebase & merge conflict resolution if you need a hand, just LMK!

@stevie-35 (Contributor, Author) commented

All good, thanks for the offer @dlqqq! Just intermittently busy but hopefully we are good to go after the checks pass, @JasonWeill. I will try to be responsive over the next couple hours in case anything changes.

@JasonWeill (Collaborator) commented

@stevie-35 I saw you did a merge commit, but I'm still seeing an out-of-date message. Can you try running git rebase upstream/main, resolving conflicts, committing the changes, and then running git push -f to update your remote branch? Thanks!
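
For reference, the full sequence would look roughly like this, assuming a remote named upstream that points at jupyterlab/jupyter-ai:

    # refresh upstream's main branch, replay your commits on top of it,
    # then force-push the rewritten branch to your fork
    git fetch upstream
    git rebase upstream/main
    # resolve any conflicts, `git add` the fixed files, then `git rebase --continue`
    git push -f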

@stevie-35 (Contributor, Author) commented

OK @JasonWeill, I think I took care of it, hopefully...

@dlqqq previously requested changes on Feb 1, 2024
Review comment on packages/jupyter-ai-magics/pyproject.toml (outdated, resolved)
@JasonWeill dismissed dlqqq's stale review on February 1, 2024 at 17:36: "Changes made and committed"

@JasonWeill merged commit 0c1c646 into jupyterlab:main on Feb 1, 2024 (8 checks passed)

welcome bot commented Feb 1, 2024

Congrats on your first merged pull request in this project! 🎉
Thank you for contributing, we are very proud of you! ❤️

@JasonWeill (Collaborator) commented

@stevie-35 Thank you so much for your contribution!

@JasonWeill (Collaborator) commented

@meeseeksdev please backport to 1.x

meeseeksmachine pushed a commit to meeseeksmachine/jupyter-ai that referenced this pull request Feb 1, 2024
JasonWeill pushed a commit that referenced this pull request Feb 1, 2024
dbelgrod pushed a commit to dbelgrod/jupyter-ai that referenced this pull request Jun 10, 2024
* Add NVIDIA chat and embeddings providers

* Fix typo

* Some NVIDIA provider documentation

* Update docs and naming for NVIDIA

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Update docs/source/users/index.md

Co-authored-by: Jason Weill <[email protected]>

* Update index.md - add note about api key access

* Remove nvidia embeddings model

* Remove nvidia embedding provider

* Remove nvidia embeddings provider

* Mentions conda install instructions in docs

* Uninstallation with conda

* Makes NVIDIA package dependency optional, updates docs

* Revert "Uninstallation with conda"

This reverts commit 8082fa4.

* Revert "Mentions conda install instructions in docs"

This reverts commit 348ae2e.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Jason Weill <[email protected]>
Co-authored-by: Jason Weill <[email protected]>
Marchlak pushed a commit to Marchlak/jupyter-ai that referenced this pull request Oct 28, 2024
(same commit list as above)
Labels: enhancement (New feature or request)
Linked issues: Add new provider NVIDIA Foundation Model Endpoints
Participants: 3