Commit

Merge remote-tracking branch 'upstream/main' into add-nvidia-provider
stevie-35 committed Jan 25, 2024
2 parents fdc1dd3 + 7043144 commit 26d6aa0
Showing 41 changed files with 3,232 additions and 1,690 deletions.
50 changes: 48 additions & 2 deletions CHANGELOG.md
@@ -2,6 +2,54 @@

<!-- <START NEW CHANGELOG ENTRY> -->

## 2.10.0-beta.1

([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/[email protected]))

### Bugs fixed

- Fix streaming, add minimal tests [#592](https://github.com/jupyterlab/jupyter-ai/pull/592) ([@krassowski](https://github.com/krassowski))

### Contributors to this release

([GitHub contributors page for this release](https://github.com/jupyterlab/jupyter-ai/graphs/contributors?from=2024-01-19&to=2024-01-22&type=c))

[@krassowski](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Akrassowski+updated%3A2024-01-19..2024-01-22&type=Issues)

<!-- <END NEW CHANGELOG ENTRY> -->

## 2.10.0-beta.0

([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/[email protected]))

### Enhancements made

- Inline completion support [#582](https://github.com/jupyterlab/jupyter-ai/pull/582) ([@krassowski](https://github.com/krassowski))

### Contributors to this release

([GitHub contributors page for this release](https://github.com/jupyterlab/jupyter-ai/graphs/contributors?from=2024-01-18&to=2024-01-19&type=c))

[@dlqqq](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Adlqqq+updated%3A2024-01-18..2024-01-19&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Ajtpio+updated%3A2024-01-18..2024-01-19&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Akrassowski+updated%3A2024-01-18..2024-01-19&type=Issues)

## 2.9.1

([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/[email protected]))

### Enhancements made

- LangChain v0.1.0 [#572](https://github.com/jupyterlab/jupyter-ai/pull/572) ([@dlqqq](https://github.com/dlqqq))

### Bugs fixed

- Update Cohere model IDs [#584](https://github.com/jupyterlab/jupyter-ai/pull/584) ([@dlqqq](https://github.com/dlqqq))

### Contributors to this release

([GitHub contributors page for this release](https://github.com/jupyterlab/jupyter-ai/graphs/contributors?from=2024-01-08&to=2024-01-18&type=c))

[@dlqqq](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Adlqqq+updated%3A2024-01-08..2024-01-18&type=Issues) | [@JasonWeill](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3AJasonWeill+updated%3A2024-01-08..2024-01-18&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Ajtpio+updated%3A2024-01-08..2024-01-18&type=Issues) | [@welcome](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Awelcome+updated%3A2024-01-08..2024-01-18&type=Issues)

## 2.9.0

([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/[email protected]))
@@ -17,8 +65,6 @@

[@dlqqq](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Adlqqq+updated%3A2024-01-03..2024-01-08&type=Issues) | [@JasonWeill](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3AJasonWeill+updated%3A2024-01-03..2024-01-08&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Akrassowski+updated%3A2024-01-03..2024-01-08&type=Issues) | [@lumberbot-app](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Alumberbot-app+updated%3A2024-01-03..2024-01-08&type=Issues)


## 2.8.1

([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/[email protected]))
2 changes: 1 addition & 1 deletion README.md
@@ -7,7 +7,7 @@ and powerful way to explore generative AI models in notebooks and improve your p
in JupyterLab and the Jupyter Notebook. More specifically, Jupyter AI offers:

* An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground.
- This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, VSCode, etc.).
+ This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, Kaggle, VSCode, etc.).
* A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant.
* Support for a wide range of generative model providers, including AI21, Anthropic, AWS, Cohere,
Hugging Face, NVIDIA, and OpenAI.
710 changes: 487 additions & 223 deletions examples/commands.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion lerna.json
@@ -1,7 +1,7 @@
{
"$schema": "node_modules/lerna/schemas/lerna-schema.json",
"useWorkspaces": true,
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"npmClient": "yarn",
"useNx": true
}
6 changes: 4 additions & 2 deletions package.json
@@ -1,6 +1,6 @@
{
"name": "@jupyter-ai/monorepo",
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"description": "A generative AI extension for JupyterLab",
"private": true,
"keywords": [
@@ -38,10 +38,12 @@
"test": "lerna run test"
},
"devDependencies": {
"@jupyterlab/builder": "^4",
"lerna": "^6.4.1",
"nx": "^15.9.2"
},
"resolutions": {
"@jupyterlab/completer": "4.1.0-beta.0"
},
"nx": {
"includedScripts": []
}
2 changes: 1 addition & 1 deletion packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
@@ -1,6 +1,6 @@
MODEL_ID_ALIASES = {
"gpt2": "huggingface_hub:gpt2",
- "gpt3": "openai:text-davinci-003",
+ "gpt3": "openai:davinci-002",
"chatgpt": "openai-chat:gpt-3.5-turbo",
"gpt4": "openai-chat:gpt-4",
"ernie-bot": "qianfan:ERNIE-Bot",
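The alias table above maps a short name to a fully-qualified `<provider>:<model>` ID, and this commit repoints `gpt3` at OpenAI's replacement base model. A minimal sketch of how such an alias might be resolved — the `resolve_model_id` helper is hypothetical, for illustration only:

```python
# Excerpt of the alias table from aliases.py, after this change.
MODEL_ID_ALIASES = {
    "gpt2": "huggingface_hub:gpt2",
    "gpt3": "openai:davinci-002",
    "chatgpt": "openai-chat:gpt-3.5-turbo",
    "gpt4": "openai-chat:gpt-4",
}

def resolve_model_id(model_id: str) -> tuple[str, str]:
    """Hypothetical helper: expand an alias if one matches, then split the
    '<provider>:<local_model>' ID into its provider and model parts."""
    full_id = MODEL_ID_ALIASES.get(model_id, model_id)
    provider_id, _, local_model_id = full_id.partition(":")
    return provider_id, local_model_id

print(resolve_model_id("gpt3"))  # ('openai', 'davinci-002')
```

Non-alias IDs pass through unchanged, so `anthropic:claude-2` would split directly into its provider and model halves.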
@@ -80,7 +80,15 @@ class OpenAIEmbeddingsProvider(BaseEmbeddingsProvider, OpenAIEmbeddings):
class CohereEmbeddingsProvider(BaseEmbeddingsProvider, CohereEmbeddings):
id = "cohere"
name = "Cohere"
-    models = ["large", "multilingual-22-12", "small"]
+    models = [
+        "embed-english-v2.0",
+        "embed-english-light-v2.0",
+        "embed-multilingual-v2.0",
+        "embed-english-v3.0",
+        "embed-english-light-v3.0",
+        "embed-multilingual-v3.0",
+        "embed-multilingual-light-v3.0",
+    ]
model_id_key = "model"
pypi_package_deps = ["cohere"]
auth_strategy = EnvAuthStrategy(name="COHERE_API_KEY")
7 changes: 6 additions & 1 deletion packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
@@ -481,8 +481,13 @@ def run_ai_cell(self, args: CellArgs, prompt: str):
if args.model_id in self.custom_model_registry and isinstance(
self.custom_model_registry[args.model_id], LLMChain
):
# Get the output, either as raw text or as the contents of the 'text' key of a dict
invoke_output = self.custom_model_registry[args.model_id].invoke(prompt)
if isinstance(invoke_output, dict):
invoke_output = invoke_output.get("text")

return self.display_output(
-            self.custom_model_registry[args.model_id].run(prompt),
+            invoke_output,
args.format,
{"jupyter_ai": {"custom_chain_id": args.model_id}},
)
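The `magics.py` change above replaces `LLMChain.run()` with `invoke()`, which can return either a raw string or a dict whose `'text'` key holds the generation. A dependency-free sketch of the unwrapping logic (the function name is illustrative, not part of the codebase):

```python
def normalize_invoke_output(invoke_output):
    """Mirror the change in run_ai_cell(): LLMChain.invoke() may return a
    dict such as {"text": "..."} rather than a plain string, so unwrap the
    'text' key when a dict comes back and pass strings through unchanged."""
    if isinstance(invoke_output, dict):
        return invoke_output.get("text")
    return invoke_output

print(normalize_invoke_output({"text": "hello"}))  # hello
print(normalize_invoke_output("hello"))            # hello
```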
124 changes: 106 additions & 18 deletions packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
@@ -11,7 +11,13 @@
from langchain.chat_models.base import BaseChatModel
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.llms.utils import enforce_stop_tokens
-from langchain.prompts import PromptTemplate
+from langchain.prompts import (
+    ChatPromptTemplate,
+    HumanMessagePromptTemplate,
+    MessagesPlaceholder,
+    PromptTemplate,
+    SystemMessagePromptTemplate,
+)
from langchain.pydantic_v1 import BaseModel, Extra, root_validator
from langchain.schema import LLMResult
from langchain.utils import get_from_dict_or_env
@@ -43,6 +49,49 @@
from pydantic.main import ModelMetaclass


CHAT_SYSTEM_PROMPT = """
You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
You are not a language model, but rather an application built on a foundation model from {provider_name} called {local_model_id}.
You are talkative and you provide lots of specific details from the foundation model's context.
You may use Markdown to format your response.
Code blocks must be formatted in Markdown.
Math should be rendered with inline TeX markup, surrounded by $.
If you do not know the answer to a question, answer truthfully by responding that you do not know.
The following is a friendly conversation between you and a human.
""".strip()

CHAT_DEFAULT_TEMPLATE = """Current conversation:
{history}
Human: {input}
AI:"""


COMPLETION_SYSTEM_PROMPT = """
You are an application built to provide helpful code completion suggestions.
You should only produce code. Keep comments to minimum, use the
programming language comment syntax. Produce clean code.
The code is written in JupyterLab, a data analysis and code development
environment which can execute code extended with additional syntax for
interactive features, such as magics.
""".strip()

# only add the suffix bit if present to save input tokens/computation time
COMPLETION_DEFAULT_TEMPLATE = """
The document is called `{{filename}}` and written in {{language}}.
{% if suffix %}
The code after the completion request is:
```
{{suffix}}
```
{% endif %}
Complete the following code:
```
{{prefix}}"""
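As the comment above notes, the `{% if suffix %}` guard in `COMPLETION_DEFAULT_TEMPLATE` keeps the after-cursor section out of the prompt when there is nothing after the cursor, saving input tokens. A plain-Python sketch of that conditional (not the actual Jinja2 rendering; the function and its wording are illustrative):

```python
def build_completion_prompt(filename: str, language: str, prefix: str, suffix: str = "") -> str:
    """Sketch of COMPLETION_DEFAULT_TEMPLATE's '{% if suffix %}' logic:
    include the after-cursor section only when a suffix exists, saving
    input tokens and computation time in the common no-suffix case."""
    parts = [f"The document is called `{filename}` and written in {language}."]
    if suffix:
        parts.append("The code after the completion request is:\n" + suffix)
    parts.append("Complete the following code:\n" + prefix)
    return "\n".join(parts)

with_suffix = build_completion_prompt("a.py", "python", "def f(", suffix="return 1")
no_suffix = build_completion_prompt("a.py", "python", "def f(")
```

Note that the real template also leaves `{{prefix}}` unterminated inside an open code fence, inviting the model to continue the code directly.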


class EnvAuthStrategy(BaseModel):
"""Require one auth token via an environment variable."""

@@ -266,6 +315,55 @@ def get_prompt_template(self, format) -> PromptTemplate:
else:
return self.prompt_templates["text"] # Default to plain format

def get_chat_prompt_template(self) -> PromptTemplate:
"""
Produce a prompt template optimised for chat conversation.
The template should take two variables: history and input.
"""
name = self.__class__.name
if self.is_chat_provider:
return ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(
CHAT_SYSTEM_PROMPT
).format(provider_name=name, local_model_id=self.model_id),
MessagesPlaceholder(variable_name="history"),
HumanMessagePromptTemplate.from_template("{input}"),
]
)
else:
return PromptTemplate(
input_variables=["history", "input"],
template=CHAT_SYSTEM_PROMPT.format(
provider_name=name, local_model_id=self.model_id
)
+ "\n\n"
+ CHAT_DEFAULT_TEMPLATE,
)
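For providers without a chat interface, the `else` branch above bakes the provider details into the system prompt when the template is built, leaving `{history}` and `{input}` as runtime variables. A dependency-free sketch of that two-stage formatting (abbreviated prompt text, illustrative provider values):

```python
# Abbreviated versions of the module-level prompt constants.
CHAT_SYSTEM_PROMPT = (
    "You are Jupyternaut, a conversational assistant living in JupyterLab. "
    "You are an application built on a foundation model from "
    "{provider_name} called {local_model_id}."
)
CHAT_DEFAULT_TEMPLATE = "Current conversation:\n{history}\nHuman: {input}\nAI:"

# Stage 1: provider details are filled in once, at template-construction time.
system = CHAT_SYSTEM_PROMPT.format(provider_name="OpenAI", local_model_id="davinci-002")
template = system + "\n\n" + CHAT_DEFAULT_TEMPLATE

# Stage 2: history and input are filled in on every chat turn.
prompt = template.format(history="Human: hi\nAI: Hello!", input="What is 2+2?")
```

The chat-provider branch achieves the same split structurally, with `MessagesPlaceholder` standing in for `{history}` as a list of messages rather than flattened text.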

def get_completion_prompt_template(self) -> PromptTemplate:
"""
Produce a prompt template optimised for inline code or text completion.
The template should take variables: prefix, suffix, language, filename.
"""
if self.is_chat_provider:
return ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(COMPLETION_SYSTEM_PROMPT),
HumanMessagePromptTemplate.from_template(
COMPLETION_DEFAULT_TEMPLATE, template_format="jinja2"
),
]
)
else:
return PromptTemplate(
input_variables=["prefix", "suffix", "language", "filename"],
template=COMPLETION_SYSTEM_PROMPT
+ "\n\n"
+ COMPLETION_DEFAULT_TEMPLATE,
template_format="jinja2",
)

@property
def is_chat_provider(self):
return isinstance(self, BaseChatModel)
@@ -364,7 +462,8 @@ def allows_concurrency(self):
class CohereProvider(BaseProvider, Cohere):
id = "cohere"
name = "Cohere"
-    models = ["medium", "xlarge"]
+    # Source: https://docs.cohere.com/reference/generate
+    models = ["command", "command-nightly", "command-light", "command-light-nightly"]
model_id_key = "model"
pypi_package_deps = ["cohere"]
auth_strategy = EnvAuthStrategy(name="COHERE_API_KEY")
@@ -536,17 +635,7 @@ async def _acall(self, *args, **kwargs) -> Coroutine[Any, Any, str]:
class OpenAIProvider(BaseProvider, OpenAI):
id = "openai"
name = "OpenAI"
-    models = [
-        "text-davinci-003",
-        "text-davinci-002",
-        "text-curie-001",
-        "text-babbage-001",
-        "text-ada-001",
-        "davinci",
-        "curie",
-        "babbage",
-        "ada",
-    ]
+    models = ["babbage-002", "davinci-002", "gpt-3.5-turbo-instruct"]
model_id_key = "model_name"
pypi_package_deps = ["openai"]
auth_strategy = EnvAuthStrategy(name="OPENAI_API_KEY")
@@ -569,15 +658,14 @@ class ChatOpenAIProvider(BaseProvider, ChatOpenAI):
name = "OpenAI"
models = [
"gpt-3.5-turbo",
+ "gpt-3.5-turbo-0301", # Deprecated as of 2024-06-13
+ "gpt-3.5-turbo-0613", # Deprecated as of 2024-06-13
"gpt-3.5-turbo-1106",
"gpt-3.5-turbo-16k",
- "gpt-3.5-turbo-0301",
- "gpt-3.5-turbo-0613",
- "gpt-3.5-turbo-16k-0613",
+ "gpt-3.5-turbo-16k-0613", # Deprecated as of 2024-06-13
"gpt-4",
"gpt-4-0314",
"gpt-4-0613",
"gpt-4-32k",
"gpt-4-32k-0314",
"gpt-4-32k-0613",
"gpt-4-1106-preview",
]
2 changes: 1 addition & 1 deletion packages/jupyter-ai-magics/package.json
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
{
"name": "@jupyter-ai/magics",
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"description": "Jupyter AI magics Python package. Not published on NPM.",
"private": true,
"homepage": "https://github.com/jupyterlab/jupyter-ai",
2 changes: 1 addition & 1 deletion packages/jupyter-ai-magics/pyproject.toml
@@ -38,7 +38,7 @@ test = ["coverage", "pytest", "pytest-asyncio", "pytest-cov"]
all = [
"ai21",
"anthropic~=0.3.0",
- "cohere",
+ "cohere>4.40,<5",
"gpt4all",
"huggingface_hub",
"ipywidgets",
@@ -29,7 +29,7 @@ async def execute(
# prompt = task.prompt_template.format(**prompt_variables)
# openai.api_key = self.api_key
# response = openai.Completion.create(
- # model="text-davinci-003",
+ # model="davinci-002",
# prompt=prompt,
# ...
# )