diff --git a/CHANGELOG.md b/CHANGELOG.md
index 60c9f7602..d850cbbe9 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,54 @@
+## 2.10.0-beta.1
+
+([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/core@2.10.0-beta.0...b1186e9eefed5633461e19b61a5ccf4129b41a78))
+
+### Bugs fixed
+
+- Fix streaming, add minimal tests [#592](https://github.com/jupyterlab/jupyter-ai/pull/592) ([@krassowski](https://github.com/krassowski))
+
+### Contributors to this release
+
+([GitHub contributors page for this release](https://github.com/jupyterlab/jupyter-ai/graphs/contributors?from=2024-01-19&to=2024-01-22&type=c))
+
+[@krassowski](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Akrassowski+updated%3A2024-01-19..2024-01-22&type=Issues)
+
+## 2.10.0-beta.0
+
+([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/core@2.9.1...d1a9f40d5fb4c937ae53d22172c8836b171ac6e6))
+
+### Enhancements made
+
+- Inline completion support [#582](https://github.com/jupyterlab/jupyter-ai/pull/582) ([@krassowski](https://github.com/krassowski))
+
+### Contributors to this release
+
+([GitHub contributors page for this release](https://github.com/jupyterlab/jupyter-ai/graphs/contributors?from=2024-01-18&to=2024-01-19&type=c))
+
+[@dlqqq](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Adlqqq+updated%3A2024-01-18..2024-01-19&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Ajtpio+updated%3A2024-01-18..2024-01-19&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Akrassowski+updated%3A2024-01-18..2024-01-19&type=Issues)
+
+## 2.9.1
+
+([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/core@2.9.0...effc60920bca2a0d1d37beaff03d2bf18bb7d392))
+
+### Enhancements made
+
+- LangChain v0.1.0 [#572](https://github.com/jupyterlab/jupyter-ai/pull/572) ([@dlqqq](https://github.com/dlqqq))
+
+### Bugs fixed
+
+- Update Cohere model IDs [#584](https://github.com/jupyterlab/jupyter-ai/pull/584) ([@dlqqq](https://github.com/dlqqq))
+
+### Contributors to this release
+
+([GitHub contributors page for this release](https://github.com/jupyterlab/jupyter-ai/graphs/contributors?from=2024-01-08&to=2024-01-18&type=c))
+
+[@dlqqq](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Adlqqq+updated%3A2024-01-08..2024-01-18&type=Issues) | [@JasonWeill](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3AJasonWeill+updated%3A2024-01-08..2024-01-18&type=Issues) | [@jtpio](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Ajtpio+updated%3A2024-01-08..2024-01-18&type=Issues) | [@welcome](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Awelcome+updated%3A2024-01-08..2024-01-18&type=Issues)
+
## 2.9.0
([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/core@2.8.1...a1c9da8ad37daada3033228a109118df4c6a07cf))
@@ -17,8 +65,6 @@
[@dlqqq](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Adlqqq+updated%3A2024-01-03..2024-01-08&type=Issues) | [@JasonWeill](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3AJasonWeill+updated%3A2024-01-03..2024-01-08&type=Issues) | [@krassowski](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Akrassowski+updated%3A2024-01-03..2024-01-08&type=Issues) | [@lumberbot-app](https://github.com/search?q=repo%3Ajupyterlab%2Fjupyter-ai+involves%3Alumberbot-app+updated%3A2024-01-03..2024-01-08&type=Issues)
-
-
## 2.8.1
([Full Changelog](https://github.com/jupyterlab/jupyter-ai/compare/@jupyter-ai/core@2.8.0...5cd4dbb8075a7dabf9df5e62df8b4d6a6c099918))
diff --git a/README.md b/README.md
index fa3b68cd4..92461b67c 100644
--- a/README.md
+++ b/README.md
@@ -7,7 +7,7 @@ and powerful way to explore generative AI models in notebooks and improve your p
in JupyterLab and the Jupyter Notebook. More specifically, Jupyter AI offers:
* An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground.
- This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, VSCode, etc.).
+ This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, Kaggle, VSCode, etc.).
* A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant.
* Support for a wide range of generative model providers, including AI21, Anthropic, AWS, Cohere,
Hugging Face, NVIDIA, and OpenAI.
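A minimal sketch of the `%%ai` magic described in the README bullet above, for context on the notebook changes that follow. It assumes `jupyter_ai_magics` is installed and that `OPENAI_API_KEY` is already set in the environment; `chatgpt` is the built-in alias for `openai-chat:gpt-3.5-turbo` shown in the alias tables below.

```python
# Cell 1: load the magics extension shipped with jupyter-ai.
%load_ext jupyter_ai_magics

# Cell 2 (a separate cell, since %%ai must be the first line of its cell):
%%ai chatgpt
Summarize what a Jupyter kernel does in two sentences.
```

The same `%%ai` cell accepts any provider:model ID from the tables below (for example `%%ai openai-chat:gpt-4`) or a registered alias such as `mychat`.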
diff --git a/examples/commands.ipynb b/examples/commands.ipynb
index 6990df577..0b0c6816c 100644
--- a/examples/commands.ipynb
+++ b/examples/commands.ipynb
@@ -137,14 +137,17 @@
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
"| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-tg1-large`, `bedrock:anthropic.claude-v1`, `bedrock:anthropic.claude-instant-v1`, `bedrock:anthropic.claude-v2`, `bedrock:ai21.j2-jumbo-instruct`, `bedrock:ai21.j2-grande-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
"| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy` |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
"| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-4-32k-0613` |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
"| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
@@ -152,13 +155,16 @@
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
- "| `gpt4` | `openai-chat:gpt-4` |\n"
+ "| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -170,65 +176,97 @@
"* ai21:j2-jumbo-instruct\n",
"\n",
"bedrock\n",
- "* bedrock:amazon.titan-tg1-large\n",
- "* bedrock:anthropic.claude-v1\n",
- "* bedrock:anthropic.claude-instant-v1\n",
- "* bedrock:anthropic.claude-v2\n",
- "* bedrock:ai21.j2-jumbo-instruct\n",
- "* bedrock:ai21.j2-grande-instruct\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
"\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
"* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
"\n",
"azure-chat-openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
"\n",
"gpt4all\n",
"* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
"* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
"* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
"* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
- "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
"* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-16k-0613\n",
"* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
"* openai-chat:gpt-4-0613\n",
"* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
"* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
"* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
@@ -236,9 +274,12 @@
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
- "gpt4 - openai-chat:gpt-4\n"
+ "gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n"
]
},
"execution_count": 4,
@@ -261,20 +302,14 @@
"text/markdown": [
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n"
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n"
],
"text/plain": [
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n"
]
},
@@ -335,14 +370,17 @@
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
"| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-tg1-large`, `bedrock:anthropic.claude-v1`, `bedrock:anthropic.claude-instant-v1`, `bedrock:anthropic.claude-v2`, `bedrock:ai21.j2-jumbo-instruct`, `bedrock:ai21.j2-grande-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
"| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy` |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
"| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-4-32k-0613` |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
"| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
@@ -350,14 +388,17 @@
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
"| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n",
"| `mychat` | `openai-chat:gpt-4` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -369,65 +410,97 @@
"* ai21:j2-jumbo-instruct\n",
"\n",
"bedrock\n",
- "* bedrock:amazon.titan-tg1-large\n",
- "* bedrock:anthropic.claude-v1\n",
- "* bedrock:anthropic.claude-instant-v1\n",
- "* bedrock:anthropic.claude-v2\n",
- "* bedrock:ai21.j2-jumbo-instruct\n",
- "* bedrock:ai21.j2-grande-instruct\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
"\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
"* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
"\n",
"azure-chat-openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
"\n",
"gpt4all\n",
"* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
"* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
"* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
"* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
- "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
"* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
"* openai-chat:gpt-3.5-turbo-16k-0613\n",
"* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
"* openai-chat:gpt-4-0613\n",
"* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
"* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
"* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
@@ -435,9 +508,12 @@
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
"gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n",
"mychat - openai-chat:gpt-4\n"
]
},
@@ -461,9 +537,7 @@
{
"data": {
"text/markdown": [
- "\n",
- "\n",
- "This model is unknown."
+ "Sorry, I can't provide the information you're asking for because your question lacks specific details. Could you please provide more context or information?"
],
"text/plain": [
""
@@ -473,8 +547,8 @@
"metadata": {
"text/markdown": {
"jupyter_ai": {
- "model_id": "text-davinci-003",
- "provider_id": "openai"
+ "model_id": "gpt-4",
+ "provider_id": "openai-chat"
}
}
},
@@ -482,7 +556,7 @@
}
],
"source": [
- "%%ai gpt3\n",
+ "%%ai gpt4\n",
"What model is this?"
]
},
@@ -507,7 +581,7 @@
}
],
"source": [
- "%ai update mychat openai:text-davinci-003"
+ "%ai update mychat openai:babbage-002"
]
},
{
@@ -521,9 +595,15 @@
{
"data": {
"text/markdown": [
+ " I need someone to enter data from a pdf into excel.\n",
+ "\n",
+ "We are looking for an experienced freelancer with very high attention to detail to assist us with a number of tasks. Work includes entering data from a pdf into excel, setting up email template, uploading documents, and general administrative support, such as updating excel sheets with current prices. This is a long-term position. Please provide samples of your work. Please note that we will only accept ...\n",
"\n",
+ "I have a PDF form which I want to extract the text from the forms fields and place it in a word file. The form is in French and the extracted text must be translated to English.\n",
"\n",
- "This model is not specified."
+ "I have a PDF file which I want to extract the text from the forms fields and place it in a word file. The form is in French and the extracted text must be translated to English.\n",
+ "\n",
+ "I have a PDF form which I want to extract the text from the forms fields and place it in a word file. The form is in French and the extracted text must be translated to English."
],
"text/plain": [
""
@@ -533,7 +613,7 @@
"metadata": {
"text/markdown": {
"jupyter_ai": {
- "model_id": "text-davinci-003",
+ "model_id": "babbage-002",
"provider_id": "openai"
}
}
@@ -543,7 +623,7 @@
],
"source": [
"%%ai mychat\n",
- "What model is this?"
+ "Tell me about mathematical symbols"
]
},
{
@@ -560,26 +640,35 @@
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
"| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301` |\n",
- "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
+ "| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
+ "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
+ "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
"\n",
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
"| `gpt4` | `openai-chat:gpt-4` |\n",
- "| `mychat` | `openai:text-davinci-003` |\n"
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n",
+ "| `mychat` | `openai:babbage-002` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -590,54 +679,112 @@
"* ai21:j2-grande-instruct\n",
"* ai21:j2-jumbo-instruct\n",
"\n",
+ "bedrock\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
+ "\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
+ "* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
+ "\n",
+ "azure-chat-openai\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
+ "\n",
+ "gpt4all\n",
+ "* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
+ "* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
+ "* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
- "* See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
- "* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
+ "* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
+ "* openai-chat:gpt-3.5-turbo-16k-0613\n",
+ "* openai-chat:gpt-4\n",
+ "* openai-chat:gpt-4-0613\n",
+ "* openai-chat:gpt-4-32k\n",
+ "* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
- "* Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
+ "* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
"\n",
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
"gpt4 - openai-chat:gpt-4\n",
- "mychat - openai:text-davinci-003\n"
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n",
+ "mychat - openai:babbage-002\n"
]
},
"execution_count": 11,
@@ -689,25 +836,34 @@
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
"| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301` |\n",
- "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
+ "| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
+ "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
+ "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
"\n",
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
- "| `gpt4` | `openai-chat:gpt-4` |\n"
+ "| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -718,53 +874,111 @@
"* ai21:j2-grande-instruct\n",
"* ai21:j2-jumbo-instruct\n",
"\n",
+ "bedrock\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
+ "\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
+ "* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
+ "\n",
+ "azure-chat-openai\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
+ "\n",
+ "gpt4all\n",
+ "* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
+ "* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
+ "* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
- "* See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
- "* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
+ "* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
+ "* openai-chat:gpt-3.5-turbo-16k-0613\n",
+ "* openai-chat:gpt-4\n",
+ "* openai-chat:gpt-4-0613\n",
+ "* openai-chat:gpt-4-32k\n",
+ "* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
- "* Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
+ "* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
"\n",
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
- "gpt4 - openai-chat:gpt-4\n"
+ "gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n"
]
},
"execution_count": 13,
@@ -797,12 +1011,12 @@
"source": [
"from langchain.chains import LLMChain\n",
"from langchain.prompts import PromptTemplate\n",
- "from langchain.llms import OpenAI\n",
+ "from langchain_community.llms import Cohere\n",
"\n",
- "llm = OpenAI(temperature=0.9)\n",
+ "llm = Cohere(model=\"command\", max_tokens=256, temperature=0.75)\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"product\"],\n",
- " template=\"What is a good name for a company that makes {product}?\",\n",
+ " template=\"What is a good name for a company that makes {product}? Provide only one name. Do not provide any other text than the name. Do not provide other info\",\n",
")\n",
"chain = LLMChain(llm=llm, prompt=prompt)"
]
@@ -810,19 +1024,6 @@
{
"cell_type": "code",
"execution_count": 15,
- "id": "29d5239f-7601-405e-b059-4e881ebf7ab1",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "from langchain.chains import LLMChain\n",
- "chain = LLMChain(llm=llm, prompt=prompt)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
"id": "43e7a77c-93af-4ef7-a104-f932c9f54183",
"metadata": {
"tags": []
@@ -832,20 +1033,18 @@
"name": "stdout",
"output_type": "stream",
"text": [
- "\n",
- "\n",
- "Bright Toes Socks.\n"
+ "{'product': 'colorful socks', 'text': ' Chroma Socks '}\n"
]
}
],
"source": [
"# Run the chain only specifying the input variable.\n",
- "print(chain.run(\"colorful socks\"))"
+ "print(chain.invoke(\"colorful socks\"))"
]
},
{
"cell_type": "code",
- "execution_count": 17,
+ "execution_count": 16,
"id": "9badc567-9720-4e33-ab4a-54fda5129f36",
"metadata": {
"tags": []
@@ -860,7 +1059,7 @@
"Registered new alias `company`"
]
},
- "execution_count": 17,
+ "execution_count": 16,
"metadata": {},
"output_type": "execute_result"
}
@@ -871,7 +1070,7 @@
},
{
"cell_type": "code",
- "execution_count": 18,
+ "execution_count": 17,
"id": "92b75d71-8844-4872-b424-b0023706abb1",
"metadata": {
"tags": []
@@ -883,26 +1082,35 @@
"| Provider | Environment variable | Set? | Models |\n",
"|----------|----------------------|------|--------|\n",
"| `ai21` | `AI21_API_KEY` | ✅ | `ai21:j1-large`, `ai21:j1-grande`, `ai21:j1-jumbo`, `ai21:j1-grande-instruct`, `ai21:j2-large`, `ai21:j2-grande`, `ai21:j2-jumbo`, `ai21:j2-grande-instruct`, `ai21:j2-jumbo-instruct` |\n",
- "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0` |\n",
- "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:medium`, `cohere:xlarge` |\n",
- "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
- "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:text-davinci-003`, `openai:text-davinci-002`, `openai:text-curie-001`, `openai:text-babbage-001`, `openai:text-ada-001`, `openai:davinci`, `openai:curie`, `openai:babbage`, `openai:ada` |\n",
- "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-4`, `openai-chat:gpt-4-0314`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0314`, `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301` |\n",
- "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
+ "| `bedrock` | Not applicable. | N/A | `bedrock:amazon.titan-text-express-v1`, `bedrock:ai21.j2-ultra-v1`, `bedrock:ai21.j2-mid-v1`, `bedrock:cohere.command-light-text-v14`, `bedrock:cohere.command-text-v14`, `bedrock:meta.llama2-13b-chat-v1`, `bedrock:meta.llama2-70b-chat-v1` |\n",
+ "| `bedrock-chat` | Not applicable. | N/A | `bedrock-chat:anthropic.claude-v1`, `bedrock-chat:anthropic.claude-v2`, `bedrock-chat:anthropic.claude-v2:1`, `bedrock-chat:anthropic.claude-instant-v1` |\n",
+ "| `anthropic` | `ANTHROPIC_API_KEY` | ✅ | `anthropic:claude-v1`, `anthropic:claude-v1.0`, `anthropic:claude-v1.2`, `anthropic:claude-2`, `anthropic:claude-2.0`, `anthropic:claude-instant-v1`, `anthropic:claude-instant-v1.0`, `anthropic:claude-instant-v1.2` |\n",
+ "| `anthropic-chat` | `ANTHROPIC_API_KEY` | ✅ | `anthropic-chat:claude-v1`, `anthropic-chat:claude-v1.0`, `anthropic-chat:claude-v1.2`, `anthropic-chat:claude-2`, `anthropic-chat:claude-2.0`, `anthropic-chat:claude-instant-v1`, `anthropic-chat:claude-instant-v1.0`, `anthropic-chat:claude-instant-v1.2` |\n",
+ "| `azure-chat-openai` | `OPENAI_API_KEY` | ✅ | This provider does not define a list of models. |\n",
+ "| `cohere` | `COHERE_API_KEY` | ✅ | `cohere:command`, `cohere:command-nightly`, `cohere:command-light`, `cohere:command-light-nightly` |\n",
+ "| `gpt4all` | Not applicable. | N/A | `gpt4all:ggml-gpt4all-j-v1.2-jazzy`, `gpt4all:ggml-gpt4all-j-v1.3-groovy`, `gpt4all:ggml-gpt4all-l13b-snoozy`, `gpt4all:mistral-7b-openorca.Q4_0`, `gpt4all:mistral-7b-instruct-v0.1.Q4_0`, `gpt4all:gpt4all-falcon-q4_0`, `gpt4all:wizardlm-13b-v1.2.Q4_0`, `gpt4all:nous-hermes-llama2-13b.Q4_0`, `gpt4all:gpt4all-13b-snoozy-q4_0`, `gpt4all:mpt-7b-chat-merges-q4_0`, `gpt4all:orca-mini-3b-gguf2-q4_0`, `gpt4all:starcoder-q4_0`, `gpt4all:rift-coder-v0-7b-q4_0`, `gpt4all:em_german_mistral_v01.Q4_0` |\n",
+ "| `huggingface_hub` | `HUGGINGFACEHUB_API_TOKEN` | ✅ | See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`. |\n",
+ "| `openai` | `OPENAI_API_KEY` | ✅ | `openai:babbage-002`, `openai:davinci-002`, `openai:gpt-3.5-turbo-instruct` |\n",
+ "| `openai-chat` | `OPENAI_API_KEY` | ✅ | `openai-chat:gpt-3.5-turbo`, `openai-chat:gpt-3.5-turbo-0301`, `openai-chat:gpt-3.5-turbo-0613`, `openai-chat:gpt-3.5-turbo-1106`, `openai-chat:gpt-3.5-turbo-16k`, `openai-chat:gpt-3.5-turbo-16k-0613`, `openai-chat:gpt-4`, `openai-chat:gpt-4-0613`, `openai-chat:gpt-4-32k`, `openai-chat:gpt-4-32k-0613`, `openai-chat:gpt-4-1106-preview` |\n",
+ "| `qianfan` | `QIANFAN_AK`, `QIANFAN_SK` | ❌ | `qianfan:ERNIE-Bot`, `qianfan:ERNIE-Bot-4` |\n",
+ "| `sagemaker-endpoint` | Not applicable. | N/A | Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints). |\n",
"\n",
"Aliases and custom commands:\n",
"\n",
"| Name | Target |\n",
"|------|--------|\n",
"| `gpt2` | `huggingface_hub:gpt2` |\n",
- "| `gpt3` | `openai:text-davinci-003` |\n",
+ "| `gpt3` | `openai:davinci-002` |\n",
"| `chatgpt` | `openai-chat:gpt-3.5-turbo` |\n",
"| `gpt4` | `openai-chat:gpt-4` |\n",
+ "| `ernie-bot` | `qianfan:ERNIE-Bot` |\n",
+ "| `ernie-bot-4` | `qianfan:ERNIE-Bot-4` |\n",
+ "| `titan` | `bedrock:amazon.titan-tg1-large` |\n",
"| `company` | *custom chain* |\n"
],
"text/plain": [
"ai21\n",
- "Requires environment variable AI21_API_KEY (set)\n",
+ "Requires environment variable: AI21_API_KEY (set)\n",
"* ai21:j1-large\n",
"* ai21:j1-grande\n",
"* ai21:j1-jumbo\n",
@@ -913,57 +1121,115 @@
"* ai21:j2-grande-instruct\n",
"* ai21:j2-jumbo-instruct\n",
"\n",
+ "bedrock\n",
+ "* bedrock:amazon.titan-text-express-v1\n",
+ "* bedrock:ai21.j2-ultra-v1\n",
+ "* bedrock:ai21.j2-mid-v1\n",
+ "* bedrock:cohere.command-light-text-v14\n",
+ "* bedrock:cohere.command-text-v14\n",
+ "* bedrock:meta.llama2-13b-chat-v1\n",
+ "* bedrock:meta.llama2-70b-chat-v1\n",
+ "\n",
+ "bedrock-chat\n",
+ "* bedrock-chat:anthropic.claude-v1\n",
+ "* bedrock-chat:anthropic.claude-v2\n",
+ "* bedrock-chat:anthropic.claude-v2:1\n",
+ "* bedrock-chat:anthropic.claude-instant-v1\n",
+ "\n",
"anthropic\n",
- "Requires environment variable ANTHROPIC_API_KEY (set)\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
"* anthropic:claude-v1\n",
"* anthropic:claude-v1.0\n",
"* anthropic:claude-v1.2\n",
+ "* anthropic:claude-2\n",
+ "* anthropic:claude-2.0\n",
"* anthropic:claude-instant-v1\n",
"* anthropic:claude-instant-v1.0\n",
+ "* anthropic:claude-instant-v1.2\n",
+ "\n",
+ "anthropic-chat\n",
+ "Requires environment variable: ANTHROPIC_API_KEY (set)\n",
+ "* anthropic-chat:claude-v1\n",
+ "* anthropic-chat:claude-v1.0\n",
+ "* anthropic-chat:claude-v1.2\n",
+ "* anthropic-chat:claude-2\n",
+ "* anthropic-chat:claude-2.0\n",
+ "* anthropic-chat:claude-instant-v1\n",
+ "* anthropic-chat:claude-instant-v1.0\n",
+ "* anthropic-chat:claude-instant-v1.2\n",
+ "\n",
+ "azure-chat-openai\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* This provider does not define a list of models.\n",
"\n",
"cohere\n",
- "Requires environment variable COHERE_API_KEY (set)\n",
- "* cohere:medium\n",
- "* cohere:xlarge\n",
+ "Requires environment variable: COHERE_API_KEY (set)\n",
+ "* cohere:command\n",
+ "* cohere:command-nightly\n",
+ "* cohere:command-light\n",
+ "* cohere:command-light-nightly\n",
+ "\n",
+ "gpt4all\n",
+ "* gpt4all:ggml-gpt4all-j-v1.2-jazzy\n",
+ "* gpt4all:ggml-gpt4all-j-v1.3-groovy\n",
+ "* gpt4all:ggml-gpt4all-l13b-snoozy\n",
+ "* gpt4all:mistral-7b-openorca.Q4_0\n",
+ "* gpt4all:mistral-7b-instruct-v0.1.Q4_0\n",
+ "* gpt4all:gpt4all-falcon-q4_0\n",
+ "* gpt4all:wizardlm-13b-v1.2.Q4_0\n",
+ "* gpt4all:nous-hermes-llama2-13b.Q4_0\n",
+ "* gpt4all:gpt4all-13b-snoozy-q4_0\n",
+ "* gpt4all:mpt-7b-chat-merges-q4_0\n",
+ "* gpt4all:orca-mini-3b-gguf2-q4_0\n",
+ "* gpt4all:starcoder-q4_0\n",
+ "* gpt4all:rift-coder-v0-7b-q4_0\n",
+ "* gpt4all:em_german_mistral_v01.Q4_0\n",
"\n",
"huggingface_hub\n",
- "Requires environment variable HUGGINGFACEHUB_API_TOKEN (set)\n",
- "* See https://huggingface.co/models for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
+ "Requires environment variable: HUGGINGFACEHUB_API_TOKEN (set)\n",
+ "* See [https://huggingface.co/models](https://huggingface.co/models) for a list of models. Pass a model's repository ID as the model ID; for example, `huggingface_hub:ExampleOwner/example-model`.\n",
"\n",
"openai\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai:text-davinci-003\n",
- "* openai:text-davinci-002\n",
- "* openai:text-curie-001\n",
- "* openai:text-babbage-001\n",
- "* openai:text-ada-001\n",
- "* openai:davinci\n",
- "* openai:curie\n",
- "* openai:babbage\n",
- "* openai:ada\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
+ "* openai:babbage-002\n",
+ "* openai:davinci-002\n",
+ "* openai:gpt-3.5-turbo-instruct\n",
"\n",
"openai-chat\n",
- "Requires environment variable OPENAI_API_KEY (set)\n",
- "* openai-chat:gpt-4\n",
- "* openai-chat:gpt-4-0314\n",
- "* openai-chat:gpt-4-32k\n",
- "* openai-chat:gpt-4-32k-0314\n",
+ "Requires environment variable: OPENAI_API_KEY (set)\n",
"* openai-chat:gpt-3.5-turbo\n",
"* openai-chat:gpt-3.5-turbo-0301\n",
+ "* openai-chat:gpt-3.5-turbo-0613\n",
+ "* openai-chat:gpt-3.5-turbo-1106\n",
+ "* openai-chat:gpt-3.5-turbo-16k\n",
+ "* openai-chat:gpt-3.5-turbo-16k-0613\n",
+ "* openai-chat:gpt-4\n",
+ "* openai-chat:gpt-4-0613\n",
+ "* openai-chat:gpt-4-32k\n",
+ "* openai-chat:gpt-4-32k-0613\n",
+ "* openai-chat:gpt-4-1106-preview\n",
+ "\n",
+ "qianfan\n",
+ "Requires environment variables: QIANFAN_AK (not set), QIANFAN_SK (not set)\n",
+ "* qianfan:ERNIE-Bot\n",
+ "* qianfan:ERNIE-Bot-4\n",
"\n",
"sagemaker-endpoint\n",
- "* Specify an endpoint name as the model ID. In addition, you must include the `--region_name`, `--request_schema`, and the `--response_path` arguments. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
+ "* Specify an endpoint name as the model ID. In addition, you must specify a region name, request schema, and response path. For more information, see the documentation about [SageMaker endpoints deployment](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-deployment.html) and about [using magic commands with SageMaker endpoints](https://jupyter-ai.readthedocs.io/en/latest/users/index.html#using-magic-commands-with-sagemaker-endpoints).\n",
"\n",
"\n",
"Aliases and custom commands:\n",
"gpt2 - huggingface_hub:gpt2\n",
- "gpt3 - openai:text-davinci-003\n",
+ "gpt3 - openai:davinci-002\n",
"chatgpt - openai-chat:gpt-3.5-turbo\n",
"gpt4 - openai-chat:gpt-4\n",
+ "ernie-bot - qianfan:ERNIE-Bot\n",
+ "ernie-bot-4 - qianfan:ERNIE-Bot-4\n",
+ "titan - bedrock:amazon.titan-tg1-large\n",
"company - custom chain\n"
]
},
- "execution_count": 18,
+ "execution_count": 17,
"metadata": {},
"output_type": "execute_result"
}
@@ -974,22 +1240,22 @@
},
{
"cell_type": "code",
- "execution_count": 19,
+ "execution_count": 18,
"id": "cfef0fee-a7c6-49e4-8d90-9aa12f7b91d1",
"metadata": {},
"outputs": [
{
"data": {
"text/markdown": [
+ " Chroma Sox \n",
"\n",
- "\n",
- "**Brightsocks**"
+ "Let me know if you would like me to provide any other suggestions! "
],
"text/plain": [
""
]
},
- "execution_count": 19,
+ "execution_count": 18,
"metadata": {
"text/markdown": {
"jupyter_ai": {
@@ -1007,19 +1273,17 @@
},
{
"cell_type": "code",
- "execution_count": 20,
+ "execution_count": 19,
"id": "06c698e7-e2cf-41b5-88de-2be4d3b60eba",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
- "\n",
- "\n",
- "FunkySox."
+ " Punch Up Colorful Fashions"
]
},
- "execution_count": 20,
+ "execution_count": 19,
"metadata": {
"jupyter_ai": {
"custom_chain_id": "company"
diff --git a/lerna.json b/lerna.json
index 5de42974f..512470ea1 100644
--- a/lerna.json
+++ b/lerna.json
@@ -1,7 +1,7 @@
{
"$schema": "node_modules/lerna/schemas/lerna-schema.json",
"useWorkspaces": true,
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"npmClient": "yarn",
"useNx": true
}
diff --git a/package.json b/package.json
index d173391be..6c7a057ea 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
{
"name": "@jupyter-ai/monorepo",
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"description": "A generative AI extension for JupyterLab",
"private": true,
"keywords": [
@@ -38,10 +38,12 @@
"test": "lerna run test"
},
"devDependencies": {
- "@jupyterlab/builder": "^4",
"lerna": "^6.4.1",
"nx": "^15.9.2"
},
+ "resolutions": {
+ "@jupyterlab/completer": "4.1.0-beta.0"
+ },
"nx": {
"includedScripts": []
}
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py b/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
index 96cac4efe..f34826428 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/aliases.py
@@ -1,6 +1,6 @@
MODEL_ID_ALIASES = {
"gpt2": "huggingface_hub:gpt2",
- "gpt3": "openai:text-davinci-003",
+ "gpt3": "openai:davinci-002",
"chatgpt": "openai-chat:gpt-3.5-turbo",
"gpt4": "openai-chat:gpt-4",
"ernie-bot": "qianfan:ERNIE-Bot",
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py b/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
index ca9fed4b4..9ef8720ba 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/embedding_providers.py
@@ -80,7 +80,15 @@ class OpenAIEmbeddingsProvider(BaseEmbeddingsProvider, OpenAIEmbeddings):
class CohereEmbeddingsProvider(BaseEmbeddingsProvider, CohereEmbeddings):
id = "cohere"
name = "Cohere"
- models = ["large", "multilingual-22-12", "small"]
+ models = [
+ "embed-english-v2.0",
+ "embed-english-light-v2.0",
+ "embed-multilingual-v2.0",
+ "embed-english-v3.0",
+ "embed-english-light-v3.0",
+ "embed-multilingual-v3.0",
+ "embed-multilingual-light-v3.0",
+ ]
model_id_key = "model"
pypi_package_deps = ["cohere"]
auth_strategy = EnvAuthStrategy(name="COHERE_API_KEY")
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py b/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
index f6239fbdf..51b1f3bb9 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/magics.py
@@ -481,8 +481,13 @@ def run_ai_cell(self, args: CellArgs, prompt: str):
if args.model_id in self.custom_model_registry and isinstance(
self.custom_model_registry[args.model_id], LLMChain
):
+ # Get the output, either as raw text or as the contents of the 'text' key of a dict
+ invoke_output = self.custom_model_registry[args.model_id].invoke(prompt)
+ if isinstance(invoke_output, dict):
+ invoke_output = invoke_output.get("text")
+
return self.display_output(
- self.custom_model_registry[args.model_id].run(prompt),
+ invoke_output,
args.format,
{"jupyter_ai": {"custom_chain_id": args.model_id}},
)
diff --git a/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py b/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
index 062bb1863..9053a2752 100644
--- a/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
+++ b/packages/jupyter-ai-magics/jupyter_ai_magics/providers.py
@@ -11,7 +11,13 @@
from langchain.chat_models.base import BaseChatModel
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.llms.utils import enforce_stop_tokens
-from langchain.prompts import PromptTemplate
+from langchain.prompts import (
+ ChatPromptTemplate,
+ HumanMessagePromptTemplate,
+ MessagesPlaceholder,
+ PromptTemplate,
+ SystemMessagePromptTemplate,
+)
from langchain.pydantic_v1 import BaseModel, Extra, root_validator
from langchain.schema import LLMResult
from langchain.utils import get_from_dict_or_env
@@ -43,6 +49,49 @@
from pydantic.main import ModelMetaclass
+CHAT_SYSTEM_PROMPT = """
+You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
+You are not a language model, but rather an application built on a foundation model from {provider_name} called {local_model_id}.
+You are talkative and you provide lots of specific details from the foundation model's context.
+You may use Markdown to format your response.
+Code blocks must be formatted in Markdown.
+Math should be rendered with inline TeX markup, surrounded by $.
+If you do not know the answer to a question, answer truthfully by responding that you do not know.
+The following is a friendly conversation between you and a human.
+""".strip()
+
+CHAT_DEFAULT_TEMPLATE = """Current conversation:
+{history}
+Human: {input}
+AI:"""
+
+
+COMPLETION_SYSTEM_PROMPT = """
+You are an application built to provide helpful code completion suggestions.
+You should only produce code. Keep comments to a minimum, and use the
+programming language's comment syntax. Produce clean code.
+The code is written in JupyterLab, a data analysis and code development
+environment which can execute code extended with additional syntax for
+interactive features, such as magics.
+""".strip()
+
+# only add the suffix bit if present to save input tokens/computation time
+COMPLETION_DEFAULT_TEMPLATE = """
+The document is called `{{filename}}` and written in {{language}}.
+{% if suffix %}
+The code after the completion request is:
+
+```
+{{suffix}}
+```
+{% endif %}
+
+Complete the following code:
+
+```
+{{prefix}}"""
+
+
class EnvAuthStrategy(BaseModel):
"""Require one auth token via an environment variable."""
@@ -266,6 +315,55 @@ def get_prompt_template(self, format) -> PromptTemplate:
else:
return self.prompt_templates["text"] # Default to plain format
+ def get_chat_prompt_template(self) -> PromptTemplate:
+ """
+ Produce a prompt template optimised for chat conversation.
+ The template should take two variables: history and input.
+ """
+ name = self.__class__.name
+ if self.is_chat_provider:
+ return ChatPromptTemplate.from_messages(
+ [
+ SystemMessagePromptTemplate.from_template(
+ CHAT_SYSTEM_PROMPT
+ ).format(provider_name=name, local_model_id=self.model_id),
+ MessagesPlaceholder(variable_name="history"),
+ HumanMessagePromptTemplate.from_template("{input}"),
+ ]
+ )
+ else:
+ return PromptTemplate(
+ input_variables=["history", "input"],
+ template=CHAT_SYSTEM_PROMPT.format(
+ provider_name=name, local_model_id=self.model_id
+ )
+ + "\n\n"
+ + CHAT_DEFAULT_TEMPLATE,
+ )
+
+ def get_completion_prompt_template(self) -> PromptTemplate:
+ """
+ Produce a prompt template optimised for inline code or text completion.
+ The template should take variables: prefix, suffix, language, filename.
+ """
+ if self.is_chat_provider:
+ return ChatPromptTemplate.from_messages(
+ [
+ SystemMessagePromptTemplate.from_template(COMPLETION_SYSTEM_PROMPT),
+ HumanMessagePromptTemplate.from_template(
+ COMPLETION_DEFAULT_TEMPLATE, template_format="jinja2"
+ ),
+ ]
+ )
+ else:
+ return PromptTemplate(
+ input_variables=["prefix", "suffix", "language", "filename"],
+ template=COMPLETION_SYSTEM_PROMPT
+ + "\n\n"
+ + COMPLETION_DEFAULT_TEMPLATE,
+ template_format="jinja2",
+ )
+
@property
def is_chat_provider(self):
return isinstance(self, BaseChatModel)
@@ -364,7 +462,8 @@ def allows_concurrency(self):
class CohereProvider(BaseProvider, Cohere):
id = "cohere"
name = "Cohere"
- models = ["medium", "xlarge"]
+ # Source: https://docs.cohere.com/reference/generate
+ models = ["command", "command-nightly", "command-light", "command-light-nightly"]
model_id_key = "model"
pypi_package_deps = ["cohere"]
auth_strategy = EnvAuthStrategy(name="COHERE_API_KEY")
@@ -536,17 +635,7 @@ async def _acall(self, *args, **kwargs) -> Coroutine[Any, Any, str]:
class OpenAIProvider(BaseProvider, OpenAI):
id = "openai"
name = "OpenAI"
- models = [
- "text-davinci-003",
- "text-davinci-002",
- "text-curie-001",
- "text-babbage-001",
- "text-ada-001",
- "davinci",
- "curie",
- "babbage",
- "ada",
- ]
+ models = ["babbage-002", "davinci-002", "gpt-3.5-turbo-instruct"]
model_id_key = "model_name"
pypi_package_deps = ["openai"]
auth_strategy = EnvAuthStrategy(name="OPENAI_API_KEY")
@@ -569,15 +658,14 @@ class ChatOpenAIProvider(BaseProvider, ChatOpenAI):
name = "OpenAI"
models = [
"gpt-3.5-turbo",
+ "gpt-3.5-turbo-0301", # Deprecated as of 2024-06-13
+ "gpt-3.5-turbo-0613", # Deprecated as of 2024-06-13
+ "gpt-3.5-turbo-1106",
"gpt-3.5-turbo-16k",
- "gpt-3.5-turbo-0301",
- "gpt-3.5-turbo-0613",
- "gpt-3.5-turbo-16k-0613",
+ "gpt-3.5-turbo-16k-0613", # Deprecated as of 2024-06-13
"gpt-4",
- "gpt-4-0314",
"gpt-4-0613",
"gpt-4-32k",
- "gpt-4-32k-0314",
"gpt-4-32k-0613",
"gpt-4-1106-preview",
]
diff --git a/packages/jupyter-ai-magics/package.json b/packages/jupyter-ai-magics/package.json
index 3d6f3da9f..a695d92ed 100644
--- a/packages/jupyter-ai-magics/package.json
+++ b/packages/jupyter-ai-magics/package.json
@@ -1,6 +1,6 @@
{
"name": "@jupyter-ai/magics",
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"description": "Jupyter AI magics Python package. Not published on NPM.",
"private": true,
"homepage": "https://github.com/jupyterlab/jupyter-ai",
diff --git a/packages/jupyter-ai-magics/pyproject.toml b/packages/jupyter-ai-magics/pyproject.toml
index b84eab58a..9fe5c0721 100644
--- a/packages/jupyter-ai-magics/pyproject.toml
+++ b/packages/jupyter-ai-magics/pyproject.toml
@@ -38,7 +38,7 @@ test = ["coverage", "pytest", "pytest-asyncio", "pytest-cov"]
all = [
"ai21",
"anthropic~=0.3.0",
- "cohere",
+ "cohere>4.40,<5",
"gpt4all",
"huggingface_hub",
"ipywidgets",
diff --git a/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py b/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py
index c32e86148..63066ef07 100644
--- a/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py
+++ b/packages/jupyter-ai-module-cookiecutter/{{cookiecutter.labextension_name}}/{{cookiecutter.python_name}}/engine.py
@@ -29,7 +29,7 @@ async def execute(
# prompt = task.prompt_template.format(**prompt_variables)
# openai.api_key = self.api_key
# response = openai.Completion.create(
- # model="text-davinci-003",
+ # model="davinci-002",
# prompt=prompt,
# ...
# )
diff --git a/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py b/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py
index 3a76fba44..584f0b33f 100644
--- a/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py
+++ b/packages/jupyter-ai/jupyter_ai/chat_handlers/default.py
@@ -4,32 +4,9 @@
from jupyter_ai_magics.providers import BaseProvider
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferWindowMemory
-from langchain.prompts import (
- ChatPromptTemplate,
- HumanMessagePromptTemplate,
- MessagesPlaceholder,
- PromptTemplate,
- SystemMessagePromptTemplate,
-)
from .base import BaseChatHandler, SlashCommandRoutingType
-SYSTEM_PROMPT = """
-You are Jupyternaut, a conversational assistant living in JupyterLab to help users.
-You are not a language model, but rather an application built on a foundation model from {provider_name} called {local_model_id}.
-You are talkative and you provide lots of specific details from the foundation model's context.
-You may use Markdown to format your response.
-Code blocks must be formatted in Markdown.
-Math should be rendered with inline TeX markup, surrounded by $.
-If you do not know the answer to a question, answer truthfully by responding that you do not know.
-The following is a friendly conversation between you and a human.
-""".strip()
-
-DEFAULT_TEMPLATE = """Current conversation:
-{history}
-Human: {input}
-AI:"""
-
class DefaultChatHandler(BaseChatHandler):
id = "default"
@@ -49,27 +26,10 @@ def create_llm_chain(
model_parameters = self.get_model_parameters(provider, provider_params)
llm = provider(**provider_params, **model_parameters)
- if llm.is_chat_provider:
- prompt_template = ChatPromptTemplate.from_messages(
- [
- SystemMessagePromptTemplate.from_template(SYSTEM_PROMPT).format(
- provider_name=llm.name, local_model_id=llm.model_id
- ),
- MessagesPlaceholder(variable_name="history"),
- HumanMessagePromptTemplate.from_template("{input}"),
- ]
- )
- self.memory = ConversationBufferWindowMemory(return_messages=True, k=2)
- else:
- prompt_template = PromptTemplate(
- input_variables=["history", "input"],
- template=SYSTEM_PROMPT.format(
- provider_name=llm.name, local_model_id=llm.model_id
- )
- + "\n\n"
- + DEFAULT_TEMPLATE,
- )
- self.memory = ConversationBufferWindowMemory(k=2)
+ prompt_template = llm.get_chat_prompt_template()
+ self.memory = ConversationBufferWindowMemory(
+ return_messages=llm.is_chat_provider, k=2
+ )
self.llm = llm
self.llm_chain = ConversationChain(
diff --git a/packages/jupyter-ai/jupyter_ai/completions/__init__.py b/packages/jupyter-ai/jupyter_ai/completions/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/packages/jupyter-ai/jupyter_ai/completions/handlers/__init__.py b/packages/jupyter-ai/jupyter_ai/completions/handlers/__init__.py
new file mode 100644
index 000000000..f0d412a94
--- /dev/null
+++ b/packages/jupyter-ai/jupyter_ai/completions/handlers/__init__.py
@@ -0,0 +1,4 @@
+from .base import BaseInlineCompletionHandler
+from .default import DefaultInlineCompletionHandler
+
+__all__ = ["BaseInlineCompletionHandler", "DefaultInlineCompletionHandler"]
diff --git a/packages/jupyter-ai/jupyter_ai/completions/handlers/base.py b/packages/jupyter-ai/jupyter_ai/completions/handlers/base.py
new file mode 100644
index 000000000..c52c308db
--- /dev/null
+++ b/packages/jupyter-ai/jupyter_ai/completions/handlers/base.py
@@ -0,0 +1,167 @@
+import json
+import time
+import traceback
+from asyncio import AbstractEventLoop
+from typing import Any, AsyncIterator, Dict, Union
+
+import tornado
+from jupyter_ai.completions.handlers.llm_mixin import LLMHandlerMixin
+from jupyter_ai.completions.models import (
+ CompletionError,
+ InlineCompletionList,
+ InlineCompletionReply,
+ InlineCompletionRequest,
+ InlineCompletionStreamChunk,
+)
+from jupyter_server.base.handlers import JupyterHandler
+from langchain.pydantic_v1 import BaseModel, ValidationError
+
+
+class BaseInlineCompletionHandler(
+ LLMHandlerMixin, JupyterHandler, tornado.websocket.WebSocketHandler
+):
+ """A Tornado WebSocket handler that receives inline completion requests and
+ fulfills them accordingly. This class is instantiated once per WebSocket
+ connection."""
+
+ ##
+ # Interface for subclasses
+ ##
+ async def handle_request(
+ self, message: InlineCompletionRequest
+ ) -> InlineCompletionReply:
+ """
+ Handles an inline completion request, without streaming. Subclasses
+ must define this method and write a reply via `self.write_message()`.
+
+ The method definition does not need to be wrapped in a try/except block.
+ """
+ raise NotImplementedError(
+ "The required method `self.handle_request()` is not defined by this subclass."
+ )
+
+ async def handle_stream_request(
+ self, message: InlineCompletionRequest
+ ) -> AsyncIterator[InlineCompletionStreamChunk]:
+ """
+ Handles an inline completion request, **with streaming**.
+ Implementations may optionally define this method. Implementations that
+ do so should stream replies via successive calls to
+ `self.write_message()`.
+
+ The method definition does not need to be wrapped in a try/except block.
+ """
+ raise NotImplementedError(
+ "The optional method `self.handle_stream_request()` is not defined by this subclass."
+ )
+
+ ##
+ # Definition of base class
+ ##
+ handler_kind = "completion"
+
+ @property
+ def loop(self) -> AbstractEventLoop:
+ return self.settings["jai_event_loop"]
+
+ def write_message(self, message: Union[bytes, str, Dict[str, Any], BaseModel]):
+ """
+ Write a bytes, string, dict, or Pydantic model object to the WebSocket
+ connection. The base definition of this method is provided by Tornado.
+ """
+ if isinstance(message, BaseModel):
+ message = message.dict()
+
+ super().write_message(message)
+
+ def initialize(self):
+ self.log.debug("Initializing websocket connection %s", self.request.path)
+
+ def pre_get(self):
+ """Handles authentication/authorization."""
+ # authenticate the request before opening the websocket
+ user = self.current_user
+ if user is None:
+ self.log.warning("Couldn't authenticate WebSocket connection")
+ raise tornado.web.HTTPError(403)
+
+ # authorize the user.
+ if not self.authorizer.is_authorized(self, user, "execute", "events"):
+ raise tornado.web.HTTPError(403)
+
+ async def get(self, *args, **kwargs):
+ """Get an event socket."""
+ self.pre_get()
+ res = super().get(*args, **kwargs)
+ await res
+
+ async def on_message(self, message):
+ """Public Tornado method that is called when the client sends a message
+        over this connection. This should **not** be overridden by subclasses."""
+
+ # first, verify that the message is an `InlineCompletionRequest`.
+ self.log.debug("Message received: %s", message)
+ try:
+ message = json.loads(message)
+ request = InlineCompletionRequest(**message)
+ except ValidationError as e:
+ self.log.error(e)
+ return
+
+ # next, dispatch the request to the correct handler and create the
+ # `handle_request` coroutine object
+ handle_request = None
+ if request.stream:
+ try:
+ handle_request = self._handle_stream_request(request)
+ except NotImplementedError:
+ self.log.error(
+ "Unable to handle stream request. The current `InlineCompletionHandler` does not implement the `handle_stream_request()` method."
+ )
+ return
+
+ else:
+ handle_request = self._handle_request(request)
+
+ # finally, wrap `handle_request` in an exception handler, and start the
+ # task on the event loop.
+ async def handle_request_and_catch():
+ try:
+ await handle_request
+ except Exception as e:
+ await self.handle_exc(e, request)
+
+ self.loop.create_task(handle_request_and_catch())
+
+ async def handle_exc(self, e: Exception, request: InlineCompletionRequest):
+ """
+ Handles an exception raised in either `handle_request()` or
+ `handle_stream_request()`. This base class provides a default
+ implementation, which may be overridden by subclasses.
+ """
+ error = CompletionError(
+ type=e.__class__.__name__,
+ title=e.args[0] if e.args else "Exception",
+ traceback=traceback.format_exc(),
+ )
+ self.write_message(
+ InlineCompletionReply(
+ list=InlineCompletionList(items=[]),
+ error=error,
+ reply_to=request.number,
+ )
+ )
+
+ async def _handle_request(self, request: InlineCompletionRequest):
+ """Private wrapper around `self.handle_request()`."""
+ start = time.time()
+ await self.handle_request(request)
+ latency_ms = round((time.time() - start) * 1000)
+ self.log.info(f"Inline completion handler resolved in {latency_ms} ms.")
+
+ async def _handle_stream_request(self, request: InlineCompletionRequest):
+ """Private wrapper around `self.handle_stream_request()`."""
+ start = time.time()
+ await self.handle_stream_request(request)
+ latency_ms = round((time.time() - start) * 1000)
+ self.log.info(f"Inline completion streaming completed in {latency_ms} ms.")
diff --git a/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py b/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
new file mode 100644
index 000000000..552d23791
--- /dev/null
+++ b/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
@@ -0,0 +1,151 @@
+from typing import Dict, Type
+
+from jupyter_ai_magics.providers import BaseProvider
+from langchain.prompts import (
+ ChatPromptTemplate,
+ HumanMessagePromptTemplate,
+ PromptTemplate,
+ SystemMessagePromptTemplate,
+)
+from langchain.schema.output_parser import StrOutputParser
+from langchain.schema.runnable import Runnable
+
+from ..models import (
+ InlineCompletionList,
+ InlineCompletionReply,
+ InlineCompletionRequest,
+ InlineCompletionStreamChunk,
+)
+from .base import BaseInlineCompletionHandler
+
+
+class DefaultInlineCompletionHandler(BaseInlineCompletionHandler):
+ llm_chain: Runnable
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+
+ def create_llm_chain(
+ self, provider: Type[BaseProvider], provider_params: Dict[str, str]
+ ):
+ model_parameters = self.get_model_parameters(provider, provider_params)
+ llm = provider(**provider_params, **model_parameters)
+
+ prompt_template = llm.get_completion_prompt_template()
+
+ self.llm = llm
+ self.llm_chain = prompt_template | llm | StrOutputParser()
+
+ async def handle_request(self, request: InlineCompletionRequest) -> None:
+ """Handles an inline completion request without streaming."""
+ self.get_llm_chain()
+ model_arguments = self._template_inputs_from_request(request)
+ suggestion = await self.llm_chain.ainvoke(input=model_arguments)
+ suggestion = self._post_process_suggestion(suggestion, request)
+ self.write_message(
+ InlineCompletionReply(
+ list=InlineCompletionList(items=[{"insertText": suggestion}]),
+ reply_to=request.number,
+ )
+ )
+
+ def _write_incomplete_reply(self, request: InlineCompletionRequest):
+ """Writes an incomplete `InlineCompletionReply`, indicating to the
+        client that LLM output is about to be streamed across this connection.
+ Should be called first in `self.handle_stream_request()`."""
+
+ token = self._token_from_request(request, 0)
+ reply = InlineCompletionReply(
+ list=InlineCompletionList(
+ items=[
+ {
+ # insert text starts empty as we do not pre-generate any part
+ "insertText": "",
+ "isIncomplete": True,
+ "token": token,
+ }
+ ]
+ ),
+ reply_to=request.number,
+ )
+ self.write_message(reply)
+
+ async def handle_stream_request(self, request: InlineCompletionRequest):
+ # first, send empty initial reply.
+ self._write_incomplete_reply(request)
+
+ # then, generate and stream LLM output over this connection.
+ self.get_llm_chain()
+ token = self._token_from_request(request, 0)
+ model_arguments = self._template_inputs_from_request(request)
+ suggestion = ""
+
+ async for fragment in self.llm_chain.astream(input=model_arguments):
+ suggestion += fragment
+ if suggestion.startswith("```"):
+ if "\n" not in suggestion:
+ # we are not ready to apply post-processing
+ continue
+ else:
+ suggestion = self._post_process_suggestion(suggestion, request)
+ self.write_message(
+ InlineCompletionStreamChunk(
+ type="stream",
+ response={"insertText": suggestion, "token": token},
+ reply_to=request.number,
+ done=False,
+ )
+ )
+
+ # finally, send a message confirming that we are done
+ self.write_message(
+ InlineCompletionStreamChunk(
+ type="stream",
+ response={"insertText": suggestion, "token": token},
+ reply_to=request.number,
+ done=True,
+ )
+ )
+
+ def _token_from_request(self, request: InlineCompletionRequest, suggestion: int):
+ """Generate a deterministic token (for matching streamed messages)
+ using request number and suggestion number"""
+ return f"t{request.number}s{suggestion}"
+
+ def _template_inputs_from_request(self, request: InlineCompletionRequest) -> Dict:
+ suffix = request.suffix.strip()
+ filename = request.path.split("/")[-1] if request.path else "untitled"
+
+ return {
+ "prefix": request.prefix,
+ "suffix": suffix,
+ "language": request.language,
+ "filename": filename,
+ "stop": ["\n```"],
+ }
+
+ def _post_process_suggestion(
+ self, suggestion: str, request: InlineCompletionRequest
+ ) -> str:
+ """Remove spurious fragments from the suggestion.
+
+        While most models (especially instruct and infill models) do not require
+        any post-processing, some models, such as gpt-4, which only have chat APIs
+ may require removing spurious fragments. This function uses heuristics
+ and request data to remove such fragments.
+ """
+ # gpt-4 tends to add "```python" or similar
+ language = request.language or "python"
+ markdown_identifiers = {"ipython": ["ipython", "python", "py"]}
+ bad_openings = [
+ f"```{identifier}"
+ for identifier in markdown_identifiers.get(language, [language])
+ ] + ["```"]
+ for opening in bad_openings:
+ if suggestion.startswith(opening):
+ suggestion = suggestion[len(opening) :].lstrip()
+ # check for the prefix inclusion (only if there was a bad opening)
+ if suggestion.startswith(request.prefix):
+ suggestion = suggestion[len(request.prefix) :]
+ break
+ return suggestion
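
To make the heuristic above concrete, here is an illustrative (made-up) input/output pair: a chat-only model answering a Python completion request may wrap its answer in a Markdown fence and echo the prefix, both of which `_post_process_suggestion()` strips; the trailing fence is already avoided by the stop sequence set in `_template_inputs_from_request()`.

```python
# Hypothetical values, for illustration only.
raw_suggestion = "```python\nprint('hello, world')\n"
# With request.language == "python" and request.prefix == "print(", the handler
# removes the "```python" opening, then the echoed prefix, leaving:
#   "'hello, world')\n"
```
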
diff --git a/packages/jupyter-ai/jupyter_ai/completions/handlers/llm_mixin.py b/packages/jupyter-ai/jupyter_ai/completions/handlers/llm_mixin.py
new file mode 100644
index 000000000..da8765b39
--- /dev/null
+++ b/packages/jupyter-ai/jupyter_ai/completions/handlers/llm_mixin.py
@@ -0,0 +1,69 @@
+from typing import Any, Dict, Type
+
+from jupyter_ai.config_manager import ConfigManager
+from jupyter_ai_magics.providers import BaseProvider
+
+
+class LLMHandlerMixin:
+ """Base class containing shared methods and attributes used by LLM handler classes."""
+
+ # This could be used to derive `BaseChatHandler` too (there is a lot of duplication!),
+    # but that was decided against to avoid introducing conflicts when backporting to 1.x.
+
+ handler_kind: str
+
+ @property
+ def config_manager(self) -> ConfigManager:
+ return self.settings["jai_config_manager"]
+
+ @property
+ def model_parameters(self) -> Dict[str, Dict[str, Any]]:
+ return self.settings["model_parameters"]
+
+ def __init__(self, *args, **kwargs):
+ super().__init__(*args, **kwargs)
+ self.llm = None
+ self.llm_params = None
+ self.llm_chain = None
+
+ def get_llm_chain(self):
+ lm_provider = self.config_manager.lm_provider
+ lm_provider_params = self.config_manager.lm_provider_params
+
+ curr_lm_id = (
+ f'{self.llm.id}:{lm_provider_params["model_id"]}' if self.llm else None
+ )
+ next_lm_id = (
+ f'{lm_provider.id}:{lm_provider_params["model_id"]}'
+ if lm_provider
+ else None
+ )
+
+ if not lm_provider or not lm_provider_params:
+ return None
+
+ if curr_lm_id != next_lm_id:
+ self.log.info(
+ f"Switching {self.handler_kind} language model from {curr_lm_id} to {next_lm_id}."
+ )
+ self.create_llm_chain(lm_provider, lm_provider_params)
+ elif self.llm_params != lm_provider_params:
+ self.log.info(
+ f"{self.handler_kind} model params changed, updating the llm chain."
+ )
+ self.create_llm_chain(lm_provider, lm_provider_params)
+
+ self.llm_params = lm_provider_params
+ return self.llm_chain
+
+ def get_model_parameters(
+ self, provider: Type[BaseProvider], provider_params: Dict[str, str]
+ ):
+ return self.model_parameters.get(
+ f"{provider.id}:{provider_params['model_id']}", {}
+ )
+
+ def create_llm_chain(
+ self, provider: Type[BaseProvider], provider_params: Dict[str, str]
+ ):
+ raise NotImplementedError("Should be implemented by subclasses")
diff --git a/packages/jupyter-ai/jupyter_ai/completions/models.py b/packages/jupyter-ai/jupyter_ai/completions/models.py
new file mode 100644
index 000000000..507365408
--- /dev/null
+++ b/packages/jupyter-ai/jupyter_ai/completions/models.py
@@ -0,0 +1,71 @@
+from typing import List, Literal, Optional
+
+from langchain.pydantic_v1 import BaseModel
+
+
+class InlineCompletionRequest(BaseModel):
+ """Message send by client to request inline completions.
+
+ Prefix/suffix implementation is used to avoid the need for synchronising
+ the notebook state at every key press (subject to change in future)."""
+
+ # unique message ID generated by the client used to identify replies and
+ # to easily discard replies for older requests
+ number: int
+ # prefix should include full text of the current cell preceding the cursor
+ prefix: str
+    # suffix should include full text of the current cell following the cursor
+ suffix: str
+ # media type for the current language, e.g. `text/x-python`
+ mime: str
+ # whether to stream the response (if supported by the model)
+ stream: bool
+    # path to the notebook or file for which the completions are generated
+ path: Optional[str]
+ # language inferred from the document mime type (if possible)
+ language: Optional[str]
+ # identifier of the cell for which the completions are generated if in a notebook
+ # previous cells and following cells can be used to learn the wider context
+ cell_id: Optional[str]
+
+
+class InlineCompletionItem(BaseModel):
+ """The inline completion suggestion to be displayed on the frontend.
+
+    See the JupyterLab `InlineCompletionItem` documentation for details.
+ """
+
+ insertText: str
+ filterText: Optional[str]
+ isIncomplete: Optional[bool]
+ token: Optional[str]
+
+
+class CompletionError(BaseModel):
+ type: str
+ traceback: str
+
+
+class InlineCompletionList(BaseModel):
+ """Reflection of JupyterLab's `IInlineCompletionList`."""
+
+ items: List[InlineCompletionItem]
+
+
+class InlineCompletionReply(BaseModel):
+ """Message sent from model to client with the infill suggestions"""
+
+ list: InlineCompletionList
+ # number of request for which we are replying
+ reply_to: int
+ error: Optional[CompletionError]
+
+
+class InlineCompletionStreamChunk(BaseModel):
+ """Message sent from model to client with the infill suggestions"""
+
+ type: Literal["stream"] = "stream"
+ response: InlineCompletionItem
+ reply_to: int
+ done: bool
+ error: Optional[CompletionError]
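
A small, hedged sketch of how these models are used on the `api/ai/completion/inline` WebSocket (all field values below are made up for illustration):

```python
from jupyter_ai.completions.models import (
    InlineCompletionList,
    InlineCompletionReply,
    InlineCompletionRequest,
)

request = InlineCompletionRequest(
    number=1,
    prefix="def fib(n):\n    ",
    suffix="",
    mime="text/x-python",
    stream=True,
    path="example.ipynb",
    language="python",
    cell_id=None,
)
# An initial streaming reply starts with an empty, incomplete item plus a token
# that later stream chunks refer back to.
reply = InlineCompletionReply(
    list=InlineCompletionList(
        items=[{"insertText": "", "isIncomplete": True, "token": "t1s0"}]
    ),
    reply_to=request.number,
)
print(request.dict())
print(reply.dict())
```
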
diff --git a/packages/jupyter-ai/jupyter_ai/extension.py b/packages/jupyter-ai/jupyter_ai/extension.py
index a9385801a..e3958fc7b 100644
--- a/packages/jupyter-ai/jupyter_ai/extension.py
+++ b/packages/jupyter-ai/jupyter_ai/extension.py
@@ -18,6 +18,7 @@
LearnChatHandler,
)
from .chat_handlers.help import HelpMessage
+from .completions.handlers import DefaultInlineCompletionHandler
from .config_manager import ConfigManager
from .handlers import (
ApiKeysHandler,
@@ -38,6 +39,7 @@ class AiExtension(ExtensionApp):
(r"api/ai/chats/history?", ChatHistoryHandler),
(r"api/ai/providers?", ModelProviderHandler),
(r"api/ai/providers/embeddings?", EmbeddingsModelProviderHandler),
+ (r"api/ai/completion/inline/?", DefaultInlineCompletionHandler),
]
allowed_providers = List(
@@ -169,12 +171,17 @@ def initialize_settings(self):
self.settings["dask_client_future"] = loop.create_task(self._get_dask_client())
eps = entry_points()
- # initialize chat handlers
- chat_handler_eps = eps.select(group="jupyter_ai.chat_handlers")
- chat_handler_kwargs = {
+ common_handler_kargs = {
"log": self.log,
"config_manager": self.settings["jai_config_manager"],
+ "model_parameters": self.settings["model_parameters"],
+ }
+
+ # initialize chat handlers
+ chat_handler_eps = eps.select(group="jupyter_ai.chat_handlers")
+ chat_handler_kwargs = {
+ **common_handler_kargs,
"root_chat_handlers": self.settings["jai_root_chat_handlers"],
"chat_history": self.settings["chat_history"],
"root_dir": self.serverapp.root_dir,
diff --git a/packages/jupyter-ai/jupyter_ai/handlers.py b/packages/jupyter-ai/jupyter_ai/handlers.py
index 4707c68dc..0f02e8a6c 100644
--- a/packages/jupyter-ai/jupyter_ai/handlers.py
+++ b/packages/jupyter-ai/jupyter_ai/handlers.py
@@ -15,6 +15,7 @@
from tornado import web, websocket
from tornado.web import HTTPError
+from .completions.models import InlineCompletionRequest
from .models import (
AgentChatMessage,
ChatClient,
diff --git a/packages/jupyter-ai/jupyter_ai/tests/completions/__init__.py b/packages/jupyter-ai/jupyter_ai/tests/completions/__init__.py
new file mode 100644
index 000000000..e69de29bb
diff --git a/packages/jupyter-ai/jupyter_ai/tests/completions/test_handlers.py b/packages/jupyter-ai/jupyter_ai/tests/completions/test_handlers.py
new file mode 100644
index 000000000..1b950af74
--- /dev/null
+++ b/packages/jupyter-ai/jupyter_ai/tests/completions/test_handlers.py
@@ -0,0 +1,116 @@
+import json
+from types import SimpleNamespace
+
+from jupyter_ai.completions.handlers.default import DefaultInlineCompletionHandler
+from jupyter_ai.completions.models import InlineCompletionRequest
+from jupyter_ai_magics import BaseProvider
+from langchain_community.llms import FakeListLLM
+from pytest import fixture
+from tornado.httputil import HTTPServerRequest
+from tornado.web import Application
+
+
+class MockProvider(BaseProvider, FakeListLLM):
+ id = "my_provider"
+ name = "My Provider"
+ model_id_key = "model"
+ models = ["model"]
+
+ def __init__(self, **kwargs):
+ kwargs["responses"] = ["Test response"]
+ super().__init__(**kwargs)
+
+
+class MockCompletionHandler(DefaultInlineCompletionHandler):
+ def __init__(self):
+ self.request = HTTPServerRequest()
+ self.application = Application()
+ self.messages = []
+ self.tasks = []
+ self.settings["jai_config_manager"] = SimpleNamespace(
+ lm_provider=MockProvider, lm_provider_params={"model_id": "model"}
+ )
+ self.settings["jai_event_loop"] = SimpleNamespace(
+ create_task=lambda x: self.tasks.append(x)
+ )
+ self.settings["model_parameters"] = {}
+ self.llm_params = {}
+ self.create_llm_chain(MockProvider, {"model_id": "model"})
+
+ def write_message(self, message: str) -> None: # type: ignore
+ self.messages.append(message)
+
+ async def handle_exc(self, e: Exception, _request: InlineCompletionRequest):
+        # raise all exceptions during testing rather than swallowing them
+ raise e
+
+
+@fixture
+def inline_handler() -> MockCompletionHandler:
+ return MockCompletionHandler()
+
+
+async def test_on_message(inline_handler):
+ request = InlineCompletionRequest(
+ number=1, prefix="", suffix="", mime="", stream=False
+ )
+ # Test end to end, without checking details of the replies,
+ # which are tested in appropriate method unit tests.
+ await inline_handler.on_message(json.dumps(dict(request)))
+ assert len(inline_handler.tasks) == 1
+ await inline_handler.tasks[0]
+ assert len(inline_handler.messages) == 1
+
+
+async def test_on_message_stream(inline_handler):
+ stream_request = InlineCompletionRequest(
+ number=1, prefix="", suffix="", mime="", stream=True
+ )
+ # Test end to end, without checking details of the replies,
+ # which are tested in appropriate method unit tests.
+ await inline_handler.on_message(json.dumps(dict(stream_request)))
+ assert len(inline_handler.tasks) == 1
+ await inline_handler.tasks[0]
+ assert len(inline_handler.messages) == 3
+
+
+async def test_handle_request(inline_handler):
+ dummy_request = InlineCompletionRequest(
+ number=1, prefix="", suffix="", mime="", stream=False
+ )
+ await inline_handler.handle_request(dummy_request)
+ # should write a single reply
+ assert len(inline_handler.messages) == 1
+ # reply should contain a single suggestion
+ suggestions = inline_handler.messages[0].list.items
+ assert len(suggestions) == 1
+ # the suggestion should include insert text from LLM
+ assert suggestions[0].insertText == "Test response"
+
+
+async def test_handle_stream_request(inline_handler):
+ inline_handler.llm_chain = FakeListLLM(responses=["test"])
+ dummy_request = InlineCompletionRequest(
+ number=1, prefix="", suffix="", mime="", stream=True
+ )
+ await inline_handler.handle_stream_request(dummy_request)
+
+ # should write three replies
+ assert len(inline_handler.messages) == 3
+
+ # first reply should be empty to start the stream
+ first = inline_handler.messages[0].list.items[0]
+ assert first.insertText == ""
+ assert first.isIncomplete == True
+
+ # second reply should be a chunk containing the token
+ second = inline_handler.messages[1]
+ assert second.type == "stream"
+ assert second.response.insertText == "Test response"
+ assert second.done == False
+
+ # third reply should be a closing chunk
+ third = inline_handler.messages[2]
+ assert third.type == "stream"
+ assert third.response.insertText == "Test response"
+ assert third.done == True
diff --git a/packages/jupyter-ai/package.json b/packages/jupyter-ai/package.json
index 0dda0870f..3e684c021 100644
--- a/packages/jupyter-ai/package.json
+++ b/packages/jupyter-ai/package.json
@@ -1,6 +1,6 @@
{
"name": "@jupyter-ai/core",
- "version": "2.9.0",
+ "version": "2.10.0-beta.1",
"description": "A generative AI extension for JupyterLab",
"keywords": [
"jupyter",
@@ -63,13 +63,17 @@
"@emotion/styled": "^11.10.5",
"@jupyter/collaboration": "^1",
"@jupyterlab/application": "^4",
+ "@jupyterlab/apputils": "^4",
"@jupyterlab/cells": "^4",
"@jupyterlab/codeeditor": "^4",
"@jupyterlab/codemirror": "^4",
+ "@jupyterlab/completer": "^4",
"@jupyterlab/coreutils": "^6",
+ "@jupyterlab/docregistry": "^4",
"@jupyterlab/fileeditor": "^4",
"@jupyterlab/notebook": "^4",
"@jupyterlab/services": "^7",
+ "@jupyterlab/settingregistry": "^4",
"@jupyterlab/ui-components": "^4",
"@mui/icons-material": "^5.11.0",
"@mui/material": "^5.11.0",
@@ -103,7 +107,7 @@
"stylelint-config-standard": "~24.0.0",
"stylelint-prettier": "^2.0.0",
"ts-jest": "^29",
- "typescript": "~4.3.0",
+ "typescript": "~4.9.0",
"y-protocols": "^1.0.5"
},
"sideEffects": [
diff --git a/packages/jupyter-ai/src/completions/handler.ts b/packages/jupyter-ai/src/completions/handler.ts
new file mode 100644
index 000000000..6d9d4dbdd
--- /dev/null
+++ b/packages/jupyter-ai/src/completions/handler.ts
@@ -0,0 +1,149 @@
+import { IDisposable } from '@lumino/disposable';
+import { PromiseDelegate } from '@lumino/coreutils';
+import { ServerConnection } from '@jupyterlab/services';
+import { URLExt } from '@jupyterlab/coreutils';
+import { AiCompleterService as AiService } from './types';
+import { Signal, ISignal } from '@lumino/signaling';
+
+const SERVICE_URL = 'api/ai/completion/inline';
+
+type StreamChunk = AiService.InlineCompletionStreamChunk;
+
+export class CompletionWebsocketHandler implements IDisposable {
+ /**
+ * The server settings used to make API requests.
+ */
+ readonly serverSettings: ServerConnection.ISettings;
+
+ /**
+ * Create a new completion handler.
+ */
+ constructor(options: AiService.IOptions = {}) {
+ this.serverSettings =
+ options.serverSettings ?? ServerConnection.makeSettings();
+ }
+
+ /**
+ * Initializes the WebSocket connection to the completion backend. Promise is
+ * resolved when server acknowledges connection and sends the client ID. This
+ * must be awaited before calling any other method.
+ */
+  public async initialize(): Promise<void> {
+ await this._initialize();
+ }
+
+ /**
+   * Sends a message across the WebSocket. The returned promise resolves to the
+   * server's reply for this message once it is received.
+ */
+ public sendMessage(
+ message: AiService.InlineCompletionRequest
+  ): Promise<AiService.InlineCompletionReply> {
+ return new Promise(resolve => {
+ this._socket?.send(JSON.stringify(message));
+ this._replyForResolver[message.number] = resolve;
+ });
+ }
+
+ /**
+ * Signal emitted when a new chunk of completion is streamed.
+ */
+  get streamed(): ISignal<CompletionWebsocketHandler, StreamChunk> {
+ return this._streamed;
+ }
+
+ /**
+ * Whether the completion handler is disposed.
+ */
+ get isDisposed(): boolean {
+ return this._isDisposed;
+ }
+
+ /**
+ * Dispose the completion handler.
+ */
+ dispose(): void {
+ if (this.isDisposed) {
+ return;
+ }
+ this._isDisposed = true;
+
+ // Clean up socket.
+ const socket = this._socket;
+ if (socket) {
+ this._socket = null;
+ socket.onopen = () => undefined;
+ socket.onerror = () => undefined;
+ socket.onmessage = () => undefined;
+ socket.onclose = () => undefined;
+ socket.close();
+ }
+ }
+
+ private _onMessage(message: AiService.CompleterMessage): void {
+ switch (message.type) {
+ case 'connection': {
+ this._initialized.resolve();
+ break;
+ }
+ case 'stream': {
+ this._streamed.emit(message);
+ break;
+ }
+ default: {
+ if (message.reply_to in this._replyForResolver) {
+ this._replyForResolver[message.reply_to](message);
+ delete this._replyForResolver[message.reply_to];
+ } else {
+ console.warn('Unhandled message', message);
+ }
+ break;
+ }
+ }
+ }
+
+ /**
+ * Dictionary mapping message IDs to Promise resolvers.
+ */
+ private _replyForResolver: Record<
+ number,
+ (value: AiService.InlineCompletionReply) => void
+ > = {};
+
+ private _onClose(e: CloseEvent, reject: any) {
+ reject(new Error('Inline completion websocket disconnected'));
+ console.error('Inline completion websocket disconnected');
+ // only attempt re-connect if there was an abnormal closure
+ // WebSocket status codes defined in RFC 6455: https://www.rfc-editor.org/rfc/rfc6455.html#section-7.4.1
+ if (e.code === 1006) {
+ const delaySeconds = 1;
+ console.info(`Will try to reconnect in ${delaySeconds} s.`);
+ setTimeout(async () => await this._initialize(), delaySeconds * 1000);
+ }
+ }
+
+  private async _initialize(): Promise<void> {
+ if (this.isDisposed) {
+ return;
+ }
+    const promise = new PromiseDelegate<void>();
+ this._initialized = promise;
+ console.log(
+ 'Creating a new websocket connection for inline completions...'
+ );
+ const { token, WebSocket, wsUrl } = this.serverSettings;
+ const url =
+ URLExt.join(wsUrl, SERVICE_URL) +
+ (token ? `?token=${encodeURIComponent(token)}` : '');
+
+ const socket = (this._socket = new WebSocket(url));
+ socket.onclose = e => this._onClose(e, promise.reject);
+ socket.onerror = e => promise.reject(e);
+ socket.onmessage = msg => msg.data && this._onMessage(JSON.parse(msg.data));
+ }
+
+ private _isDisposed = false;
+ private _socket: WebSocket | null = null;
+  private _streamed = new Signal<CompletionWebsocketHandler, StreamChunk>(this);
+  private _initialized: PromiseDelegate<void> = new PromiseDelegate<void>();
+}
diff --git a/packages/jupyter-ai/src/completions/index.ts b/packages/jupyter-ai/src/completions/index.ts
new file mode 100644
index 000000000..507f37c19
--- /dev/null
+++ b/packages/jupyter-ai/src/completions/index.ts
@@ -0,0 +1 @@
+export { completionPlugin } from './plugin';
diff --git a/packages/jupyter-ai/src/completions/plugin.ts b/packages/jupyter-ai/src/completions/plugin.ts
new file mode 100644
index 000000000..835561143
--- /dev/null
+++ b/packages/jupyter-ai/src/completions/plugin.ts
@@ -0,0 +1,190 @@
+import {
+ JupyterFrontEnd,
+ JupyterFrontEndPlugin
+} from '@jupyterlab/application';
+import { ICompletionProviderManager } from '@jupyterlab/completer';
+import { ISettingRegistry } from '@jupyterlab/settingregistry';
+import {
+ IEditorLanguageRegistry,
+ IEditorLanguage
+} from '@jupyterlab/codemirror';
+import { getEditor } from '../selection-watcher';
+import { IJaiStatusItem } from '../tokens';
+import { displayName, JaiInlineProvider } from './provider';
+import { CompletionWebsocketHandler } from './handler';
+
+export namespace CommandIDs {
+ /**
+ * Command to toggle completions globally.
+ */
+ export const toggleCompletions = 'jupyter-ai:toggle-completions';
+ /**
+ * Command to toggle completions for specific language.
+ */
+ export const toggleLanguageCompletions =
+ 'jupyter-ai:toggle-language-completions';
+}
+
+const INLINE_COMPLETER_PLUGIN =
+ '@jupyterlab/completer-extension:inline-completer';
+
+/**
+ * Type of the settings object for the inline completer plugin.
+ */
+type IcPluginSettings = ISettingRegistry.ISettings & {
+ user: {
+ providers?: {
+ [key: string]: unknown;
+ [JaiInlineProvider.ID]?: JaiInlineProvider.ISettings;
+ };
+ };
+ composite: {
+ providers: {
+ [key: string]: unknown;
+ [JaiInlineProvider.ID]: JaiInlineProvider.ISettings;
+ };
+ };
+};
+
+export const completionPlugin: JupyterFrontEndPlugin<void> = {
+ id: 'jupyter_ai:inline-completions',
+ autoStart: true,
+ requires: [
+ ICompletionProviderManager,
+ IEditorLanguageRegistry,
+ ISettingRegistry
+ ],
+ optional: [IJaiStatusItem],
+ activate: async (
+ app: JupyterFrontEnd,
+ completionManager: ICompletionProviderManager,
+ languageRegistry: IEditorLanguageRegistry,
+ settingRegistry: ISettingRegistry,
+ statusItem: IJaiStatusItem | null
+  ): Promise<void> => {
+ if (typeof completionManager.registerInlineProvider === 'undefined') {
+ // Gracefully short-circuit on JupyterLab 4.0 and Notebook 7.0
+ console.warn(
+ 'Inline completions are only supported in JupyterLab 4.1+ and Jupyter Notebook 7.1+'
+ );
+ return;
+ }
+
+ const completionHandler = new CompletionWebsocketHandler();
+ const provider = new JaiInlineProvider({
+ completionHandler,
+ languageRegistry
+ });
+
+ await completionHandler.initialize();
+ completionManager.registerInlineProvider(provider);
+
+ const findCurrentLanguage = (): IEditorLanguage | null => {
+ const widget = app.shell.currentWidget;
+ const editor = getEditor(widget);
+ if (!editor) {
+ return null;
+ }
+ return languageRegistry.findByMIME(editor.model.mimeType);
+ };
+
+ // ic := inline completion
+ async function getIcSettings() {
+ return (await settingRegistry.load(
+ INLINE_COMPLETER_PLUGIN
+ )) as IcPluginSettings;
+ }
+
+ /**
+ * Gets the composite settings for the Jupyter AI inline completion provider
+ * (JaiIcp).
+ *
+ * This reads from the `ISettings.composite` property, which merges the user
+ * settings with the provider defaults, defined in
+ * `JaiInlineProvider.DEFAULT_SETTINGS`.
+ */
+ async function getJaiIcpSettings() {
+ const icSettings = await getIcSettings();
+ return icSettings.composite.providers[JaiInlineProvider.ID];
+ }
+
+ /**
+ * Updates the JaiIcp user settings.
+ */
+ async function updateJaiIcpSettings(
+    newJaiIcpSettings: Partial<JaiInlineProvider.ISettings>
+ ) {
+ const icSettings = await getIcSettings();
+ const oldUserIcpSettings = icSettings.user.providers;
+ const newUserIcpSettings = {
+ ...oldUserIcpSettings,
+ [JaiInlineProvider.ID]: {
+ ...oldUserIcpSettings?.[JaiInlineProvider.ID],
+ ...newJaiIcpSettings
+ }
+ };
+ icSettings.set('providers', newUserIcpSettings);
+ }
+
+ app.commands.addCommand(CommandIDs.toggleCompletions, {
+ execute: async () => {
+ const jaiIcpSettings = await getJaiIcpSettings();
+ updateJaiIcpSettings({
+ enabled: !jaiIcpSettings.enabled
+ });
+ },
+ label: 'Enable completions by Jupyternaut',
+ isToggled: () => {
+ return provider.isEnabled();
+ }
+ });
+
+ app.commands.addCommand(CommandIDs.toggleLanguageCompletions, {
+ execute: async () => {
+ const jaiIcpSettings = await getJaiIcpSettings();
+ const language = findCurrentLanguage();
+ if (!language) {
+ return;
+ }
+
+ const disabledLanguages = [...jaiIcpSettings.disabledLanguages];
+ const newDisabledLanguages = disabledLanguages.includes(language.name)
+ ? disabledLanguages.filter(l => l !== language.name)
+ : disabledLanguages.concat(language.name);
+
+ updateJaiIcpSettings({
+ disabledLanguages: newDisabledLanguages
+ });
+ },
+ label: () => {
+ const language = findCurrentLanguage();
+ return language
+ ? `Disable completions in ${displayName(language)}`
+ : 'Disable completions in files';
+ },
+ isToggled: () => {
+ const language = findCurrentLanguage();
+ return !!language && !provider.isLanguageEnabled(language.name);
+ },
+ isVisible: () => {
+ const language = findCurrentLanguage();
+ return !!language;
+ },
+ isEnabled: () => {
+ const language = findCurrentLanguage();
+ return !!language && provider.isEnabled();
+ }
+ });
+
+ if (statusItem) {
+ statusItem.addItem({
+ command: CommandIDs.toggleCompletions,
+ rank: 1
+ });
+ statusItem.addItem({
+ command: CommandIDs.toggleLanguageCompletions,
+ rank: 2
+ });
+ }
+ }
+};
diff --git a/packages/jupyter-ai/src/completions/provider.ts b/packages/jupyter-ai/src/completions/provider.ts
new file mode 100644
index 000000000..b1e0f6a89
--- /dev/null
+++ b/packages/jupyter-ai/src/completions/provider.ts
@@ -0,0 +1,288 @@
+import {
+ InlineCompletionTriggerKind,
+ IInlineCompletionProvider,
+ IInlineCompletionContext,
+ IInlineCompletionList,
+ IInlineCompletionItem,
+ CompletionHandler
+} from '@jupyterlab/completer';
+import { ISettingRegistry } from '@jupyterlab/settingregistry';
+import { Notification, showErrorMessage } from '@jupyterlab/apputils';
+import { JSONValue, PromiseDelegate } from '@lumino/coreutils';
+import {
+ IEditorLanguageRegistry,
+ IEditorLanguage
+} from '@jupyterlab/codemirror';
+import { NotebookPanel } from '@jupyterlab/notebook';
+import { AiCompleterService as AiService } from './types';
+import { DocumentWidget } from '@jupyterlab/docregistry';
+import { jupyternautIcon } from '../icons';
+import { CompletionWebsocketHandler } from './handler';
+
+type StreamChunk = AiService.InlineCompletionStreamChunk;
+
+/**
+ * Format the language name nicely.
+ */
+export function displayName(language: IEditorLanguage): string {
+ if (language.name === 'ipythongfm') {
+ return 'Markdown (IPython)';
+ }
+ if (language.name === 'ipython') {
+ return 'IPython';
+ }
+ return language.displayName ?? language.name;
+}
+
+export class JaiInlineProvider implements IInlineCompletionProvider {
+ readonly identifier = JaiInlineProvider.ID;
+ readonly icon = jupyternautIcon.bindprops({ width: 16, top: 1 });
+
+ constructor(protected options: JaiInlineProvider.IOptions) {
+ options.completionHandler.streamed.connect(this._receiveStreamChunk, this);
+ }
+
+ get name(): string {
+ return 'JupyterAI';
+ }
+
+ async fetch(
+ request: CompletionHandler.IRequest,
+ context: IInlineCompletionContext
+  ): Promise<IInlineCompletionList<IInlineCompletionItem>> {
+ const mime = request.mimeType ?? 'text/plain';
+ const language = this.options.languageRegistry.findByMIME(mime);
+ if (!language) {
+ console.warn(
+ `Could not recognise language for ${mime} - cannot complete`
+ );
+ return { items: [] };
+ }
+ if (!this.isLanguageEnabled(language?.name)) {
+ // Do not offer suggestions if disabled.
+ return { items: [] };
+ }
+ let cellId = undefined;
+ let path = context.session?.path;
+ if (context.widget instanceof NotebookPanel) {
+ const activeCell = context.widget.content.activeCell;
+ if (activeCell) {
+ cellId = activeCell.model.id;
+ }
+ }
+ if (!path && context.widget instanceof DocumentWidget) {
+ path = context.widget.context.path;
+ }
+ const number = ++this._counter;
+
+ const streamPreference = this._settings.streaming;
+ const stream =
+ streamPreference === 'always'
+ ? true
+ : streamPreference === 'never'
+ ? false
+ : context.triggerKind === InlineCompletionTriggerKind.Invoke;
+
+ if (stream) {
+ // Reset stream promises handler
+ this._streamPromises.clear();
+ }
+ const result = await this.options.completionHandler.sendMessage({
+      path,
+ mime,
+ prefix: this._prefixFromRequest(request),
+ suffix: this._suffixFromRequest(request),
+ language: this._resolveLanguage(language),
+ number,
+ stream,
+ cell_id: cellId
+ });
+
+ const error = result.error;
+ if (error) {
+ Notification.emit(`Inline completion failed: ${error.type}`, 'error', {
+ autoClose: false,
+ actions: [
+ {
+ label: 'Show Traceback',
+ callback: () => {
+ showErrorMessage('Inline completion failed on the server side', {
+ message: error.traceback
+ });
+ }
+ }
+ ]
+ });
+ throw new Error(
+ `Inline completion failed: ${error.type}\n${error.traceback}`
+ );
+ }
+ return result.list;
+ }
+
+ /**
+ * Stream a reply for completion identified by given `token`.
+ */
+  async *stream(token: string): AsyncGenerator<StreamChunk> {
+ let done = false;
+ while (!done) {
+      const delegate = new PromiseDelegate<StreamChunk>();
+ this._streamPromises.set(token, delegate);
+ const promise = delegate.promise;
+ yield promise;
+ done = (await promise).done;
+ }
+ }
+
+ get schema(): ISettingRegistry.IProperty {
+ const knownLanguages = this.options.languageRegistry.getLanguages();
+ return {
+ properties: {
+ maxPrefix: {
+ title: 'Maximum prefix length',
+ minimum: 1,
+ type: 'number',
+ description:
+ 'At most how many prefix characters should be provided to the model.'
+ },
+ maxSuffix: {
+ title: 'Maximum suffix length',
+ minimum: 0,
+ type: 'number',
+ description:
+ 'At most how many suffix characters should be provided to the model.'
+ },
+ disabledLanguages: {
+ title: 'Disabled languages',
+ type: 'array',
+ items: {
+ type: 'string',
+ oneOf: knownLanguages.map(language => {
+ return { const: language.name, title: displayName(language) };
+ })
+ },
+ description:
+ 'Languages for which the completions should not be shown.'
+ },
+ streaming: {
+ title: 'Streaming',
+ type: 'string',
+ oneOf: [
+ { const: 'always', title: 'Always' },
+ { const: 'manual', title: 'When invoked manually' },
+ { const: 'never', title: 'Never' }
+ ],
+ description: 'Whether to show suggestions as they are generated'
+ }
+ },
+ default: JaiInlineProvider.DEFAULT_SETTINGS as any
+ };
+ }
+
+  async configure(settings: { [property: string]: JSONValue }): Promise<void> {
+ this._settings = settings as unknown as JaiInlineProvider.ISettings;
+ }
+
+ isEnabled(): boolean {
+ return this._settings.enabled;
+ }
+
+ isLanguageEnabled(language: string): boolean {
+ return !this._settings.disabledLanguages.includes(language);
+ }
+
+ /**
+ * Process the stream chunk to make it available in the awaiting generator.
+ */
+ private _receiveStreamChunk(
+ _emitter: CompletionWebsocketHandler,
+ chunk: StreamChunk
+ ) {
+ const token = chunk.response.token;
+ if (!token) {
+      throw Error('Stream chunks must define `token` in `response`');
+ }
+ const delegate = this._streamPromises.get(token);
+ if (!delegate) {
+ console.warn('Unhandled stream chunk');
+ } else {
+ delegate.resolve(chunk);
+ if (chunk.done) {
+ this._streamPromises.delete(token);
+ }
+ }
+ }
+
+ /**
+ * Extract prefix from request, accounting for context window limit.
+ */
+ private _prefixFromRequest(request: CompletionHandler.IRequest): string {
+ const textBefore = request.text.slice(0, request.offset);
+ const prefix = textBefore.slice(
+ -Math.min(this._settings.maxPrefix, textBefore.length)
+ );
+ return prefix;
+ }
+
+ /**
+ * Extract suffix from request, accounting for context window limit.
+ */
+ private _suffixFromRequest(request: CompletionHandler.IRequest): string {
+ const textAfter = request.text.slice(request.offset);
+    const suffix = textAfter.slice(
+      0,
+      Math.min(this._settings.maxSuffix, textAfter.length)
+    );
+    return suffix;
+ }
+
+ private _resolveLanguage(language: IEditorLanguage | null) {
+ if (!language) {
+ return 'plain English';
+ }
+ if (language.name === 'ipython') {
+ return 'python';
+ } else if (language.name === 'ipythongfm') {
+ return 'markdown';
+ }
+ return language.name;
+ }
+
+ private _settings: JaiInlineProvider.ISettings =
+ JaiInlineProvider.DEFAULT_SETTINGS;
+
+  private _streamPromises: Map<string, PromiseDelegate<StreamChunk>> =
+ new Map();
+ private _counter = 0;
+}
+
+export namespace JaiInlineProvider {
+ export const ID = '@jupyterlab/jupyter-ai';
+
+ export interface IOptions {
+ completionHandler: CompletionWebsocketHandler;
+ languageRegistry: IEditorLanguageRegistry;
+ }
+
+ export interface ISettings {
+ maxPrefix: number;
+ maxSuffix: number;
+ debouncerDelay: number;
+ enabled: boolean;
+ disabledLanguages: string[];
+ streaming: 'always' | 'manual' | 'never';
+ }
+
+ export const DEFAULT_SETTINGS: ISettings = {
+ maxPrefix: 10000,
+ maxSuffix: 10000,
+ // The debouncer delay handling is implemented upstream in JupyterLab;
+ // here we just increase the default from 0, as compared to kernel history
+ // the external AI models may have a token cost associated.
+ debouncerDelay: 250,
+ enabled: false,
+ // ipythongfm means "IPython GitHub Flavoured Markdown"
+ disabledLanguages: ['ipythongfm'],
+ streaming: 'manual'
+ };
+}
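
The `stream()` method above bridges websocket chunks into the async generator consumed by the completer: each iteration parks a `PromiseDelegate` that `_receiveStreamChunk` resolves when the next chunk arrives. A simplified, self-contained sketch of the same pattern (illustrative names, not the actual handler API):

```ts
import { PromiseDelegate } from '@lumino/coreutils';

type Chunk = { text: string; done: boolean };

const pending = new Map<string, PromiseDelegate<Chunk>>();

// Resolver side: called once per incoming websocket chunk.
function onChunk(token: string, chunk: Chunk): void {
  pending.get(token)?.resolve(chunk);
  if (chunk.done) {
    pending.delete(token);
  }
}

// Consumer side: yields chunks until the producer reports `done`.
async function* consume(token: string): AsyncGenerator<Chunk> {
  let done = false;
  while (!done) {
    const delegate = new PromiseDelegate<Chunk>();
    pending.set(token, delegate);
    const chunk = await delegate.promise;
    yield chunk;
    done = chunk.done;
  }
}
```

Only one delegate per token is outstanding at a time, which is why the provider clears `_streamPromises` before sending a streaming request.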
diff --git a/packages/jupyter-ai/src/completions/types.ts b/packages/jupyter-ai/src/completions/types.ts
new file mode 100644
index 000000000..13dad9c8e
--- /dev/null
+++ b/packages/jupyter-ai/src/completions/types.ts
@@ -0,0 +1,61 @@
+import type {
+ IInlineCompletionList,
+ IInlineCompletionItem
+} from '@jupyterlab/completer';
+
+import { ServerConnection } from '@jupyterlab/services';
+
+export namespace AiCompleterService {
+ /**
+   * The instantiation options for the completion websocket handler.
+ */
+ export interface IOptions {
+ serverSettings?: ServerConnection.ISettings;
+ }
+
+ export type ConnectionMessage = {
+ type: 'connection';
+ client_id: string;
+ };
+
+ export type InlineCompletionRequest = {
+ number: number;
+ path?: string;
+ /* The model has to complete given prefix */
+ prefix: string;
+ /* The model may consider the following suffix */
+ suffix: string;
+ mime: string;
+ /* Whether to stream the response (if streaming is supported by the model) */
+ stream: boolean;
+ language?: string;
+ cell_id?: string;
+ };
+
+ export type CompletionError = {
+ type: string;
+ traceback: string;
+ };
+
+ export type InlineCompletionReply = {
+ /**
+ * Type for this message can be skipped (`inline_completion` is presumed default).
+ **/
+ type?: 'inline_completion';
+ list: IInlineCompletionList;
+ reply_to: number;
+ error?: CompletionError;
+ };
+
+ export type InlineCompletionStreamChunk = {
+ type: 'stream';
+ response: IInlineCompletionItem;
+ reply_to: number;
+ done: boolean;
+ };
+
+ export type CompleterMessage =
+ | InlineCompletionReply
+ | ConnectionMessage
+ | InlineCompletionStreamChunk;
+}
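
For reference, a small sketch of dispatching on the `CompleterMessage` union defined above (the import path is assumed to point at this module):

```ts
import type { AiCompleterService as AiService } from './types';

function handleMessage(msg: AiService.CompleterMessage): void {
  if (msg.type === 'connection') {
    console.log('connected as client', msg.client_id);
  } else if (msg.type === 'stream') {
    console.log(`chunk for request ${msg.reply_to}, done: ${msg.done}`);
  } else {
    // Inline completion reply; its `type` field may be omitted.
    if (msg.error) {
      console.error(msg.error.type, msg.error.traceback);
      return;
    }
    console.log(`reply to ${msg.reply_to}:`, msg.list.items.length, 'items');
  }
}
```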
diff --git a/packages/jupyter-ai/src/components/chat.tsx b/packages/jupyter-ai/src/components/chat.tsx
index ded339c70..53ba45f1a 100644
--- a/packages/jupyter-ai/src/components/chat.tsx
+++ b/packages/jupyter-ai/src/components/chat.tsx
@@ -4,6 +4,7 @@ import { Button, IconButton, Stack } from '@mui/material';
import SettingsIcon from '@mui/icons-material/Settings';
import ArrowBackIcon from '@mui/icons-material/ArrowBack';
import type { Awareness } from 'y-protocols/awareness';
+import type { IThemeManager } from '@jupyterlab/apputils';
import { JlThemeProvider } from './jl-theme-provider';
import { ChatMessages } from './chat-messages';
@@ -178,6 +179,7 @@ export type ChatProps = {
selectionWatcher: SelectionWatcher;
chatHandler: ChatHandler;
globalAwareness: Awareness | null;
+ themeManager: IThemeManager | null;
chatView?: ChatView;
};
@@ -190,7 +192,7 @@ export function Chat(props: ChatProps): JSX.Element {
   const [view, setView] = useState<ChatView>(props.chatView || ChatView.Chat);
return (
-    <JlThemeProvider>
+    <JlThemeProvider themeManager={props.themeManager}>
diff --git a/packages/jupyter-ai/src/components/jl-theme-provider.tsx b/packages/jupyter-ai/src/components/jl-theme-provider.tsx
--- a/packages/jupyter-ai/src/components/jl-theme-provider.tsx
+++ b/packages/jupyter-ai/src/components/jl-theme-provider.tsx
   const [theme, setTheme] = useState<Theme>(createTheme());
@@ -12,7 +14,9 @@ export function JlThemeProvider(props: {
async function setJlTheme() {
setTheme(await getJupyterLabTheme());
}
+
setJlTheme();
+ props.themeManager?.themeChanged.connect(setJlTheme);
}, []);
   return <ThemeProvider theme={theme}>{props.children}</ThemeProvider>;
diff --git a/packages/jupyter-ai/src/components/statusbar-item.tsx b/packages/jupyter-ai/src/components/statusbar-item.tsx
new file mode 100644
index 000000000..1cd2aa5c5
--- /dev/null
+++ b/packages/jupyter-ai/src/components/statusbar-item.tsx
@@ -0,0 +1,99 @@
+import { Popup, showPopup } from '@jupyterlab/statusbar';
+import React from 'react';
+import { VDomModel, VDomRenderer } from '@jupyterlab/ui-components';
+import { CommandRegistry } from '@lumino/commands';
+import { MenuSvg, RankedMenu, IRankedMenu } from '@jupyterlab/ui-components';
+import { Jupyternaut } from '../icons';
+import type { IJaiStatusItem } from '../tokens';
+
+/**
+ * The Jupyter AI status item, shown in the status bar on the bottom right by
+ * default.
+ */
+export class JaiStatusItem
+  extends VDomRenderer<VDomModel>
+ implements IJaiStatusItem
+{
+ constructor(options: JaiStatusItem.IOptions) {
+ super(new VDomModel());
+ this._commandRegistry = options.commandRegistry;
+ this._items = [];
+
+ this.addClass('jp-mod-highlighted');
+ this.title.caption = 'Open Jupyternaut status menu';
+ this.node.addEventListener('click', this._handleClick);
+ }
+
+ /**
+ * Adds a menu item to the JAI status item.
+ */
+ addItem(item: IRankedMenu.IItemOptions): void {
+ this._items.push(item);
+ }
+
+ /**
+ * Returns whether the status item has any menu items.
+ */
+ hasItems(): boolean {
+ return this._items.length !== 0;
+ }
+
+ /**
+ * Returns the status item as a JSX element.
+ */
+ render(): JSX.Element | null {
+ if (!this.model) {
+ return null;
+ }
+    return <Jupyternaut />;
+ }
+
+ dispose(): void {
+ this.node.removeEventListener('click', this._handleClick);
+ super.dispose();
+ }
+
+ /**
+ * Create a menu for viewing status and changing options.
+ */
+ private _handleClick = () => {
+ if (this._popup) {
+ this._popup.dispose();
+ }
+ if (this._menu) {
+ this._menu.dispose();
+ }
+ this._menu = new RankedMenu({
+ commands: this._commandRegistry,
+ renderer: MenuSvg.defaultRenderer
+ });
+ for (const item of this._items) {
+ this._menu.addItem(item);
+ }
+ this._popup = showPopup({
+ body: this._menu,
+ anchor: this,
+ align: 'left'
+ });
+ };
+
+ private _items: IRankedMenu.IItemOptions[];
+ private _commandRegistry: CommandRegistry;
+ private _menu: RankedMenu | null = null;
+ private _popup: Popup | null = null;
+}
+
+/**
+ * A namespace for JaiStatusItem statics.
+ */
+export namespace JaiStatusItem {
+  /**
+   * Options for the JaiStatusItem.
+ */
+ export interface IOptions {
+ /**
+ * The application command registry.
+ */
+ commandRegistry: CommandRegistry;
+ }
+}
diff --git a/packages/jupyter-ai/src/index.ts b/packages/jupyter-ai/src/index.ts
index e48e2b211..807629eae 100644
--- a/packages/jupyter-ai/src/index.ts
+++ b/packages/jupyter-ai/src/index.ts
@@ -4,7 +4,11 @@ import {
ILayoutRestorer
} from '@jupyterlab/application';
-import { IWidgetTracker, ReactWidget } from '@jupyterlab/apputils';
+import {
+ IWidgetTracker,
+ ReactWidget,
+ IThemeManager
+} from '@jupyterlab/apputils';
import { IDocumentWidget } from '@jupyterlab/docregistry';
import { IGlobalAwareness } from '@jupyter/collaboration';
import type { Awareness } from 'y-protocols/awareness';
@@ -12,6 +16,8 @@ import { buildChatSidebar } from './widgets/chat-sidebar';
import { SelectionWatcher } from './selection-watcher';
import { ChatHandler } from './chat_handler';
import { buildErrorWidget } from './widgets/chat-error';
+import { completionPlugin } from './completions';
+import { statusItemPlugin } from './status';
 export type DocumentTracker = IWidgetTracker<IDocumentWidget>;
@@ -21,11 +27,12 @@ export type DocumentTracker = IWidgetTracker<IDocumentWidget>;
 const plugin: JupyterFrontEndPlugin<void> = {
id: 'jupyter_ai:plugin',
autoStart: true,
- optional: [IGlobalAwareness, ILayoutRestorer],
+ optional: [IGlobalAwareness, ILayoutRestorer, IThemeManager],
activate: async (
app: JupyterFrontEnd,
globalAwareness: Awareness | null,
- restorer: ILayoutRestorer | null
+ restorer: ILayoutRestorer | null,
+ themeManager: IThemeManager | null
) => {
/**
* Initialize selection watcher singleton
@@ -43,10 +50,11 @@ const plugin: JupyterFrontEndPlugin = {
chatWidget = buildChatSidebar(
selectionWatcher,
chatHandler,
- globalAwareness
+ globalAwareness,
+ themeManager
);
} catch (e) {
- chatWidget = buildErrorWidget();
+ chatWidget = buildErrorWidget(themeManager);
}
/**
@@ -60,4 +68,4 @@ const plugin: JupyterFrontEndPlugin = {
}
};
-export default plugin;
+export default [plugin, statusItemPlugin, completionPlugin];
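
The default-export change above works because JupyterLab accepts either a single plugin or an array of plugins as a module's default export. A minimal sketch (plugin ids are placeholders):

```ts
import { JupyterFrontEndPlugin } from '@jupyterlab/application';

const first: JupyterFrontEndPlugin<void> = {
  id: 'demo:first',
  autoStart: true,
  activate: () => {
    // first plugin's activation logic
  }
};

const second: JupyterFrontEndPlugin<void> = {
  id: 'demo:second',
  autoStart: true,
  activate: () => {
    // second plugin's activation logic
  }
};

// Either `export default first;` or an array of plugins is valid.
export default [first, second];
```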
diff --git a/packages/jupyter-ai/src/selection-watcher.ts b/packages/jupyter-ai/src/selection-watcher.ts
index 825ee0f18..bbe1adefe 100644
--- a/packages/jupyter-ai/src/selection-watcher.ts
+++ b/packages/jupyter-ai/src/selection-watcher.ts
@@ -14,7 +14,9 @@ import { getCellIndex } from './utils';
/**
* Gets the editor instance used by a document widget. Returns `null` if unable.
*/
-function getEditor(widget: Widget | null) {
+export function getEditor(
+ widget: Widget | null
+): CodeMirrorEditor | null | undefined {
if (!(widget instanceof DocumentWidget)) {
return null;
}
diff --git a/packages/jupyter-ai/src/status.ts b/packages/jupyter-ai/src/status.ts
new file mode 100644
index 000000000..1624657ce
--- /dev/null
+++ b/packages/jupyter-ai/src/status.ts
@@ -0,0 +1,32 @@
+import { IJaiStatusItem } from './tokens';
+import {
+ JupyterFrontEnd,
+ JupyterFrontEndPlugin
+} from '@jupyterlab/application';
+import { IStatusBar } from '@jupyterlab/statusbar';
+import { JaiStatusItem } from './components/statusbar-item';
+
+export const statusItemPlugin: JupyterFrontEndPlugin<IJaiStatusItem> = {
+ id: 'jupyter_ai:status-item',
+ description: 'Provides a status item for Jupyter AI.',
+ autoStart: true,
+ requires: [IStatusBar],
+ provides: IJaiStatusItem,
+ activate: (app: JupyterFrontEnd, statusBar: IStatusBar | null) => {
+ const statusItem = new JaiStatusItem({
+ commandRegistry: app.commands
+ });
+ if (statusBar) {
+ // Add the status item.
+ statusBar.registerStatusItem('jupyter_ai:jupyternaut-status', {
+ item: statusItem,
+ align: 'right',
+ rank: 100,
+ isActive: () => {
+ return statusItem.hasItems();
+ }
+ });
+ }
+ return statusItem;
+ }
+};
diff --git a/packages/jupyter-ai/src/theme-provider.ts b/packages/jupyter-ai/src/theme-provider.ts
index 405f08198..02db8d369 100644
--- a/packages/jupyter-ai/src/theme-provider.ts
+++ b/packages/jupyter-ai/src/theme-provider.ts
@@ -13,7 +13,6 @@ export async function pollUntilReady(): Promise<void> {
 export async function getJupyterLabTheme(): Promise<Theme> {
await pollUntilReady();
const light = document.body.getAttribute('data-jp-theme-light');
- const primaryFontColor = getCSSVariable('--jp-ui-font-color1');
return createTheme({
spacing: 4,
components: {
@@ -113,7 +112,7 @@ export async function getJupyterLabTheme(): Promise {
dark: getCSSVariable('--jp-success-color0')
},
text: {
- primary: primaryFontColor,
+ primary: getCSSVariable('--jp-ui-font-color1'),
secondary: getCSSVariable('--jp-ui-font-color2'),
disabled: getCSSVariable('--jp-ui-font-color3')
}
@@ -127,11 +126,6 @@ export async function getJupyterLabTheme(): Promise {
htmlFontSize: 16,
button: {
textTransform: 'capitalize'
- },
- // this is undocumented as of the time of writing.
- // https://stackoverflow.com/a/62950304/12548458
- allVariants: {
- color: primaryFontColor
}
}
});
diff --git a/packages/jupyter-ai/src/tokens.ts b/packages/jupyter-ai/src/tokens.ts
new file mode 100644
index 000000000..f240a8197
--- /dev/null
+++ b/packages/jupyter-ai/src/tokens.ts
@@ -0,0 +1,14 @@
+import { Token } from '@lumino/coreutils';
+import type { IRankedMenu } from '@jupyterlab/ui-components';
+
+export interface IJaiStatusItem {
+ addItem(item: IRankedMenu.IItemOptions): void;
+}
+
+/**
+ * The Jupyternaut status token.
+ */
+export const IJaiStatusItem = new Token<IJaiStatusItem>(
+ 'jupyter_ai:IJupyternautStatus',
+ 'Status indicator displayed in the statusbar'
+);
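
Since `IJaiStatusItem` is exported as a `Token`, other extensions can request it and contribute entries to the status menu without importing the status bar implementation. A minimal consumer sketch (plugin and command ids are hypothetical):

```ts
import {
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import { IJaiStatusItem } from './tokens';

const consumerPlugin: JupyterFrontEndPlugin<void> = {
  id: 'my-extension:jai-status-consumer', // hypothetical id
  autoStart: true,
  optional: [IJaiStatusItem],
  activate: (app: JupyterFrontEnd, statusItem: IJaiStatusItem | null) => {
    // The token is optional, so degrade gracefully when jupyter-ai is absent.
    statusItem?.addItem({ command: 'my-extension:example-command', rank: 50 });
  }
};

export default consumerPlugin;
```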
diff --git a/packages/jupyter-ai/src/widgets/chat-error.tsx b/packages/jupyter-ai/src/widgets/chat-error.tsx
index 3b8f8ef95..8ae9cbb44 100644
--- a/packages/jupyter-ai/src/widgets/chat-error.tsx
+++ b/packages/jupyter-ai/src/widgets/chat-error.tsx
@@ -1,13 +1,16 @@
import React from 'react';
import { ReactWidget } from '@jupyterlab/apputils';
+import type { IThemeManager } from '@jupyterlab/apputils';
+import { Alert, Box } from '@mui/material';
import { chatIcon } from '../icons';
-import { Alert, Box } from '@mui/material';
import { JlThemeProvider } from '../components/jl-theme-provider';
-export function buildErrorWidget(): ReactWidget {
+export function buildErrorWidget(
+ themeManager: IThemeManager | null
+): ReactWidget {
const ErrorWidget = ReactWidget.create(
-    <JlThemeProvider>
+    <JlThemeProvider themeManager={themeManager}>
);
ChatWidget.id = 'jupyter-ai::chat';
diff --git a/packages/jupyter-ai/style/icons/jupyternaut.svg b/packages/jupyter-ai/style/icons/jupyternaut.svg
index d4367985d..dd800d538 100644
--- a/packages/jupyter-ai/style/icons/jupyternaut.svg
+++ b/packages/jupyter-ai/style/icons/jupyternaut.svg
@@ -1,15 +1,9 @@
diff --git a/yarn.lock b/yarn.lock
index 3cdc4ba8b..0df949cb2 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -1890,20 +1890,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/console@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/console@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- "@types/node": "*"
- chalk: ^4.0.0
- jest-message-util: ^29.6.1
- jest-util: ^29.6.1
- slash: ^3.0.0
- checksum: d0ab23a00947bfb4bff8c0a7e5a7afd16519de16dde3fe7e77b9f13e794c6df7043ecf7fcdde66ac0d2b5fb3262e9cab3d92eaf61f89a12d3b8e3602e06a9902
- languageName: node
- linkType: hard
-
"@jest/console@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/console@npm:29.6.2"
@@ -1918,47 +1904,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/core@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/core@npm:29.6.1"
- dependencies:
- "@jest/console": ^29.6.1
- "@jest/reporters": ^29.6.1
- "@jest/test-result": ^29.6.1
- "@jest/transform": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- ansi-escapes: ^4.2.1
- chalk: ^4.0.0
- ci-info: ^3.2.0
- exit: ^0.1.2
- graceful-fs: ^4.2.9
- jest-changed-files: ^29.5.0
- jest-config: ^29.6.1
- jest-haste-map: ^29.6.1
- jest-message-util: ^29.6.1
- jest-regex-util: ^29.4.3
- jest-resolve: ^29.6.1
- jest-resolve-dependencies: ^29.6.1
- jest-runner: ^29.6.1
- jest-runtime: ^29.6.1
- jest-snapshot: ^29.6.1
- jest-util: ^29.6.1
- jest-validate: ^29.6.1
- jest-watcher: ^29.6.1
- micromatch: ^4.0.4
- pretty-format: ^29.6.1
- slash: ^3.0.0
- strip-ansi: ^6.0.0
- peerDependencies:
- node-notifier: ^8.0.1 || ^9.0.0 || ^10.0.0
- peerDependenciesMeta:
- node-notifier:
- optional: true
- checksum: 736dcc90c6c58dd9e1d2da122103b851187719ce3b3d4167689c63e68252632cd817712955b52ddaa648eba9c6f98f86cd58677325f0db4185f76899c64d7dac
- languageName: node
- linkType: hard
-
"@jest/core@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/core@npm:29.6.2"
@@ -2000,19 +1945,7 @@ __metadata:
languageName: node
linkType: hard
-"@jest/environment@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/environment@npm:29.6.1"
- dependencies:
- "@jest/fake-timers": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- jest-mock: ^29.6.1
- checksum: fb671f91f27e7aa1ba04983ef87a83f0794a597aba0a57d08cbb1fcb484c2aedc2201e99f85fafe27aec9be78af6f2d1d7e6ea88267938992a1d0f9d4615f5b2
- languageName: node
- linkType: hard
-
-"@jest/environment@npm:^29.6.2":
+"@jest/environment@npm:^29.6.1, @jest/environment@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/environment@npm:29.6.2"
dependencies:
@@ -2024,15 +1957,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/expect-utils@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/expect-utils@npm:29.6.1"
- dependencies:
- jest-get-type: ^29.4.3
- checksum: 037ee017eca62f7b45e1465fb5c6f9e92d5709a9ac716b8bff0bd294240a54de734e8f968fb69309cc4aef6c83b9552d5a821f3b18371af394bf04783859d706
- languageName: node
- linkType: hard
-
"@jest/expect-utils@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/expect-utils@npm:29.6.2"
@@ -2042,16 +1966,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/expect@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/expect@npm:29.6.1"
- dependencies:
- expect: ^29.6.1
- jest-snapshot: ^29.6.1
- checksum: 5c56977b3cc8489744d97d9dc2dcb196c1dfecc83a058a7ef0fd4f63d68cf120a23d27669272d1e1b184fb4337b85e4ac1fc7f886e3988fdf243d42d73973eac
- languageName: node
- linkType: hard
-
"@jest/expect@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/expect@npm:29.6.2"
@@ -2062,21 +1976,7 @@ __metadata:
languageName: node
linkType: hard
-"@jest/fake-timers@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/fake-timers@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- "@sinonjs/fake-timers": ^10.0.2
- "@types/node": "*"
- jest-message-util: ^29.6.1
- jest-mock: ^29.6.1
- jest-util: ^29.6.1
- checksum: 86991276944b7d6c2ada3703a272517f5f8f2f4e2af1fe26065f6db1dac4dc6299729a88c46bcb781dcc1b20504c1d4bbd8119fd8a0838ac81a9a4b5d2c8e429
- languageName: node
- linkType: hard
-
-"@jest/fake-timers@npm:^29.6.2":
+"@jest/fake-timers@npm:^29.6.1, @jest/fake-timers@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/fake-timers@npm:29.6.2"
dependencies:
@@ -2090,18 +1990,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/globals@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/globals@npm:29.6.1"
- dependencies:
- "@jest/environment": ^29.6.1
- "@jest/expect": ^29.6.1
- "@jest/types": ^29.6.1
- jest-mock: ^29.6.1
- checksum: fcca0b970a8b4894a1cdff0f500a86b45609e72c0a4319875e9504237b839df1a46c44d2f1362c6d87fdc7a05928edcc4b5a3751c9e6648dd70a761cdab64c94
- languageName: node
- linkType: hard
-
"@jest/globals@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/globals@npm:29.6.2"
@@ -2114,43 +2002,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/reporters@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/reporters@npm:29.6.1"
- dependencies:
- "@bcoe/v8-coverage": ^0.2.3
- "@jest/console": ^29.6.1
- "@jest/test-result": ^29.6.1
- "@jest/transform": ^29.6.1
- "@jest/types": ^29.6.1
- "@jridgewell/trace-mapping": ^0.3.18
- "@types/node": "*"
- chalk: ^4.0.0
- collect-v8-coverage: ^1.0.0
- exit: ^0.1.2
- glob: ^7.1.3
- graceful-fs: ^4.2.9
- istanbul-lib-coverage: ^3.0.0
- istanbul-lib-instrument: ^5.1.0
- istanbul-lib-report: ^3.0.0
- istanbul-lib-source-maps: ^4.0.0
- istanbul-reports: ^3.1.3
- jest-message-util: ^29.6.1
- jest-util: ^29.6.1
- jest-worker: ^29.6.1
- slash: ^3.0.0
- string-length: ^4.0.1
- strip-ansi: ^6.0.0
- v8-to-istanbul: ^9.0.1
- peerDependencies:
- node-notifier: ^8.0.1 || ^9.0.0 || ^10.0.0
- peerDependenciesMeta:
- node-notifier:
- optional: true
- checksum: b7dae415f3f6342b4db2671261bbee29af20a829f42135316c3dd548b9ef85290c9bb64a0e3aec4a55486596be1257ac8216a0f8d9794acd43f8b8fb686fc7e3
- languageName: node
- linkType: hard
-
"@jest/reporters@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/reporters@npm:29.6.2"
@@ -2208,18 +2059,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/test-result@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/test-result@npm:29.6.1"
- dependencies:
- "@jest/console": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/istanbul-lib-coverage": ^2.0.0
- collect-v8-coverage: ^1.0.0
- checksum: 9397a3a3410c5df564e79297b1be4fe33807a6157a017a1f74b54a6ef14de1530f12b922299e822e66a82c53269da16661772bffde3d883a78c5eefd2cd6d1cc
- languageName: node
- linkType: hard
-
"@jest/test-result@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/test-result@npm:29.6.2"
@@ -2232,18 +2071,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/test-sequencer@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/test-sequencer@npm:29.6.1"
- dependencies:
- "@jest/test-result": ^29.6.1
- graceful-fs: ^4.2.9
- jest-haste-map: ^29.6.1
- slash: ^3.0.0
- checksum: f3437178b5dca0401ed2e990d8b69161442351856d56f5725e009a487f5232b51039f8829673884b9bea61c861120d08a53a36432f4a4b8aab38915a68f7000d
- languageName: node
- linkType: hard
-
"@jest/test-sequencer@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/test-sequencer@npm:29.6.2"
@@ -2256,29 +2083,6 @@ __metadata:
languageName: node
linkType: hard
-"@jest/transform@npm:^29.6.1":
- version: 29.6.1
- resolution: "@jest/transform@npm:29.6.1"
- dependencies:
- "@babel/core": ^7.11.6
- "@jest/types": ^29.6.1
- "@jridgewell/trace-mapping": ^0.3.18
- babel-plugin-istanbul: ^6.1.1
- chalk: ^4.0.0
- convert-source-map: ^2.0.0
- fast-json-stable-stringify: ^2.1.0
- graceful-fs: ^4.2.9
- jest-haste-map: ^29.6.1
- jest-regex-util: ^29.4.3
- jest-util: ^29.6.1
- micromatch: ^4.0.4
- pirates: ^4.0.4
- slash: ^3.0.0
- write-file-atomic: ^4.0.2
- checksum: 1635cd66e4b3dbba0689ecefabc6137301756c9c12d1d23e25124dd0dd9b4a6a38653d51e825e90f74faa022152ac1eaf200591fb50417aa7e1f7d1d1c2bc11d
- languageName: node
- linkType: hard
-
"@jest/transform@npm:^29.6.2":
version: 29.6.2
resolution: "@jest/transform@npm:29.6.2"
@@ -2351,20 +2155,13 @@ __metadata:
languageName: node
linkType: hard
-"@jridgewell/sourcemap-codec@npm:1.4.14":
+"@jridgewell/sourcemap-codec@npm:1.4.14, @jridgewell/sourcemap-codec@npm:^1.4.10":
version: 1.4.14
resolution: "@jridgewell/sourcemap-codec@npm:1.4.14"
checksum: 61100637b6d173d3ba786a5dff019e1a74b1f394f323c1fee337ff390239f053b87266c7a948777f4b1ee68c01a8ad0ab61e5ff4abb5a012a0b091bec391ab97
languageName: node
linkType: hard
-"@jridgewell/sourcemap-codec@npm:^1.4.10":
- version: 1.4.15
- resolution: "@jridgewell/sourcemap-codec@npm:1.4.15"
- checksum: b881c7e503db3fc7f3c1f35a1dd2655a188cc51a3612d76efc8a6eb74728bef5606e6758ee77423e564092b4a518aba569bbb21c9bac5ab7a35b0c6ae7e344c8
- languageName: node
- linkType: hard
-
"@jridgewell/trace-mapping@npm:^0.3.12, @jridgewell/trace-mapping@npm:^0.3.17, @jridgewell/trace-mapping@npm:^0.3.18, @jridgewell/trace-mapping@npm:^0.3.9":
version: 0.3.18
resolution: "@jridgewell/trace-mapping@npm:0.3.18"
@@ -2385,14 +2182,18 @@ __metadata:
"@emotion/styled": ^11.10.5
"@jupyter/collaboration": ^1
"@jupyterlab/application": ^4
+ "@jupyterlab/apputils": ^4
"@jupyterlab/builder": ^4
"@jupyterlab/cells": ^4
"@jupyterlab/codeeditor": ^4
"@jupyterlab/codemirror": ^4
+ "@jupyterlab/completer": ^4
"@jupyterlab/coreutils": ^6
+ "@jupyterlab/docregistry": ^4
"@jupyterlab/fileeditor": ^4
"@jupyterlab/notebook": ^4
"@jupyterlab/services": ^7
+ "@jupyterlab/settingregistry": ^4
"@jupyterlab/testutils": ^4
"@jupyterlab/ui-components": ^4
"@mui/icons-material": ^5.11.0
@@ -2421,7 +2222,7 @@ __metadata:
stylelint-config-standard: ~24.0.0
stylelint-prettier: ^2.0.0
ts-jest: ^29
- typescript: ~4.3.0
+ typescript: ~4.9.0
y-protocols: ^1.0.5
languageName: unknown
linkType: soft
@@ -2436,7 +2237,6 @@ __metadata:
version: 0.0.0-use.local
resolution: "@jupyter-ai/monorepo@workspace:."
dependencies:
- "@jupyterlab/builder": ^4
lerna: ^6.4.1
nx: ^15.9.2
languageName: unknown
@@ -2480,9 +2280,33 @@ __metadata:
languageName: node
linkType: hard
-"@jupyter/ydoc@npm:^1.0.2":
- version: 1.0.2
- resolution: "@jupyter/ydoc@npm:1.0.2"
+"@jupyter/react-components@npm:^0.13.3":
+ version: 0.13.3
+ resolution: "@jupyter/react-components@npm:0.13.3"
+ dependencies:
+ "@jupyter/web-components": ^0.13.3
+ "@microsoft/fast-react-wrapper": ^0.3.18
+ react: ">=17.0.0 <19.0.0"
+ checksum: d8912ff6a68833d18bfe44489d71c9e6b4203a29c3c4f65379e630b2b1c1bd887360609d0ee2d03db2e84ee41570de1757cc09a1144288cd0e27a5e9bc0c6e82
+ languageName: node
+ linkType: hard
+
+"@jupyter/web-components@npm:^0.13.3":
+ version: 0.13.3
+ resolution: "@jupyter/web-components@npm:0.13.3"
+ dependencies:
+ "@microsoft/fast-colors": ^5.3.1
+ "@microsoft/fast-components": ^2.30.6
+ "@microsoft/fast-element": ^1.12.0
+ "@microsoft/fast-foundation": ^2.49.0
+ "@microsoft/fast-web-utilities": ^6.0.0
+ checksum: 23a698f4a0cecc0536f8af54c57175fd276d731a8dd978fe52ada02a72679189096f4fff337279a38a75cfdd92c590f7295d3fd12b6e1c5e3241a4691137d214
+ languageName: node
+ linkType: hard
+
+"@jupyter/ydoc@npm:^1.0.2, @jupyter/ydoc@npm:^1.1.1":
+ version: 1.1.1
+ resolution: "@jupyter/ydoc@npm:1.1.1"
dependencies:
"@jupyterlab/nbformat": ^3.0.0 || ^4.0.0-alpha.21 || ^4.0.0
"@lumino/coreutils": ^1.11.0 || ^2.0.0
@@ -2490,39 +2314,68 @@ __metadata:
"@lumino/signaling": ^1.10.0 || ^2.0.0
y-protocols: ^1.0.5
yjs: ^13.5.40
- checksum: 739f9630940466b3cfcd7b742dd06479f81772ca13f863d057af0bbb5e318829506969066ab72977e7c721644982b5c8f88cf44e1ae81955ed1c27e87632d1f2
+ checksum: a239b1dd57cfc9ba36c06ac5032a1b6388849ae01a1d0db0d45094f71fdadf4d473b4bf8becbef0cfcdc85cae505361fbec0822b02da5aa48e06b66f742dd7a0
languageName: node
linkType: hard
-"@jupyterlab/application@npm:^4, @jupyterlab/application@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/application@npm:4.0.3"
+"@jupyterlab/application@npm:^4, @jupyterlab/application@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/application@npm:4.0.11"
dependencies:
"@fortawesome/fontawesome-free": ^5.12.0
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/rendermime-interfaces": ^3.8.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/statedb": ^4.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/application": ^2.1.1
- "@lumino/commands": ^2.1.1
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/polling": ^2.1.1
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
- checksum: 25443512d8df22bc87899ed944c9d7ea6c233501173ddd6316d9f0fda0faa523b38b9973f98aeb519a138649839d1d61e19d54f28b229e20485f90d11495eaae
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/statedb": ^4.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/application": ^2.2.1
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
+ checksum: 9df885a5369cd43bc6636ef24afaa4bb371f3fff8940e3487bdb5e0de4b6a70bb33b43c6a50da69590c563b4d3e04f5219de0239a7aa859ffac7d3d1e017d23f
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/apputils@npm:^4, @jupyterlab/apputils@npm:^4.1.11":
+ version: 4.1.11
+ resolution: "@jupyterlab/apputils@npm:4.1.11"
+ dependencies:
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/settingregistry": ^4.0.11
+ "@jupyterlab/statedb": ^4.0.11
+ "@jupyterlab/statusbar": ^4.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.0
+ "@types/react": ^18.0.26
+ react: ^18.2.0
+ sanitize-html: ~2.7.3
+ checksum: ab1bfa8e95de86464c35a2460e9cc4f89594a2cb69b38c19fd6d17a1c3d89e5c9fb368a1ac5425b5190c407e64c305c428e076a701117fc9007d0176bfe98501
languageName: node
linkType: hard
-"@jupyterlab/apputils@npm:^4.0.0, @jupyterlab/apputils@npm:^4.1.3":
+"@jupyterlab/apputils@npm:^4.0.0":
version: 4.1.3
resolution: "@jupyterlab/apputils@npm:4.1.3"
dependencies:
@@ -2551,36 +2404,65 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/attachments@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/attachments@npm:4.0.3"
+"@jupyterlab/apputils@npm:^4.2.0-beta.0, @jupyterlab/apputils@npm:^4.2.0-beta.1":
+ version: 4.2.0-beta.1
+ resolution: "@jupyterlab/apputils@npm:4.2.0-beta.1"
+ dependencies:
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/observables": ^5.1.0-beta.1
+ "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
+ "@jupyterlab/services": ^7.1.0-beta.1
+ "@jupyterlab/settingregistry": ^4.1.0-beta.1
+ "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@jupyterlab/statusbar": ^4.1.0-beta.1
+ "@jupyterlab/translation": ^4.1.0-beta.1
+ "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.1
+ "@types/react": ^18.0.26
+ react: ^18.2.0
+ sanitize-html: ~2.7.3
+ checksum: 08e88b22bb4c9e5b333f32b44888ab0d7f6300bafb0b7966a40eb3f187f932ceece5a2cbf7c0ee29cbfeb9d90f954352973df96ecdddd4ad8ea89efaa67df46f
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/attachments@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/attachments@npm:4.0.11"
dependencies:
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/rendermime-interfaces": ^3.8.3
- "@lumino/disposable": ^2.1.1
- "@lumino/signaling": ^2.1.1
- checksum: 389b73436a258829ec559d8112f7ce50afa74041c67d30613a9ccabb6fb2af1bf0cf12f840780c31dff1627aae788bd23be06c77e6c63c0d641537989615169d
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ checksum: 13792a1a69280e48fcdaa5405042dad9135a1696197f40527a0c7c250285eab4330436df8cfa4e84b10f60ab07f4674c7abc89f98c50576061ca02c609458a84
languageName: node
linkType: hard
"@jupyterlab/builder@npm:^4":
- version: 4.0.3
- resolution: "@jupyterlab/builder@npm:4.0.3"
- dependencies:
- "@lumino/algorithm": ^2.0.0
- "@lumino/application": ^2.1.1
- "@lumino/commands": ^2.1.1
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/domutils": ^2.0.0
- "@lumino/dragdrop": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/virtualdom": ^2.0.0
- "@lumino/widgets": ^2.1.1
+ version: 4.0.11
+ resolution: "@jupyterlab/builder@npm:4.0.11"
+ dependencies:
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/application": ^2.2.1
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/messaging": ^2.0.1
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.0
ajv: ^8.12.0
commander: ^9.4.1
css-loader: ^6.7.1
@@ -2602,72 +2484,96 @@ __metadata:
worker-loader: ^3.0.2
bin:
build-labextension: lib/build-labextension.js
- checksum: 7d6402f859bc43cf7baa90893e57bd8d421716256c51fe72b1f80f4b471446e918d77912babe9bfac87a4edcc2ae3d6434334688f13414d293ff340266607b46
+ checksum: 25f25098887572bb508759317b5a7e11716d3e45e554d3858dd16a70117a098281f2664ab8e89ea868298d279ae13840eda51924c39832aa43a3b994598058fe
languageName: node
linkType: hard
-"@jupyterlab/cells@npm:^4, @jupyterlab/cells@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/cells@npm:4.0.3"
+"@jupyterlab/cells@npm:^4, @jupyterlab/cells@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/cells@npm:4.0.11"
dependencies:
"@codemirror/state": ^6.2.0
"@codemirror/view": ^6.9.6
- "@jupyter/ydoc": ^1.0.2
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/attachments": ^4.0.3
- "@jupyterlab/codeeditor": ^4.0.3
- "@jupyterlab/codemirror": ^4.0.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/documentsearch": ^4.0.3
- "@jupyterlab/filebrowser": ^4.0.3
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/outputarea": ^4.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/toc": ^6.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/domutils": ^2.0.0
- "@lumino/dragdrop": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/polling": ^2.1.1
- "@lumino/signaling": ^2.1.1
- "@lumino/virtualdom": ^2.0.0
- "@lumino/widgets": ^2.1.1
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/attachments": ^4.0.11
+ "@jupyterlab/codeeditor": ^4.0.11
+ "@jupyterlab/codemirror": ^4.0.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/documentsearch": ^4.0.11
+ "@jupyterlab/filebrowser": ^4.0.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/outputarea": ^4.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/toc": ^6.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.0
react: ^18.2.0
- checksum: 48b835ba839f3cf565de0b7181e2404ddf928fc20ec966039fa7680fe6b9d72a66a612da31a62f7821312f11ac571790293b10488e4946e69448ac1be32c3433
+ checksum: c0d554269b0ab598f6ee197e76e3d3aaadf2a17bee778b899f00d2446ca51b1846b03771fb69fbce6009f50c62e8c2d7cfb6f1bcb763d7bd70a6e4f809c7a4d7
languageName: node
linkType: hard
-"@jupyterlab/codeeditor@npm:^4, @jupyterlab/codeeditor@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/codeeditor@npm:4.0.3"
+"@jupyterlab/codeeditor@npm:^4, @jupyterlab/codeeditor@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/codeeditor@npm:4.0.11"
dependencies:
"@codemirror/state": ^6.2.0
- "@jupyter/ydoc": ^1.0.2
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/statusbar": ^4.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/dragdrop": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/statusbar": ^4.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
react: ^18.2.0
- checksum: 9e2da69f4bc579ff3d66a863f805b0a2edf1de8590139874767d349aff3e29e50a490fba92242a9c593f33144b6349c1d5a03eee4894ba7ed448593b12bb90eb
+ checksum: 65e3a5ad115fd288d4389b90e0d475051192f361d9ac119d3b75d150db973c735638051474dae18a3fca9ba8e986ea33b57ed424f1c444bafcd60b3e47e548f3
languageName: node
linkType: hard
-"@jupyterlab/codemirror@npm:^4, @jupyterlab/codemirror@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/codemirror@npm:4.0.3"
+"@jupyterlab/codeeditor@npm:^4.1.0-beta.0, @jupyterlab/codeeditor@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/codeeditor@npm:4.1.0-beta.1"
+ dependencies:
+ "@codemirror/state": ^6.2.0
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/apputils": ^4.2.0-beta.1
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/nbformat": ^4.1.0-beta.1
+ "@jupyterlab/observables": ^5.1.0-beta.1
+ "@jupyterlab/statusbar": ^4.1.0-beta.1
+ "@jupyterlab/translation": ^4.1.0-beta.1
+ "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.1
+ react: ^18.2.0
+ checksum: db80b904be6cf3bf38569dfe9b918978633b66ddc8df6ea48b090a6f56465b435b7750b3791c5791a85004f0eaa63a85e80320a3deb2813363d7bfed79ce2ea5
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/codemirror@npm:^4, @jupyterlab/codemirror@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/codemirror@npm:4.0.11"
dependencies:
"@codemirror/autocomplete": ^6.5.1
"@codemirror/commands": ^6.2.3
@@ -2689,175 +2595,293 @@ __metadata:
"@codemirror/search": ^6.3.0
"@codemirror/state": ^6.2.0
"@codemirror/view": ^6.9.6
- "@jupyter/ydoc": ^1.0.2
- "@jupyterlab/codeeditor": ^4.0.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/documentsearch": ^4.0.3
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/translation": ^4.0.3
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/codeeditor": ^4.0.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/documentsearch": ^4.0.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/translation": ^4.0.11
"@lezer/common": ^1.0.2
"@lezer/generator": ^1.2.2
"@lezer/highlight": ^1.1.4
"@lezer/markdown": ^1.0.2
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/signaling": ^2.1.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
yjs: ^13.5.40
- checksum: 6c68d49f515a32df8a13a258d30dca01dfd7a77086d0df729d825bac618c4805bb210f2b9b672cde027c20543dc7384c514c12c4a5aedc66dd8b56f638b51569
- languageName: node
- linkType: hard
-
-"@jupyterlab/coreutils@npm:^6, @jupyterlab/coreutils@npm:^6.0.0, @jupyterlab/coreutils@npm:^6.0.3":
- version: 6.0.3
- resolution: "@jupyterlab/coreutils@npm:6.0.3"
- dependencies:
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/signaling": ^2.1.1
- minimist: ~1.2.0
- path-browserify: ^1.0.0
- url-parse: ~1.5.4
- checksum: c151b724f283477a92676520fd8009765d772c0bd2716d8648d0c79a56c402b6832609c870dd8ba683218d25a8c05ef769801e4699cedb989caadd99dff1ffe8
+ checksum: e4d16faad69575a6d3c4e41ab3cc268475c92f0783ca14013dc701cc2f12ee4eb7b37c1a650d9e60f17fe4daf0fba303e7cb984e06e9fde587c8075bbee7f1c8
languageName: node
linkType: hard
-"@jupyterlab/docmanager@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/docmanager@npm:4.0.3"
+"@jupyterlab/codemirror@npm:^4.1.0-beta.0":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/codemirror@npm:4.1.0-beta.1"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/statusbar": ^4.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
- react: ^18.2.0
- checksum: 2ce21005c918275e5bbd63a54d6cb89a4b2bbf6d9aa8ec0a3b172c48cce98a31d83bbd10c8a2211394ec41c60b782ab73b7dc3481de23bfc89be631cfa43cb5e
+ "@codemirror/autocomplete": ^6.5.1
+ "@codemirror/commands": ^6.2.3
+ "@codemirror/lang-cpp": ^6.0.2
+ "@codemirror/lang-css": ^6.1.1
+ "@codemirror/lang-html": ^6.4.3
+ "@codemirror/lang-java": ^6.0.1
+ "@codemirror/lang-javascript": ^6.1.7
+ "@codemirror/lang-json": ^6.0.1
+ "@codemirror/lang-markdown": ^6.1.1
+ "@codemirror/lang-php": ^6.0.1
+ "@codemirror/lang-python": ^6.1.3
+ "@codemirror/lang-rust": ^6.0.1
+ "@codemirror/lang-sql": ^6.4.1
+ "@codemirror/lang-wast": ^6.0.1
+ "@codemirror/lang-xml": ^6.0.2
+ "@codemirror/language": ^6.6.0
+ "@codemirror/legacy-modes": ^6.3.2
+ "@codemirror/search": ^6.3.0
+ "@codemirror/state": ^6.2.0
+ "@codemirror/view": ^6.9.6
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/codeeditor": ^4.1.0-beta.1
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/documentsearch": ^4.1.0-beta.1
+ "@jupyterlab/nbformat": ^4.1.0-beta.1
+ "@jupyterlab/translation": ^4.1.0-beta.1
+ "@lezer/common": ^1.0.2
+ "@lezer/generator": ^1.2.2
+ "@lezer/highlight": ^1.1.4
+ "@lezer/markdown": ^1.0.2
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ yjs: ^13.5.40
+ checksum: c15e974550f2f15f6fc042977e31b98df2f292de751f45e54f026526e679144a20122a0ea7ff9780ee6cc5f10c9129c21f7b1ea5af398267a4cb042ae190b65b
languageName: node
linkType: hard
-"@jupyterlab/docregistry@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/docregistry@npm:4.0.3"
+"@jupyterlab/completer@npm:4.1.0-beta.0":
+ version: 4.1.0-beta.0
+ resolution: "@jupyterlab/completer@npm:4.1.0-beta.0"
dependencies:
- "@jupyter/ydoc": ^1.0.2
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/codeeditor": ^4.0.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/rendermime-interfaces": ^3.8.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
- checksum: 3b3256c4b4755c58b3f290403ef49e3ebc6bebb8c17f882a2cc58dfd40a596bc427ba41e60a610ce6c90fc5b4cbd552502ce56a6654b85fb35283bdaed58b861
+ "@codemirror/state": ^6.2.0
+ "@codemirror/view": ^6.9.6
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/apputils": ^4.2.0-beta.0
+ "@jupyterlab/codeeditor": ^4.1.0-beta.0
+ "@jupyterlab/codemirror": ^4.1.0-beta.0
+ "@jupyterlab/coreutils": ^6.1.0-beta.0
+ "@jupyterlab/rendermime": ^4.1.0-beta.0
+ "@jupyterlab/services": ^7.1.0-beta.0
+ "@jupyterlab/settingregistry": ^4.1.0-beta.0
+ "@jupyterlab/statedb": ^4.1.0-beta.0
+ "@jupyterlab/translation": ^4.1.0-beta.0
+ "@jupyterlab/ui-components": ^4.1.0-beta.0
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.1
+ checksum: 542ba03197dc4abc4895cf096ac3eb572c7178ab5c787663e985b1515203a6eabf6a02ebc9eda4ea5b96380937c241ed2b35378340b4d596a74e7e34e5893fb9
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/coreutils@npm:^6, @jupyterlab/coreutils@npm:^6.0.11":
+ version: 6.0.11
+ resolution: "@jupyterlab/coreutils@npm:6.0.11"
+ dependencies:
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ minimist: ~1.2.0
+ path-browserify: ^1.0.0
+ url-parse: ~1.5.4
+ checksum: 2a3ab30865439d486ad180c0779bf086992d5999727e1fb4cbadad6ecd4c53fbcfcde4fc611d9819dc28aedc6b36e7b48d267ff2bcdd8f35de5b4f3d7145f2cc
languageName: node
linkType: hard
-"@jupyterlab/documentsearch@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/documentsearch@npm:4.0.3"
+"@jupyterlab/coreutils@npm:^6.0.0, @jupyterlab/coreutils@npm:^6.0.3":
+ version: 6.0.3
+ resolution: "@jupyterlab/coreutils@npm:6.0.3"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
"@lumino/coreutils": ^2.1.1
"@lumino/disposable": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/polling": ^2.1.1
"@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
+ minimist: ~1.2.0
+ path-browserify: ^1.0.0
+ url-parse: ~1.5.4
+ checksum: c151b724f283477a92676520fd8009765d772c0bd2716d8648d0c79a56c402b6832609c870dd8ba683218d25a8c05ef769801e4699cedb989caadd99dff1ffe8
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/coreutils@npm:^6.1.0-beta.0, @jupyterlab/coreutils@npm:^6.1.0-beta.1":
+ version: 6.1.0-beta.1
+ resolution: "@jupyterlab/coreutils@npm:6.1.0-beta.1"
+ dependencies:
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ minimist: ~1.2.0
+ path-browserify: ^1.0.0
+ url-parse: ~1.5.4
+ checksum: aeca458beb8f9f73d9ecafdbf85977c46ae472caa8d4f2914060b0a674f8b88f6af1feaee9d1228ec43138c61cf7c48bcadb8fb6f79e9797dc97a7395a579731
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/docmanager@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/docmanager@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/statusbar": ^4.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
+ react: ^18.2.0
+ checksum: 964f85cceb54866bb3c603d5d7b3d3f064cb481917ae1e1f6aaf16fe2fb2a0863a9ab8427b82e72eed171e3ae80043b0de72e514dce0a4a0feb46e39c2faf9a0
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/docregistry@npm:^4, @jupyterlab/docregistry@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/docregistry@npm:4.0.11"
+ dependencies:
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/codeeditor": ^4.0.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
+ checksum: 0c08ec3660f17b6d45aae030215a008278e82068b94bdd1bb77ec4e2995b5ef974830e90a78f5b46e7863204bab1ac397306c5d65901fed4f6bca5e57b4cbe05
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/documentsearch@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/documentsearch@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
react: ^18.2.0
- checksum: fcf8c50a60e2b265901b27749f63b52f17467950da863f8b766d30038a63aa896de4352ea5eb1221f82ec0abf11be424c13f17d0b912e758e456e596e2927b1f
+ checksum: 1fa0087c6a0bc40e653a8e67f362b8765558ff9e1c6cf4dedb2e010cdd5112d863d9f10804f36dc22d79f41ad0757c54446af923337ad27e922f972881141bd4
languageName: node
linkType: hard
-"@jupyterlab/filebrowser@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/filebrowser@npm:4.0.3"
+"@jupyterlab/documentsearch@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/documentsearch@npm:4.1.0-beta.1"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docmanager": ^4.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/statedb": ^4.0.3
- "@jupyterlab/statusbar": ^4.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/domutils": ^2.0.0
- "@lumino/dragdrop": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/polling": ^2.1.1
- "@lumino/signaling": ^2.1.1
- "@lumino/virtualdom": ^2.0.0
- "@lumino/widgets": ^2.1.1
+ "@jupyterlab/apputils": ^4.2.0-beta.1
+ "@jupyterlab/translation": ^4.1.0-beta.1
+ "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.1
+ react: ^18.2.0
+ checksum: c1071370e35014230d9da1379f112d8ce03d65736da2014d524230885a00d188533a2df19f43431e92f0dd5028a89b0f21acfd737214e70c33a4f9d2f2a1340e
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/filebrowser@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/filebrowser@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docmanager": ^4.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/statedb": ^4.0.11
+ "@jupyterlab/statusbar": ^4.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.0
react: ^18.2.0
- checksum: 30c2447cfe76fb2d1c3d1c88136b842f8d0f46451d4082ecc1a26496e35f5309a956468af18b2b0ac42a72c9bd14a1ddd050d434c6d9740b468956a6bb989086
+ checksum: d4a452fd6e0772a79d662537a8abf10f83c1a66739813e73bf9218ef8c94b388bdfdb2919d97e135b914c40abfed551cb43b7bcc92b3bb896f99f3e5584d257f
languageName: node
linkType: hard
"@jupyterlab/fileeditor@npm:^4":
- version: 4.0.3
- resolution: "@jupyterlab/fileeditor@npm:4.0.3"
- dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/codeeditor": ^4.0.3
- "@jupyterlab/codemirror": ^4.0.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/documentsearch": ^4.0.3
- "@jupyterlab/lsp": ^4.0.3
- "@jupyterlab/statusbar": ^4.0.3
- "@jupyterlab/toc": ^6.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/commands": ^2.1.1
- "@lumino/coreutils": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/widgets": ^2.1.1
+ version: 4.0.11
+ resolution: "@jupyterlab/fileeditor@npm:4.0.11"
+ dependencies:
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/codeeditor": ^4.0.11
+ "@jupyterlab/codemirror": ^4.0.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/documentsearch": ^4.0.11
+ "@jupyterlab/lsp": ^4.0.11
+ "@jupyterlab/statusbar": ^4.0.11
+ "@jupyterlab/toc": ^6.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/widgets": ^2.3.0
react: ^18.2.0
regexp-match-indices: ^1.0.2
- checksum: 9ff129ffa6b91752d3c4f0d36357532a29bec56a4a91d2d3a182e7cba2d3a5ba9b67317bb66356bbd201ca75af30bf5b0985f4629ef4acc4c2842cc7bca72ff6
+ checksum: 27b812a55ac1f91fe149d71ea0e1b93b19f725d270292bb2351d60707d5a293e922cec8b5a7b90c33601ef4fbbe64f8f408d0208b260a62da4bad7028f81cd2e
languageName: node
linkType: hard
-"@jupyterlab/lsp@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/lsp@npm:4.0.3"
+"@jupyterlab/lsp@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/lsp@npm:4.0.11"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/codeeditor": ^4.0.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/translation": ^4.0.3
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/signaling": ^2.1.1
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/codeeditor": ^4.0.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
lodash.mergewith: ^4.6.1
vscode-jsonrpc: ^6.0.0
vscode-languageserver-protocol: ^3.17.0
vscode-ws-jsonrpc: ~1.0.2
- checksum: f80084ef6a5337d768281d6d9029e303c0867ced6449301708be76d32a411de7e4d41ca290b92596ef206456533e8132d1221cd1d2feee7ad341b2b998ddc766
+ checksum: e2ca0286320c1c7855cf5c2eecf301037202de4df1e53ac109affd73b41c686a27e6205591f7a0ca85376d595db3e4779a423599c18745df24df93ad124be1a0
languageName: node
linkType: hard
@@ -2870,39 +2894,70 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/notebook@npm:^4, @jupyterlab/notebook@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/notebook@npm:4.0.3"
- dependencies:
- "@jupyter/ydoc": ^1.0.2
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/cells": ^4.0.3
- "@jupyterlab/codeeditor": ^4.0.3
- "@jupyterlab/codemirror": ^4.0.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/documentsearch": ^4.0.3
- "@jupyterlab/lsp": ^4.0.3
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/settingregistry": ^4.0.3
- "@jupyterlab/statusbar": ^4.0.3
- "@jupyterlab/toc": ^6.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/domutils": ^2.0.0
- "@lumino/dragdrop": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/virtualdom": ^2.0.0
- "@lumino/widgets": ^2.1.1
+"@jupyterlab/nbformat@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/nbformat@npm:4.0.11"
+ dependencies:
+ "@lumino/coreutils": ^2.1.2
+ checksum: 7bb488e94f09d66d858ce2a001e208beca9f1e87fc674332c4630cfb5039a6bd1579d9071019782aba546a9b43e2a7de5b125f7a0a7a7caa0b190a2b8d1266b6
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/nbformat@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/nbformat@npm:4.1.0-beta.1"
+ dependencies:
+ "@lumino/coreutils": ^2.1.2
+ checksum: 5a48c52fb67657a18c78dcd2b934c273ded1e2bfec573a4a01d3ef4238beb808d4f509b96d3306c4a39df00f77da3bc74692c2ab8e41d83e60a1382a9e0cd978
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/notebook@npm:^4, @jupyterlab/notebook@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/notebook@npm:4.0.11"
+ dependencies:
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/cells": ^4.0.11
+ "@jupyterlab/codeeditor": ^4.0.11
+ "@jupyterlab/codemirror": ^4.0.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/documentsearch": ^4.0.11
+ "@jupyterlab/lsp": ^4.0.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/settingregistry": ^4.0.11
+ "@jupyterlab/statusbar": ^4.0.11
+ "@jupyterlab/toc": ^6.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/messaging": ^2.0.1
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.0
react: ^18.2.0
- checksum: 1388bea973c093b82ac110bf115f342fb5e2cae9c855f0704f08882df8a3714566fccefbb3d85903fdb30170bae4fdfd29b3785473850bb3e91e8cdfc3658265
+ checksum: e8bbfca1cba7b78427fcca1211266ba989e4950da2361a3606a6ab8485ab4618c6f1a321463a8974b96c7a77d4d00ed9b293abf68f9ce84731bd0e9687ec8be7
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/observables@npm:^5.0.11":
+ version: 5.0.11
+ resolution: "@jupyterlab/observables@npm:5.0.11"
+ dependencies:
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ checksum: b47cc8e73db9cc856454c0db530b774a4d11f6ade066b52fe521b0cec2b7a8f5eebfe2c0f0f7ada976474698dab9a77bdef3feea2960ea75bcf7052404ebec16
languageName: node
linkType: hard
@@ -2919,25 +2974,48 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/outputarea@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/outputarea@npm:4.0.3"
+"@jupyterlab/observables@npm:^5.1.0-beta.1":
+ version: 5.1.0-beta.1
+ resolution: "@jupyterlab/observables@npm:5.1.0-beta.1"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/rendermime-interfaces": ^3.8.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/translation": ^4.0.3
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
- checksum: bd4a60ea43cd3117c8cc6b6aee094f0a4e37c3635051fbf13857669547847ce52f6e186cb2c329893e333e6113998899a9d5fa90727926a73223d5082c936471
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ checksum: 4bdc64771692a9613351251113ca8cd28f69fac00957d500de4cbcb595999bf234c3a61d36ed074d390b7085cde5e2e4d4be59a63f55db271597b5f2f4c07675
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/outputarea@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/outputarea@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
+ checksum: f9c69319d0bd144f35840d72784b606153fe62d44b51a22f11ab4ee7088a262955dff4ea86de8b1bd929841294c8c5a3fadff37fa46b15ca53586868bb498cad
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/rendermime-interfaces@npm:^3.8.11":
+ version: 3.8.11
+ resolution: "@jupyterlab/rendermime-interfaces@npm:3.8.11"
+ dependencies:
+ "@lumino/coreutils": ^1.11.0 || ^2.1.2
+ "@lumino/widgets": ^1.37.2 || ^2.3.0
+ checksum: 277373ca5e05bfbcd6e88c38cdf5c1bdfc052beaf1cac120cb3a458d96cce949b17c9b47cfd16cfcf2e2241530fa9f3062343512084b79a549f6bde84a846c84
languageName: node
linkType: hard
@@ -2951,27 +3029,76 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/rendermime@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/rendermime@npm:4.0.3"
+"@jupyterlab/rendermime-interfaces@npm:^3.9.0-beta.1":
+ version: 3.9.0-beta.1
+ resolution: "@jupyterlab/rendermime-interfaces@npm:3.9.0-beta.1"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/nbformat": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/rendermime-interfaces": ^3.8.3
- "@jupyterlab/services": ^7.0.3
- "@jupyterlab/translation": ^4.0.3
- "@lumino/coreutils": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
+ "@lumino/coreutils": ^1.11.0 || ^2.1.2
+ "@lumino/widgets": ^1.37.2 || ^2.3.1
+ checksum: b8c6cd6af79bb80ace56da753cbfdeba0a7739ed90160fe67cf9f209ee3ee220a616a24422720e6702a2944d23e8193ff1ad6f1d881be0bf8e126e93480fd714
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/rendermime@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/rendermime@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/translation": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
lodash.escape: ^4.0.1
- checksum: 8e5d390ee84eb0f2017b80c2ccf4dc318ef53c3ebb52e84056d76b2bccbcb755190b7712e99936ad0610f03fe4b0a9a0f2f5ed854e21856c0cc39371ce1fad01
+ checksum: cb76d6824caac3b50e4e38c171f7db7239deb4499b0be237d51c68b3195c4d2edb1e4fa42253183949459ae0b78a1acbdc936b1eba51c8472bcf89586d267975
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/rendermime@npm:^4.1.0-beta.0":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/rendermime@npm:4.1.0-beta.1"
+ dependencies:
+ "@jupyterlab/apputils": ^4.2.0-beta.1
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/nbformat": ^4.1.0-beta.1
+ "@jupyterlab/observables": ^5.1.0-beta.1
+ "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
+ "@jupyterlab/services": ^7.1.0-beta.1
+ "@jupyterlab/translation": ^4.1.0-beta.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.1
+ lodash.escape: ^4.0.1
+ checksum: 22f87e09f8c27d06c0f9bb72eb45284c9182411318bf976c4915aad68b6e89d3a4101580a37ee32473d59afdecc30354fb5a5baa2db622cd411241321fa69a8d
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/services@npm:^7, @jupyterlab/services@npm:^7.0.11":
+ version: 7.0.11
+ resolution: "@jupyterlab/services@npm:7.0.11"
+ dependencies:
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/settingregistry": ^4.0.11
+ "@jupyterlab/statedb": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/polling": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ ws: ^8.11.0
+ checksum: 6539cc1b34f29feaab094a570576890984fe9cc3f0140dc3b17cca1ead878197bd3d2ca01b4f6fe6808ee5dca8f720769e0db10a27f1fcad1759b6ead9631b24
languageName: node
linkType: hard
-"@jupyterlab/services@npm:^7, @jupyterlab/services@npm:^7.0.0, @jupyterlab/services@npm:^7.0.3":
+"@jupyterlab/services@npm:^7.0.0, @jupyterlab/services@npm:^7.0.3":
version: 7.0.3
resolution: "@jupyterlab/services@npm:7.0.3"
dependencies:
@@ -2990,6 +3117,44 @@ __metadata:
languageName: node
linkType: hard
+"@jupyterlab/services@npm:^7.1.0-beta.0, @jupyterlab/services@npm:^7.1.0-beta.1":
+ version: 7.1.0-beta.1
+ resolution: "@jupyterlab/services@npm:7.1.0-beta.1"
+ dependencies:
+ "@jupyter/ydoc": ^1.1.1
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/nbformat": ^4.1.0-beta.1
+ "@jupyterlab/settingregistry": ^4.1.0-beta.1
+ "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/polling": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ ws: ^8.11.0
+ checksum: 8c0728901e1e80c069aff11abe4c5716502bfb133cab5592a844d1dd6db528212344522b0a15b47aa4c2ade1da9a59d480563313b2a263f291dfb96e605ff08c
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/settingregistry@npm:^4, @jupyterlab/settingregistry@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/settingregistry@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/nbformat": ^4.0.11
+ "@jupyterlab/statedb": ^4.0.11
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ "@rjsf/utils": ^5.1.0
+ ajv: ^8.12.0
+ json5: ^2.2.3
+ peerDependencies:
+ react: ">=16"
+ checksum: 97d06a08eff0589e83c40611f50e765dc8c75b33f821bee86defdb856c7747276174cc3370374159a37ae1393779cf18634fbca69072db447c053ccb872f3117
+ languageName: node
+ linkType: hard
+
"@jupyterlab/settingregistry@npm:^4.0.3":
version: 4.0.3
resolution: "@jupyterlab/settingregistry@npm:4.0.3"
@@ -3009,6 +3174,38 @@ __metadata:
languageName: node
linkType: hard
+"@jupyterlab/settingregistry@npm:^4.1.0-beta.0, @jupyterlab/settingregistry@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/settingregistry@npm:4.1.0-beta.1"
+ dependencies:
+ "@jupyterlab/nbformat": ^4.1.0-beta.1
+ "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ "@rjsf/utils": ^5.13.4
+ ajv: ^8.12.0
+ json5: ^2.2.3
+ peerDependencies:
+ react: ">=16"
+ checksum: c3ceb6cbf9bc061e9ad0f44d6fe06f59ed4e9f6223f7307c0c30112e20da7da4361928c0380dbdcf92fe0e533934d9c032881165d8546ce51707188696630dd3
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/statedb@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/statedb@npm:4.0.11"
+ dependencies:
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ checksum: b0637af63185b71db698ce572d2fcdaee94e6fe93659ead1e2301cb6ee1ec2b16164a61275cb44af3cac679d40b1a2c3492f20b44d9eb07a75440706627cd733
+ languageName: node
+ linkType: hard
+
"@jupyterlab/statedb@npm:^4.0.3":
version: 4.0.3
resolution: "@jupyterlab/statedb@npm:4.0.3"
@@ -3022,6 +3219,35 @@ __metadata:
languageName: node
linkType: hard
+"@jupyterlab/statedb@npm:^4.1.0-beta.0, @jupyterlab/statedb@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/statedb@npm:4.1.0-beta.1"
+ dependencies:
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ checksum: a4f24554c41db7c5b008d544086038a6c8d37d53cf3d6f8fa911ac28ec4380a67cbb2f2fbcdb48c0ba48adb63b11efda70bfcb90770ab24bfd80b2723a6c2c3e
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/statusbar@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/statusbar@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
+ react: ^18.2.0
+ checksum: cb9d8e51533d1b0dd13f0459b3f33bab23c23dffdfb58467e58d47d0cb09f61fce320b67c50e3e5a2328fba9f7a815d4f483f460b6bea8b34cf7fcd02144fe10
+ languageName: node
+ linkType: hard
+
"@jupyterlab/statusbar@npm:^4.0.3":
version: 4.0.3
resolution: "@jupyterlab/statusbar@npm:4.0.3"
@@ -3038,15 +3264,31 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/testing@npm:^4.0.3":
- version: 4.0.3
- resolution: "@jupyterlab/testing@npm:4.0.3"
+"@jupyterlab/statusbar@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/statusbar@npm:4.1.0-beta.1"
+ dependencies:
+ "@jupyterlab/ui-components": ^4.1.0-beta.1
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.1
+ react: ^18.2.0
+ checksum: c9b48d15e5c6bb0337d583cf0ab47393f7d7cd84dacb9797d9cbd7517bca877a333ba7a75e8d96d93e68d090c06d3d8f58589ba6c1dcb8c73233022c282c24dd
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/testing@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/testing@npm:4.0.11"
dependencies:
"@babel/core": ^7.10.2
"@babel/preset-env": ^7.10.2
- "@jupyterlab/coreutils": ^6.0.3
- "@lumino/coreutils": ^2.1.1
- "@lumino/signaling": ^2.1.1
+ "@jupyterlab/coreutils": ^6.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/signaling": ^2.1.2
child_process: ~1.0.2
deepmerge: ^4.2.2
fs-extra: ^10.1.0
@@ -3059,41 +3301,55 @@ __metadata:
ts-jest: ^29.1.0
peerDependencies:
typescript: ">=4.3"
- checksum: a69e405d9faa36f48d58b2dc08ad7961adf043d33b834180afe6fb4ced3fa917418c831b3fc233c65f7ec92666828a418fe3135d4502980122763189b103a007
+ checksum: 089bdc24bb4376e8eec4129aa36640925d0f5e68388be41ed292f6c1c9be350495caa854bab9779d1431ab17ad272288bb5659a1d555335c2638c57a518cf116
languageName: node
linkType: hard
"@jupyterlab/testutils@npm:^4":
- version: 4.0.3
- resolution: "@jupyterlab/testutils@npm:4.0.3"
- dependencies:
- "@jupyterlab/application": ^4.0.3
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/notebook": ^4.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/testing": ^4.0.3
- checksum: 8194b2551d7c921aa53c18f24d53e0a29fb8a60e4b4c66cd5873da15bf7c36910874de39f4750f42c0340419548cb8917cdfa9f69fa7cce55e6d604e85244dab
+ version: 4.0.11
+ resolution: "@jupyterlab/testutils@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/application": ^4.0.11
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/notebook": ^4.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/testing": ^4.0.11
+ checksum: fbd17fe208d7cf8a1a80cfbe3aa0e08f9ac15436960729da9a870ce4a77c562b3dd9ac585f393faae25ebe23e7ee35a399b247284aa11d7856248a48f036e218
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/toc@npm:^6.0.11":
+ version: 6.0.11
+ resolution: "@jupyterlab/toc@npm:6.0.11"
+ dependencies:
+ "@jupyterlab/apputils": ^4.1.11
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/docregistry": ^4.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime": ^4.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/translation": ^4.0.11
+ "@jupyterlab/ui-components": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/widgets": ^2.3.0
+ react: ^18.2.0
+ checksum: d93d003e65b36d648407c20d19d232c0c232e9c92757b7910a170a5bfc721ec2b229a97efb553726bfa940f570b54ec3dabf8d1bae07ab84a577903d1fd039e1
languageName: node
linkType: hard
-"@jupyterlab/toc@npm:^6.0.3":
- version: 6.0.3
- resolution: "@jupyterlab/toc@npm:6.0.3"
+"@jupyterlab/translation@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/translation@npm:4.0.11"
dependencies:
- "@jupyterlab/apputils": ^4.1.3
- "@jupyterlab/coreutils": ^6.0.3
- "@jupyterlab/docregistry": ^4.0.3
- "@jupyterlab/observables": ^5.0.3
- "@jupyterlab/rendermime": ^4.0.3
- "@jupyterlab/translation": ^4.0.3
- "@jupyterlab/ui-components": ^4.0.3
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/messaging": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/widgets": ^2.1.1
- react: ^18.2.0
- checksum: 8098824692a3e5ac786d476b7c14abc4adbef0db3f0f38143969594dd4fd3069f92335ac417fc45d42d1577d3555791c6aecdf2361acb22d3902b93b899b31f0
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/services": ^7.0.11
+ "@jupyterlab/statedb": ^4.0.11
+ "@lumino/coreutils": ^2.1.2
+ checksum: 1e65d0a162d56724a99dcb7eec874b80e78f8113e14d9cc1461f56cebef9a21604baf1fffd43cd62f186942b63fd49effec2b1960e4e3aca0a6cbe03df46bd51
languageName: node
linkType: hard
@@ -3110,7 +3366,49 @@ __metadata:
languageName: node
linkType: hard
-"@jupyterlab/ui-components@npm:^4, @jupyterlab/ui-components@npm:^4.0.0, @jupyterlab/ui-components@npm:^4.0.3":
+"@jupyterlab/translation@npm:^4.1.0-beta.0, @jupyterlab/translation@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/translation@npm:4.1.0-beta.1"
+ dependencies:
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
+ "@jupyterlab/services": ^7.1.0-beta.1
+ "@jupyterlab/statedb": ^4.1.0-beta.1
+ "@lumino/coreutils": ^2.1.2
+ checksum: bc6b2d72f8124bf39865a037a462bbd8c394255dce6c8ce23b11f11d9a886019b4109cebc73969d7d70ac1651daeef58cee3ac3e982afd713c6987ddd92fee97
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/ui-components@npm:^4, @jupyterlab/ui-components@npm:^4.0.11":
+ version: 4.0.11
+ resolution: "@jupyterlab/ui-components@npm:4.0.11"
+ dependencies:
+ "@jupyterlab/coreutils": ^6.0.11
+ "@jupyterlab/observables": ^5.0.11
+ "@jupyterlab/rendermime-interfaces": ^3.8.11
+ "@jupyterlab/translation": ^4.0.11
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/commands": ^2.1.3
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.0
+ "@rjsf/core": ^5.1.0
+ "@rjsf/utils": ^5.1.0
+ react: ^18.2.0
+ react-dom: ^18.2.0
+ typestyle: ^2.0.4
+ peerDependencies:
+ react: ^18.2.0
+ checksum: 0ad2fcdcb531ffc4da4f475c24520007d65190c70bfe07888f4284256754e15ffb77d23f02a6ce44688bad0103484cba22327db49796abb13f8dfc335ea2373d
+ languageName: node
+ linkType: hard
+
+"@jupyterlab/ui-components@npm:^4.0.0, @jupyterlab/ui-components@npm:^4.0.3":
version: 4.0.3
resolution: "@jupyterlab/ui-components@npm:4.0.3"
dependencies:
@@ -3139,6 +3437,37 @@ __metadata:
languageName: node
linkType: hard
+"@jupyterlab/ui-components@npm:^4.1.0-beta.0, @jupyterlab/ui-components@npm:^4.1.0-beta.1":
+ version: 4.1.0-beta.1
+ resolution: "@jupyterlab/ui-components@npm:4.1.0-beta.1"
+ dependencies:
+ "@jupyter/react-components": ^0.13.3
+ "@jupyter/web-components": ^0.13.3
+ "@jupyterlab/coreutils": ^6.1.0-beta.1
+ "@jupyterlab/observables": ^5.1.0-beta.1
+ "@jupyterlab/rendermime-interfaces": ^3.9.0-beta.1
+ "@jupyterlab/translation": ^4.1.0-beta.1
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/messaging": ^2.0.1
+ "@lumino/polling": ^2.1.2
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ "@lumino/widgets": ^2.3.1
+ "@rjsf/core": ^5.13.4
+ "@rjsf/utils": ^5.13.4
+ react: ^18.2.0
+ react-dom: ^18.2.0
+ typestyle: ^2.0.4
+ peerDependencies:
+ react: ^18.2.0
+ checksum: b6fa63c3df4754083674ff957a89c9db16eee1b7e650657735d144b3218eb1a070b82f6584882e4e9fbeafd568a23390f08c2bdf68bfc5a8414d652b84bb04b8
+ languageName: node
+ linkType: hard
+
"@lerna/child-process@npm:6.6.2":
version: 6.6.2
resolution: "@lerna/child-process@npm:6.6.2"
@@ -3389,151 +3718,220 @@ __metadata:
languageName: node
linkType: hard
-"@lumino/algorithm@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/algorithm@npm:2.0.0"
- checksum: 663edf536e94397b449c6a2643a735e602fbb396dec86b56ad1193a768dce27c6e7da5ad0384aa90086ea44cbb64dde3f9d565e9fd81858f1eb0c6b4253f3b94
+"@lumino/algorithm@npm:^2.0.0, @lumino/algorithm@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/algorithm@npm:2.0.1"
+ checksum: cbf7fcf6ee6b785ea502cdfddc53d61f9d353dcb9659343511d5cd4b4030be2ff2ca4c08daec42f84417ab0318a3d9972a17319fa5231693e109ab112dcf8000
languageName: node
linkType: hard
-"@lumino/application@npm:^2.1.1":
- version: 2.2.0
- resolution: "@lumino/application@npm:2.2.0"
+"@lumino/application@npm:^2.2.1":
+ version: 2.3.0
+ resolution: "@lumino/application@npm:2.3.0"
dependencies:
- "@lumino/commands": ^2.1.2
- "@lumino/coreutils": ^2.1.1
- "@lumino/widgets": ^2.2.0
- checksum: b62da44b21d110c5d3478a49549326974b59325b8c60a58905d8e5ef08210273cd013cb60387d1f082fb79377a230278e2cf63e345491b0a54c75fdcc6164a68
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/widgets": ^2.3.1
+ checksum: 9d1eb5bc972ed158bf219604a53bbac1262059bc5b0123d3e041974486b9cbb8288abeeec916f3b62f62d7c32e716cccf8b73e4832ae927e4f9dd4e4b0cd37ed
languageName: node
linkType: hard
-"@lumino/collections@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/collections@npm:2.0.0"
+"@lumino/collections@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/collections@npm:2.0.1"
dependencies:
- "@lumino/algorithm": ^2.0.0
- checksum: 4a7fc3571e92a1368a1ef01300ad7b6e0d4ff13cb78b89533d5962eea66d4a7550e15d8b80fa3ab1816b1a89382f35015f9dddf72ab04654c17e5b516b845d8f
+ "@lumino/algorithm": ^2.0.1
+ checksum: 8a29b7973a388a33c5beda0819dcd2dc2aad51a8406dcfd4581b055a9f77a39dc5800f7a8b4ae3c0bb97ae7b56a7a869e2560ffb7a920a28e93b477ba05907d6
+ languageName: node
+ linkType: hard
+
+"@lumino/commands@npm:^2.1.1, @lumino/commands@npm:^2.1.3, @lumino/commands@npm:^2.2.0":
+ version: 2.2.0
+ resolution: "@lumino/commands@npm:2.2.0"
+ dependencies:
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/keyboard": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ checksum: 093e9715491e5cef24bc80665d64841417b400f2fa595f9b60832a3b6340c405c94a6aa276911944a2c46d79a6229f3cc087b73f50852bba25ece805abd0fae9
+ languageName: node
+ linkType: hard
+
+"@lumino/coreutils@npm:^1.11.0 || ^2.0.0, @lumino/coreutils@npm:^1.11.0 || ^2.1.1, @lumino/coreutils@npm:^1.11.0 || ^2.1.2, @lumino/coreutils@npm:^2.1.0, @lumino/coreutils@npm:^2.1.1, @lumino/coreutils@npm:^2.1.2":
+ version: 2.1.2
+ resolution: "@lumino/coreutils@npm:2.1.2"
+ checksum: 7865317ac0676b448d108eb57ab5d8b2a17c101995c0f7a7106662d9fe6c859570104525f83ee3cda12ae2e326803372206d6f4c1f415a5b59e4158a7b81066f
languageName: node
linkType: hard
-"@lumino/commands@npm:^2.1.1, @lumino/commands@npm:^2.1.2":
+"@lumino/disposable@npm:^1.10.0 || ^2.0.0, @lumino/disposable@npm:^2.1.0, @lumino/disposable@npm:^2.1.1, @lumino/disposable@npm:^2.1.2":
version: 2.1.2
- resolution: "@lumino/commands@npm:2.1.2"
+ resolution: "@lumino/disposable@npm:2.1.2"
dependencies:
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/domutils": ^2.0.0
- "@lumino/keyboard": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/virtualdom": ^2.0.0
- checksum: c0b5ce8c5e1a86a98a90f54bb07b74742748110cf3362b86ff8328c1b5475c4dc05f1c4c9f50bf79e51c4e2ddc5cd69d6194f3d39dd5b58f357b0f30758bf35b
+ "@lumino/signaling": ^2.1.2
+ checksum: ac2fb2bf18d0b2939fda454f3db248a0ff6e8a77b401e586d1caa9293b3318f808b93a117c9c3ac27cd17aab545aea83b49108d099b9b2f5503ae2a012fbc6e2
languageName: node
linkType: hard
-"@lumino/coreutils@npm:^1.11.0 || ^2.0.0, @lumino/coreutils@npm:^1.11.0 || ^2.1.1, @lumino/coreutils@npm:^2.1.0, @lumino/coreutils@npm:^2.1.1":
- version: 2.1.1
- resolution: "@lumino/coreutils@npm:2.1.1"
- checksum: dfdeb2b0282caae17b6c3edfebadf4ce7c75fc879fa60cacfef9b154412f4b35e4ffd95b1833b99d8dacb99aaaa04513570129ae2024c3f33e2677a01f0576ce
+"@lumino/domutils@npm:^2.0.0, @lumino/domutils@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/domutils@npm:2.0.1"
+ checksum: 61fa0ab226869dfbb763fc426790cf5a43b7d6f4cea1364c6dd56d61c44bff05eea188d33ff847449608ef58ed343161bee15c19b96f35410e4ee35815dc611a
languageName: node
linkType: hard
-"@lumino/disposable@npm:^1.10.0 || ^2.0.0, @lumino/disposable@npm:^2.1.0, @lumino/disposable@npm:^2.1.1":
- version: 2.1.1
- resolution: "@lumino/disposable@npm:2.1.1"
+"@lumino/dragdrop@npm:^2.1.4":
+ version: 2.1.4
+ resolution: "@lumino/dragdrop@npm:2.1.4"
dependencies:
- "@lumino/signaling": ^2.1.1
- checksum: ed6cdfe13f3346178a087690d4e7baeccaed7e73ca23cb239765202409f5c01b4729a4058b4717f963462ee9ef2e5cb14ad1974e3163741267290edc3715c85c
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ checksum: 43d82484b13b38b612e7dfb424a840ed6a38d0db778af10655c4ba235c67b5b12db1683929b35a36ab2845f77466066dfd1ee25c1c273e8e175677eba9dc560d
languageName: node
linkType: hard
-"@lumino/domutils@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/domutils@npm:2.0.0"
- checksum: 4a146bfc1006d5fd00ccecc61d9803965d269c15c48c892fd87216336ce967f0db91f31203c5616c83d260224cddf25af4abb6704a6770757d19e44068f690bf
+"@lumino/keyboard@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/keyboard@npm:2.0.1"
+ checksum: cf33f13427a418efd7cc91061233321e860d5404f3d86397781028309bef86c8ad2d88276ffe335c1db0fe619bd9d1e60641c81f881696957a58703ee4652c3e
+ languageName: node
+ linkType: hard
+
+"@lumino/messaging@npm:^2.0.0, @lumino/messaging@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/messaging@npm:2.0.1"
+ dependencies:
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/collections": ^2.0.1
+ checksum: 964c4651c374b17452b4252b7d71500b32d2ecd87c192fc5bcf5d3bd1070661d78d07edcac8eca7d1d6fd50aa25992505485e1296d6dd995691b8e349b652045
+ languageName: node
+ linkType: hard
+
+"@lumino/polling@npm:^2.1.1, @lumino/polling@npm:^2.1.2":
+ version: 2.1.2
+ resolution: "@lumino/polling@npm:2.1.2"
+ dependencies:
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/signaling": ^2.1.2
+ checksum: fa9b401e6dbeb8f31d7e3ba485e8ef1e0c92b3f2da086239c0ed49931026f5d3528709193c93e031e35ac624fb4bbbfcdcbaa0e25eb797f36e2952e5cd91e9e3
+ languageName: node
+ linkType: hard
+
+"@lumino/properties@npm:^2.0.0, @lumino/properties@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/properties@npm:2.0.1"
+ checksum: c50173a935148cc4148fdaea119df1d323ee004ae16ab666800388d27e9730345629662d85f25591683329b39f0cdae60ee8c94e8943b4d0ef7d7370a38128d6
languageName: node
linkType: hard
-"@lumino/dragdrop@npm:^2.1.1, @lumino/dragdrop@npm:^2.1.2":
+"@lumino/signaling@npm:^1.10.0 || ^2.0.0, @lumino/signaling@npm:^2.1.0, @lumino/signaling@npm:^2.1.1, @lumino/signaling@npm:^2.1.2":
version: 2.1.2
- resolution: "@lumino/dragdrop@npm:2.1.2"
+ resolution: "@lumino/signaling@npm:2.1.2"
+ dependencies:
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/coreutils": ^2.1.2
+ checksum: ad7d7153db57980da899c43e412e6130316ef30b231a70250e7af49058db16cadb018c1417a2ea8083d83c48623cfe6b705fa82bf10216b1a8949aed9f4aca4e
+ languageName: node
+ linkType: hard
+
+"@lumino/virtualdom@npm:^2.0.0, @lumino/virtualdom@npm:^2.0.1":
+ version: 2.0.1
+ resolution: "@lumino/virtualdom@npm:2.0.1"
+ dependencies:
+ "@lumino/algorithm": ^2.0.1
+ checksum: cf59b6f15b430e13e9e657b7a0619b9056cd9ea7b2a87f407391d071c501b77403c302b6a66dca510382045e75b2e3fe551630bb391f1c6b33678057d4bec164
+ languageName: node
+ linkType: hard
+
+"@lumino/widgets@npm:^1.37.2 || ^2.1.1, @lumino/widgets@npm:^1.37.2 || ^2.3.0, @lumino/widgets@npm:^1.37.2 || ^2.3.1, @lumino/widgets@npm:^2.1.0, @lumino/widgets@npm:^2.1.1, @lumino/widgets@npm:^2.3.0, @lumino/widgets@npm:^2.3.1":
+ version: 2.3.1
+ resolution: "@lumino/widgets@npm:2.3.1"
dependencies:
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- checksum: 7ac64ec11423ec89fea937aa6c9ca818933ee98e775e500018a0a948f32171932033a1e302a48395cbe9bfeaa635acde2393fd935db14d7b1d569ca6a1daaa77
+ "@lumino/algorithm": ^2.0.1
+ "@lumino/commands": ^2.2.0
+ "@lumino/coreutils": ^2.1.2
+ "@lumino/disposable": ^2.1.2
+ "@lumino/domutils": ^2.0.1
+ "@lumino/dragdrop": ^2.1.4
+ "@lumino/keyboard": ^2.0.1
+ "@lumino/messaging": ^2.0.1
+ "@lumino/properties": ^2.0.1
+ "@lumino/signaling": ^2.1.2
+ "@lumino/virtualdom": ^2.0.1
+ checksum: ba7b8f8839c1cd2a41dbda13281094eb6981a270cccf4f25a0cf83686dcc526a2d8044a20204317630bb7dd4a04d65361408c7623a921549c781afca84b91c67
languageName: node
linkType: hard
-"@lumino/keyboard@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/keyboard@npm:2.0.0"
- checksum: 3852ba51f437b1c1d7e552a0f844592a05e04dd5012070dc6e4384c58965d1ebf536c6875c1b7bae03cde3c715ddc36cd290992fcefc1a8c39094194f4689fdd
+"@microsoft/fast-colors@npm:^5.3.0, @microsoft/fast-colors@npm:^5.3.1":
+ version: 5.3.1
+ resolution: "@microsoft/fast-colors@npm:5.3.1"
+ checksum: ff87f402faadb4b5aeee3d27762566c11807f927cd4012b8bbc7f073ca68de0e2197f95330ff5dfd7038f4b4f0e2f51b11feb64c5d570f5c598d37850a5daf60
languageName: node
linkType: hard
-"@lumino/messaging@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/messaging@npm:2.0.0"
+"@microsoft/fast-components@npm:^2.30.6":
+ version: 2.30.6
+ resolution: "@microsoft/fast-components@npm:2.30.6"
dependencies:
- "@lumino/algorithm": ^2.0.0
- "@lumino/collections": ^2.0.0
- checksum: 1e82dcf9b110834d4342dc63dfeac0ee780880fb99051bd82d00a1f83afd91b276c1cea5af85a414d92c527adc365d54f20ec780123b562f89c5a2cd3e96bf81
+ "@microsoft/fast-colors": ^5.3.0
+ "@microsoft/fast-element": ^1.10.1
+ "@microsoft/fast-foundation": ^2.46.2
+ "@microsoft/fast-web-utilities": ^5.4.1
+ tslib: ^1.13.0
+ checksum: 1fbf3b7c265bcbf6abcae4d2f72430f7f871104a3d8344f16667a4cc7b123698cdf2bab8b760cbed92ef761c4db350a67f570665c76b132d6996990ac93cbd4f
languageName: node
linkType: hard
-"@lumino/polling@npm:^2.1.1":
- version: 2.1.1
- resolution: "@lumino/polling@npm:2.1.1"
- dependencies:
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/signaling": ^2.1.1
- checksum: 69177b26d5fc541e72533cbe7d7f7999eea541d392f1082d20dbd9e1797e7d46fba47bae9c65c06f9ccb2780cbae636e9354d9bf4423b5e1020754d4b07d4f6b
+"@microsoft/fast-element@npm:^1.10.1, @microsoft/fast-element@npm:^1.12.0":
+ version: 1.12.0
+ resolution: "@microsoft/fast-element@npm:1.12.0"
+ checksum: bbff4e9c83106d1d74f3eeedc87bf84832429e78fee59c6a4ae8164ee4f42667503f586896bea72341b4d2c76c244a3cb0d4fd0d5d3732755f00357714dd609e
languageName: node
linkType: hard
-"@lumino/properties@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/properties@npm:2.0.0"
- checksum: 81187a11a779eed4e20ff0035e77dee99bd271b0cf649096c4e8809dd6bdd06955b1a974bc1a115e536f8d2840b30183bb78a362b2c6991824477df6d17e6c59
+"@microsoft/fast-foundation@npm:^2.46.2, @microsoft/fast-foundation@npm:^2.49.0, @microsoft/fast-foundation@npm:^2.49.4":
+ version: 2.49.4
+ resolution: "@microsoft/fast-foundation@npm:2.49.4"
+ dependencies:
+ "@microsoft/fast-element": ^1.12.0
+ "@microsoft/fast-web-utilities": ^5.4.1
+ tabbable: ^5.2.0
+ tslib: ^1.13.0
+ checksum: e979cd500aaba28090e8d9cdc6192933db01803c13288c11aded89aa54da6f0a70256ff2f249754b1c95d9abad369a18401e1df98d672e2823b83cf4cd88ad55
languageName: node
linkType: hard
-"@lumino/signaling@npm:^1.10.0 || ^2.0.0, @lumino/signaling@npm:^2.1.0, @lumino/signaling@npm:^2.1.1":
- version: 2.1.1
- resolution: "@lumino/signaling@npm:2.1.1"
+"@microsoft/fast-react-wrapper@npm:^0.3.18":
+ version: 0.3.22
+ resolution: "@microsoft/fast-react-wrapper@npm:0.3.22"
dependencies:
- "@lumino/algorithm": ^2.0.0
- "@lumino/coreutils": ^2.1.1
- checksum: 283ad4239b8577f68aca3d0b2606f73cc1c775f84cab25cf49aa6cd195f0d87949ef43fdff03b38b5a49ebbf2468581c6786d5f8b6159a04b2051260be5eab86
+ "@microsoft/fast-element": ^1.12.0
+ "@microsoft/fast-foundation": ^2.49.4
+ peerDependencies:
+ react: ">=16.9.0"
+ checksum: 6c7c0992dbaf91b32bc53b9d7ac21c7c8a89e6f45cc1b015cea1d1f3e766184ac7cea159479e34ddd30c347291cd5939e8d55696712086187deae37687054328
languageName: node
linkType: hard
-"@lumino/virtualdom@npm:^2.0.0":
- version: 2.0.0
- resolution: "@lumino/virtualdom@npm:2.0.0"
+"@microsoft/fast-web-utilities@npm:^5.4.1":
+ version: 5.4.1
+ resolution: "@microsoft/fast-web-utilities@npm:5.4.1"
dependencies:
- "@lumino/algorithm": ^2.0.0
- checksum: 6fc1d88e7d4a656be7664ccfc5745eb1d4e3d2034db0b11ad6abefcc642f22d265003eef0e1d02bca2e42b6da127123118c631369006f78e88a08885a6f36c25
+ exenv-es6: ^1.1.1
+ checksum: 303e87847f962944f474e3716c3eb305668243916ca9e0719e26bb9a32346144bc958d915c103776b3e552cea0f0f6233f839fad66adfdf96a8436b947288ca7
languageName: node
linkType: hard
-"@lumino/widgets@npm:^1.37.2 || ^2.1.1, @lumino/widgets@npm:^2.1.0, @lumino/widgets@npm:^2.1.1, @lumino/widgets@npm:^2.2.0":
- version: 2.2.0
- resolution: "@lumino/widgets@npm:2.2.0"
+"@microsoft/fast-web-utilities@npm:^6.0.0":
+ version: 6.0.0
+ resolution: "@microsoft/fast-web-utilities@npm:6.0.0"
dependencies:
- "@lumino/algorithm": ^2.0.0
- "@lumino/commands": ^2.1.2
- "@lumino/coreutils": ^2.1.1
- "@lumino/disposable": ^2.1.1
- "@lumino/domutils": ^2.0.0
- "@lumino/dragdrop": ^2.1.2
- "@lumino/keyboard": ^2.0.0
- "@lumino/messaging": ^2.0.0
- "@lumino/properties": ^2.0.0
- "@lumino/signaling": ^2.1.1
- "@lumino/virtualdom": ^2.0.0
- checksum: 963c0e54102b786a9cbf3467041c9f6f5c275af751afc311ebeba30d56516767c463c425e321bb389eaa66726dfc4420119a9a58573dcbf3110aba9515c80606
+ exenv-es6: ^1.1.1
+ checksum: b4b906dbbf626212446d5952c160b1f7e7ce72dd33087c7ed634cb2745c31767bab7d17fba0e9fc32e42984fc5bc0a9929b4f05cbbcbe52869abe3666b5bfa39
languageName: node
linkType: hard
@@ -4239,25 +4637,25 @@ __metadata:
languageName: node
linkType: hard
-"@rjsf/core@npm:^5.1.0":
- version: 5.10.0
- resolution: "@rjsf/core@npm:5.10.0"
+"@rjsf/core@npm:^5.1.0, @rjsf/core@npm:^5.13.4":
+ version: 5.16.1
+ resolution: "@rjsf/core@npm:5.16.1"
dependencies:
lodash: ^4.17.21
lodash-es: ^4.17.21
- markdown-to-jsx: ^7.2.1
- nanoid: ^3.3.6
+ markdown-to-jsx: ^7.4.0
+ nanoid: ^3.3.7
prop-types: ^15.8.1
peerDependencies:
- "@rjsf/utils": ^5.8.x
+ "@rjsf/utils": ^5.16.x
react: ^16.14.0 || >=17
- checksum: 11ff7f07e31ba13c1c6cb5e9aee94c4a5916a3f0013cb19fdeaea9254a77b50acee05d531a70adf92ee8a2024525916b20bb1af79d7afaadbd212a6124a57e5a
+ checksum: 2f88dc6af9dda8ec5c8cbac63f3f9e776a11fe363ce938aa7b5c7a3baaa84a7a2f3796ebf55b361a8cb65267a1715ab880a4743636fb88e06b0240d07f0e4c7b
languageName: node
linkType: hard
-"@rjsf/utils@npm:^5.1.0":
- version: 5.10.0
- resolution: "@rjsf/utils@npm:5.10.0"
+"@rjsf/utils@npm:^5.1.0, @rjsf/utils@npm:^5.13.4":
+ version: 5.16.1
+ resolution: "@rjsf/utils@npm:5.16.1"
dependencies:
json-schema-merge-allof: ^0.8.1
jsonpointer: ^5.0.1
@@ -4266,7 +4664,7 @@ __metadata:
react-is: ^18.2.0
peerDependencies:
react: ^16.14.0 || >=17
- checksum: 5f44334598cfee3c2bf9a9561680e9c91abce9240ddf54cdb800fbbbb69b182fa7cc1839127558b3661aadbb185fba676eb3189352c8a8b5eea83d0b46987fa7
+ checksum: 0c69527de4ab6f9d6ec4d1a5e05a31a0a38062d40abe2a2da7bc2324b20b08b0e90c188977ac4408f3b004c758c28097444746f3215e21e184c11cad7e9278c1
languageName: node
linkType: hard
@@ -4558,13 +4956,6 @@ __metadata:
languageName: node
linkType: hard
-"@types/prettier@npm:^2.1.5":
- version: 2.7.3
- resolution: "@types/prettier@npm:2.7.3"
- checksum: 705384209cea6d1433ff6c187c80dcc0b95d99d5c5ce21a46a9a58060c527973506822e428789d842761e0280d25e3359300f017fbe77b9755bc772ab3dc2f83
- languageName: node
- linkType: hard
-
"@types/prop-types@npm:*, @types/prop-types@npm:^15.0.0, @types/prop-types@npm:^15.7.5":
version: 15.7.5
resolution: "@types/prop-types@npm:15.7.5"
@@ -5040,20 +5431,7 @@ __metadata:
languageName: node
linkType: hard
-"abstract-leveldown@npm:^6.2.1":
- version: 6.3.0
- resolution: "abstract-leveldown@npm:6.3.0"
- dependencies:
- buffer: ^5.5.0
- immediate: ^3.2.3
- level-concat-iterator: ~2.0.0
- level-supports: ~1.0.0
- xtend: ~4.0.0
- checksum: 121a8509d8c6a540e656c2a69e5b8d853d4df71072011afefc868b98076991bb00120550e90643de9dc18889c675f62413409eeb4c8c204663124c7d215e4ec3
- languageName: node
- linkType: hard
-
-"abstract-leveldown@npm:~6.2.1, abstract-leveldown@npm:~6.2.3":
+"abstract-leveldown@npm:^6.2.1, abstract-leveldown@npm:~6.2.1, abstract-leveldown@npm:~6.2.3":
version: 6.2.3
resolution: "abstract-leveldown@npm:6.2.3"
dependencies:
@@ -5441,23 +5819,6 @@ __metadata:
languageName: node
linkType: hard
-"babel-jest@npm:^29.6.1":
- version: 29.6.1
- resolution: "babel-jest@npm:29.6.1"
- dependencies:
- "@jest/transform": ^29.6.1
- "@types/babel__core": ^7.1.14
- babel-plugin-istanbul: ^6.1.1
- babel-preset-jest: ^29.5.0
- chalk: ^4.0.0
- graceful-fs: ^4.2.9
- slash: ^3.0.0
- peerDependencies:
- "@babel/core": ^7.8.0
- checksum: bc46cfba468edde91f34a8292501d4448a39fab72d80d7d95f4349feb114fa21becb01def007d6166de7933ab9633bf5b5e1b72ba6ffeaa991f7abf014a2f61d
- languageName: node
- linkType: hard
-
"babel-jest@npm:^29.6.2":
version: 29.6.2
resolution: "babel-jest@npm:29.6.2"
@@ -5982,20 +6343,13 @@ __metadata:
languageName: node
linkType: hard
-"cli-spinners@npm:2.6.1":
+"cli-spinners@npm:2.6.1, cli-spinners@npm:^2.5.0":
version: 2.6.1
resolution: "cli-spinners@npm:2.6.1"
checksum: 423409baaa7a58e5104b46ca1745fbfc5888bbd0b0c5a626e052ae1387060839c8efd512fb127e25769b3dc9562db1dc1b5add6e0b93b7ef64f477feb6416a45
languageName: node
linkType: hard
-"cli-spinners@npm:^2.5.0":
- version: 2.9.0
- resolution: "cli-spinners@npm:2.9.0"
- checksum: a9c56e1f44457d4a9f4f535364e729cb8726198efa9e98990cfd9eda9e220dfa4ba12f92808d1be5e29029cdfead781db82dc8549b97b31c907d55f96aa9b0e2
- languageName: node
- linkType: hard
-
"cli-width@npm:^3.0.0":
version: 3.0.0
resolution: "cli-width@npm:3.0.0"
@@ -7371,7 +7725,7 @@ __metadata:
languageName: node
linkType: hard
-"execa@npm:5.0.0":
+"execa@npm:5.0.0, execa@npm:^5.0.0":
version: 5.0.0
resolution: "execa@npm:5.0.0"
dependencies:
@@ -7388,20 +7742,10 @@ __metadata:
languageName: node
linkType: hard
-"execa@npm:^5.0.0":
- version: 5.1.1
- resolution: "execa@npm:5.1.1"
- dependencies:
- cross-spawn: ^7.0.3
- get-stream: ^6.0.0
- human-signals: ^2.1.0
- is-stream: ^2.0.0
- merge-stream: ^2.0.0
- npm-run-path: ^4.0.1
- onetime: ^5.1.2
- signal-exit: ^3.0.3
- strip-final-newline: ^2.0.0
- checksum: fba9022c8c8c15ed862847e94c252b3d946036d7547af310e344a527e59021fd8b6bb0723883ea87044dc4f0201f949046993124a42ccb0855cae5bf8c786343
+"exenv-es6@npm:^1.1.1":
+ version: 1.1.1
+ resolution: "exenv-es6@npm:1.1.1"
+ checksum: 7f2aa12025e6f06c48dc286f380cf3183bb19c6017b36d91695034a3e5124a7235c4f8ff24ca2eb88ae801322f0f99605cedfcfd996a5fcbba7669320e2a448e
languageName: node
linkType: hard
@@ -7426,20 +7770,6 @@ __metadata:
languageName: node
linkType: hard
-"expect@npm:^29.6.1":
- version: 29.6.1
- resolution: "expect@npm:29.6.1"
- dependencies:
- "@jest/expect-utils": ^29.6.1
- "@types/node": "*"
- jest-get-type: ^29.4.3
- jest-matcher-utils: ^29.6.1
- jest-message-util: ^29.6.1
- jest-util: ^29.6.1
- checksum: 4e712e52c90f6c54e748fd2876be33c43ada6a59088ddf6a1acb08b18b3b97b3a672124684abe32599986d2f2a438d5afad148837ee06ea386d2a4bf0348de78
- languageName: node
- linkType: hard
-
"exponential-backoff@npm:^3.1.1":
version: 3.1.1
resolution: "exponential-backoff@npm:3.1.1"
@@ -7923,20 +8253,13 @@ __metadata:
languageName: node
linkType: hard
-"get-stream@npm:6.0.0":
+"get-stream@npm:6.0.0, get-stream@npm:^6.0.0":
version: 6.0.0
resolution: "get-stream@npm:6.0.0"
checksum: 587e6a93127f9991b494a566f4971cf7a2645dfa78034818143480a80587027bdd8826cdcf80d0eff4a4a19de0d231d157280f24789fc9cc31492e1dcc1290cf
languageName: node
linkType: hard
-"get-stream@npm:^6.0.0":
- version: 6.0.1
- resolution: "get-stream@npm:6.0.1"
- checksum: e04ecece32c92eebf5b8c940f51468cd53554dcbb0ea725b2748be583c9523d00128137966afce410b9b051eb2ef16d657cd2b120ca8edafcf5a65e81af63cad
- languageName: node
- linkType: hard
-
"get-symbol-description@npm:^1.0.0":
version: 1.0.0
resolution: "get-symbol-description@npm:1.0.0"
@@ -8057,17 +8380,17 @@ __metadata:
languageName: node
linkType: hard
-"glob@npm:^7.1.3, glob@npm:^7.1.4":
- version: 7.2.3
- resolution: "glob@npm:7.2.3"
+"glob@npm:^7.1.3, glob@npm:^7.1.4, glob@npm:~7.1.6":
+ version: 7.1.7
+ resolution: "glob@npm:7.1.7"
dependencies:
fs.realpath: ^1.0.0
inflight: ^1.0.4
inherits: 2
- minimatch: ^3.1.1
+ minimatch: ^3.0.4
once: ^1.3.0
path-is-absolute: ^1.0.0
- checksum: 29452e97b38fa704dabb1d1045350fb2467cf0277e155aa9ff7077e90ad81d1ea9d53d3ee63bd37c05b09a065e90f16aec4a65f5b8de401d1dac40bc5605d133
+ checksum: b61f48973bbdcf5159997b0874a2165db572b368b931135832599875919c237fc05c12984e38fe828e69aa8a921eb0e8a4997266211c517c9cfaae8a93988bb8
languageName: node
linkType: hard
@@ -8096,20 +8419,6 @@ __metadata:
languageName: node
linkType: hard
-"glob@npm:~7.1.6":
- version: 7.1.7
- resolution: "glob@npm:7.1.7"
- dependencies:
- fs.realpath: ^1.0.0
- inflight: ^1.0.4
- inherits: 2
- minimatch: ^3.0.4
- once: ^1.3.0
- path-is-absolute: ^1.0.0
- checksum: b61f48973bbdcf5159997b0874a2165db572b368b931135832599875919c237fc05c12984e38fe828e69aa8a921eb0e8a4997266211c517c9cfaae8a93988bb8
- languageName: node
- linkType: hard
-
"global-modules@npm:^2.0.0":
version: 2.0.0
resolution: "global-modules@npm:2.0.0"
@@ -8185,20 +8494,13 @@ __metadata:
languageName: node
linkType: hard
-"graceful-fs@npm:4.2.10":
+"graceful-fs@npm:4.2.10, graceful-fs@npm:^4.1.11, graceful-fs@npm:^4.1.15, graceful-fs@npm:^4.1.2, graceful-fs@npm:^4.1.6, graceful-fs@npm:^4.2.0, graceful-fs@npm:^4.2.4, graceful-fs@npm:^4.2.6, graceful-fs@npm:^4.2.9":
version: 4.2.10
resolution: "graceful-fs@npm:4.2.10"
checksum: 3f109d70ae123951905d85032ebeae3c2a5a7a997430df00ea30df0e3a6c60cf6689b109654d6fdacd28810a053348c4d14642da1d075049e6be1ba5216218da
languageName: node
linkType: hard
-"graceful-fs@npm:^4.1.11, graceful-fs@npm:^4.1.15, graceful-fs@npm:^4.1.2, graceful-fs@npm:^4.1.6, graceful-fs@npm:^4.2.0, graceful-fs@npm:^4.2.4, graceful-fs@npm:^4.2.6, graceful-fs@npm:^4.2.9":
- version: 4.2.11
- resolution: "graceful-fs@npm:4.2.11"
- checksum: ac85f94da92d8eb6b7f5a8b20ce65e43d66761c55ce85ac96df6865308390da45a8d3f0296dd3a663de65d30ba497bd46c696cc1e248c72b13d6d567138a4fc7
- languageName: node
- linkType: hard
-
"handlebars@npm:^4.7.7":
version: 4.7.7
resolution: "handlebars@npm:4.7.7"
@@ -8737,7 +9039,7 @@ __metadata:
languageName: node
linkType: hard
-"inquirer@npm:8.2.4":
+"inquirer@npm:8.2.4, inquirer@npm:^8.2.4":
version: 8.2.4
resolution: "inquirer@npm:8.2.4"
dependencies:
@@ -8760,29 +9062,6 @@ __metadata:
languageName: node
linkType: hard
-"inquirer@npm:^8.2.4":
- version: 8.2.5
- resolution: "inquirer@npm:8.2.5"
- dependencies:
- ansi-escapes: ^4.2.1
- chalk: ^4.1.1
- cli-cursor: ^3.1.0
- cli-width: ^3.0.0
- external-editor: ^3.0.3
- figures: ^3.0.0
- lodash: ^4.17.21
- mute-stream: 0.0.8
- ora: ^5.4.1
- run-async: ^2.4.0
- rxjs: ^7.5.5
- string-width: ^4.1.0
- strip-ansi: ^6.0.0
- through: ^2.3.6
- wrap-ansi: ^7.0.0
- checksum: f13ee4c444187786fb393609dedf6b30870115a57b603f2e6424f29a99abc13446fd45ee22461c33c9c40a92a60a8df62d0d6b25d74fc6676fa4cb211de55b55
- languageName: node
- linkType: hard
-
"internal-slot@npm:^1.0.5":
version: 1.0.5
resolution: "internal-slot@npm:1.0.5"
@@ -9081,20 +9360,13 @@ __metadata:
languageName: node
linkType: hard
-"is-stream@npm:2.0.0":
+"is-stream@npm:2.0.0, is-stream@npm:^2.0.0":
version: 2.0.0
resolution: "is-stream@npm:2.0.0"
checksum: 4dc47738e26bc4f1b3be9070b6b9e39631144f204fc6f87db56961220add87c10a999ba26cf81699f9ef9610426f69cb08a4713feff8deb7d8cadac907826935
languageName: node
linkType: hard
-"is-stream@npm:^2.0.0":
- version: 2.0.1
- resolution: "is-stream@npm:2.0.1"
- checksum: b8e05ccdf96ac330ea83c12450304d4a591f9958c11fd17bed240af8d5ffe08aedafa4c0f4cfccd4d28dc9d4d129daca1023633d5c11601a6cbc77521f6fae66
- languageName: node
- linkType: hard
-
"is-string@npm:^1.0.5, is-string@npm:^1.0.7":
version: 1.0.7
resolution: "is-string@npm:1.0.7"
@@ -9280,34 +9552,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-circus@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-circus@npm:29.6.1"
- dependencies:
- "@jest/environment": ^29.6.1
- "@jest/expect": ^29.6.1
- "@jest/test-result": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- chalk: ^4.0.0
- co: ^4.6.0
- dedent: ^0.7.0
- is-generator-fn: ^2.0.0
- jest-each: ^29.6.1
- jest-matcher-utils: ^29.6.1
- jest-message-util: ^29.6.1
- jest-runtime: ^29.6.1
- jest-snapshot: ^29.6.1
- jest-util: ^29.6.1
- p-limit: ^3.1.0
- pretty-format: ^29.6.1
- pure-rand: ^6.0.0
- slash: ^3.0.0
- stack-utils: ^2.0.3
- checksum: f3e39a74b601929448df92037f0599978d4d7a4b8f636f64e8020533d2d2b2f669d6729c80c6efed69341ca26753e5061e9787a0acd6c70af2127a94375ebb76
- languageName: node
- linkType: hard
-
"jest-circus@npm:^29.6.2":
version: 29.6.2
resolution: "jest-circus@npm:29.6.2"
@@ -9336,33 +9580,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-cli@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-cli@npm:29.6.1"
- dependencies:
- "@jest/core": ^29.6.1
- "@jest/test-result": ^29.6.1
- "@jest/types": ^29.6.1
- chalk: ^4.0.0
- exit: ^0.1.2
- graceful-fs: ^4.2.9
- import-local: ^3.0.2
- jest-config: ^29.6.1
- jest-util: ^29.6.1
- jest-validate: ^29.6.1
- prompts: ^2.0.1
- yargs: ^17.3.1
- peerDependencies:
- node-notifier: ^8.0.1 || ^9.0.0 || ^10.0.0
- peerDependenciesMeta:
- node-notifier:
- optional: true
- bin:
- jest: bin/jest.js
- checksum: f5854ffea977b9a12520ea71f8d0cc8a626cbb93d7e1e6eea18a2a1f2b25f70f1b6b08a89f11b4dc7dd36a1776a9ac2cf8ec5c7998086f913ee690c06c07c949
- languageName: node
- linkType: hard
-
"jest-cli@npm:^29.6.2":
version: 29.6.2
resolution: "jest-cli@npm:29.6.2"
@@ -9390,44 +9607,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-config@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-config@npm:29.6.1"
- dependencies:
- "@babel/core": ^7.11.6
- "@jest/test-sequencer": ^29.6.1
- "@jest/types": ^29.6.1
- babel-jest: ^29.6.1
- chalk: ^4.0.0
- ci-info: ^3.2.0
- deepmerge: ^4.2.2
- glob: ^7.1.3
- graceful-fs: ^4.2.9
- jest-circus: ^29.6.1
- jest-environment-node: ^29.6.1
- jest-get-type: ^29.4.3
- jest-regex-util: ^29.4.3
- jest-resolve: ^29.6.1
- jest-runner: ^29.6.1
- jest-util: ^29.6.1
- jest-validate: ^29.6.1
- micromatch: ^4.0.4
- parse-json: ^5.2.0
- pretty-format: ^29.6.1
- slash: ^3.0.0
- strip-json-comments: ^3.1.1
- peerDependencies:
- "@types/node": "*"
- ts-node: ">=9.0.0"
- peerDependenciesMeta:
- "@types/node":
- optional: true
- ts-node:
- optional: true
- checksum: 3a30afeb28cc5658ef9cd95f2551ab8a29641bb6d377eb239cba8e7522eb4611c9a98cdcf173d87f5ad7b5e1ad242c3cd5434a260107bd3c7e8305d05023e05c
- languageName: node
- linkType: hard
-
"jest-config@npm:^29.6.2":
version: 29.6.2
resolution: "jest-config@npm:29.6.2"
@@ -9466,18 +9645,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-diff@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-diff@npm:29.6.1"
- dependencies:
- chalk: ^4.0.0
- diff-sequences: ^29.4.3
- jest-get-type: ^29.4.3
- pretty-format: ^29.6.1
- checksum: c6350178ca27d92c7fd879790fb2525470c1ff1c5d29b1834a240fecd26c6904fb470ebddb98dc96dd85389c56c3b50e6965a1f5203e9236d213886ed9806219
- languageName: node
- linkType: hard
-
"jest-diff@npm:^29.6.2":
version: 29.6.2
resolution: "jest-diff@npm:29.6.2"
@@ -9499,19 +9666,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-each@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-each@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- chalk: ^4.0.0
- jest-get-type: ^29.4.3
- jest-util: ^29.6.1
- pretty-format: ^29.6.1
- checksum: 9d2ea7ed5326ee8c22523b22c66c85fe73754ea39f9b389881956508ee441392c61072a5fbf673e39beddd31d011bb94acae3edc77053ba4f9aa5c060114a5c8
- languageName: node
- linkType: hard
-
"jest-each@npm:^29.6.2":
version: 29.6.2
resolution: "jest-each@npm:29.6.2"
@@ -9546,20 +9700,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-environment-node@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-environment-node@npm:29.6.1"
- dependencies:
- "@jest/environment": ^29.6.1
- "@jest/fake-timers": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- jest-mock: ^29.6.1
- jest-util: ^29.6.1
- checksum: a50287e1ff29d131646bd09acc3222ac6ea0ad61e86bf73851d318ef2be0633a421b8558c4a15ddc67e0ffcfc32da7f6a0d8a2ddbfa85453837899dec88d256c
- languageName: node
- linkType: hard
-
"jest-environment-node@npm:^29.6.2":
version: 29.6.2
resolution: "jest-environment-node@npm:29.6.2"
@@ -9581,29 +9721,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-haste-map@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-haste-map@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- "@types/graceful-fs": ^4.1.3
- "@types/node": "*"
- anymatch: ^3.0.3
- fb-watchman: ^2.0.0
- fsevents: ^2.3.2
- graceful-fs: ^4.2.9
- jest-regex-util: ^29.4.3
- jest-util: ^29.6.1
- jest-worker: ^29.6.1
- micromatch: ^4.0.4
- walker: ^1.0.8
- dependenciesMeta:
- fsevents:
- optional: true
- checksum: 7c74d5a0f6aafa9f4e60fae7949d4774770c0243fb529c24f2f4c81229db479fa318dc8b81e8d226865aef1d600af10bd8404dd208e802318434b46f75d5d869
- languageName: node
- linkType: hard
-
"jest-haste-map@npm:^29.6.2":
version: 29.6.2
resolution: "jest-haste-map@npm:29.6.2"
@@ -9639,35 +9756,13 @@ __metadata:
languageName: node
linkType: hard
-"jest-leak-detector@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-leak-detector@npm:29.6.1"
- dependencies:
- jest-get-type: ^29.4.3
- pretty-format: ^29.6.1
- checksum: 5122d40c248effaede4c9ee3a99046a3f30088fef7bfc4af534678b432455161399357af46deb6423de7e05c6597920d6ee8cd570e26048886a90d541334f8c8
- languageName: node
- linkType: hard
-
"jest-leak-detector@npm:^29.6.2":
- version: 29.6.2
- resolution: "jest-leak-detector@npm:29.6.2"
- dependencies:
- jest-get-type: ^29.4.3
- pretty-format: ^29.6.2
- checksum: e00152acdba8aa8f9334775b77375947508051c34646fbeb702275da2b6ac6145f8cad6d5893112e76484d00fa8c0b4fd71b78ab0b4ef34950f5b6a84f37ae67
- languageName: node
- linkType: hard
-
-"jest-matcher-utils@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-matcher-utils@npm:29.6.1"
+ version: 29.6.2
+ resolution: "jest-leak-detector@npm:29.6.2"
dependencies:
- chalk: ^4.0.0
- jest-diff: ^29.6.1
jest-get-type: ^29.4.3
- pretty-format: ^29.6.1
- checksum: d2efa6aed6e4820758b732b9fefd315c7fa4508ee690da656e1c5ac4c1a0f4cee5b04c9719ee1fda9aeb883b4209186c145089ced521e715b9fa70afdfa4a9c6
+ pretty-format: ^29.6.2
+ checksum: e00152acdba8aa8f9334775b77375947508051c34646fbeb702275da2b6ac6145f8cad6d5893112e76484d00fa8c0b4fd71b78ab0b4ef34950f5b6a84f37ae67
languageName: node
linkType: hard
@@ -9683,23 +9778,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-message-util@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-message-util@npm:29.6.1"
- dependencies:
- "@babel/code-frame": ^7.12.13
- "@jest/types": ^29.6.1
- "@types/stack-utils": ^2.0.0
- chalk: ^4.0.0
- graceful-fs: ^4.2.9
- micromatch: ^4.0.4
- pretty-format: ^29.6.1
- slash: ^3.0.0
- stack-utils: ^2.0.3
- checksum: 3e7cb2ff087fe72255292e151d24e4fbb4cd6134885c0a67a4b302f233fe4110bf7580b176f427f05ad7550eb878ed94237209785d09d659a7d171ffa59c068f
- languageName: node
- linkType: hard
-
"jest-message-util@npm:^29.6.2":
version: 29.6.2
resolution: "jest-message-util@npm:29.6.2"
@@ -9717,18 +9795,7 @@ __metadata:
languageName: node
linkType: hard
-"jest-mock@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-mock@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- "@types/node": "*"
- jest-util: ^29.6.1
- checksum: 5e902f1a7eba1eb1a64eb6c19947fe1316834359d9869d0e2644d8979b9cad0465885dc4c9909c471888cddeea835c938cec6263d386d3d1aad720fc74e52ea1
- languageName: node
- linkType: hard
-
-"jest-mock@npm:^29.6.2":
+"jest-mock@npm:^29.6.1, jest-mock@npm:^29.6.2":
version: 29.6.2
resolution: "jest-mock@npm:29.6.2"
dependencies:
@@ -9758,16 +9825,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-resolve-dependencies@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-resolve-dependencies@npm:29.6.1"
- dependencies:
- jest-regex-util: ^29.4.3
- jest-snapshot: ^29.6.1
- checksum: cee0a0fe53fd4531492a526b6ccd32377baad1eff6e6c124f04e9dc920219fd23fd39be88bb9551ee68d5fe92a3af627b423c9bc65a2aa0ac8a223c0e74dbbbb
- languageName: node
- linkType: hard
-
"jest-resolve-dependencies@npm:^29.6.2":
version: 29.6.2
resolution: "jest-resolve-dependencies@npm:29.6.2"
@@ -9778,23 +9835,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-resolve@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-resolve@npm:29.6.1"
- dependencies:
- chalk: ^4.0.0
- graceful-fs: ^4.2.9
- jest-haste-map: ^29.6.1
- jest-pnp-resolver: ^1.2.2
- jest-util: ^29.6.1
- jest-validate: ^29.6.1
- resolve: ^1.20.0
- resolve.exports: ^2.0.0
- slash: ^3.0.0
- checksum: 9ce979a0f4a751bea58caea76415112df2a3f4d58e294019872244728aadd001f0ec20c873a3c805dd8f7c762143b3c14d00f87d124ed87c9981fbf8723090ef
- languageName: node
- linkType: hard
-
"jest-resolve@npm:^29.6.2":
version: 29.6.2
resolution: "jest-resolve@npm:29.6.2"
@@ -9812,35 +9852,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-runner@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-runner@npm:29.6.1"
- dependencies:
- "@jest/console": ^29.6.1
- "@jest/environment": ^29.6.1
- "@jest/test-result": ^29.6.1
- "@jest/transform": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- chalk: ^4.0.0
- emittery: ^0.13.1
- graceful-fs: ^4.2.9
- jest-docblock: ^29.4.3
- jest-environment-node: ^29.6.1
- jest-haste-map: ^29.6.1
- jest-leak-detector: ^29.6.1
- jest-message-util: ^29.6.1
- jest-resolve: ^29.6.1
- jest-runtime: ^29.6.1
- jest-util: ^29.6.1
- jest-watcher: ^29.6.1
- jest-worker: ^29.6.1
- p-limit: ^3.1.0
- source-map-support: 0.5.13
- checksum: 0e4dbda26669ae31fee32f8a62b3119bba510f2d17a098d6157b48a73ed2fc9842405bf893f3045c12b3632c7c0e3399fb22684b18ab5566aff4905b26c79a9a
- languageName: node
- linkType: hard
-
"jest-runner@npm:^29.6.2":
version: 29.6.2
resolution: "jest-runner@npm:29.6.2"
@@ -9870,36 +9881,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-runtime@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-runtime@npm:29.6.1"
- dependencies:
- "@jest/environment": ^29.6.1
- "@jest/fake-timers": ^29.6.1
- "@jest/globals": ^29.6.1
- "@jest/source-map": ^29.6.0
- "@jest/test-result": ^29.6.1
- "@jest/transform": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- chalk: ^4.0.0
- cjs-module-lexer: ^1.0.0
- collect-v8-coverage: ^1.0.0
- glob: ^7.1.3
- graceful-fs: ^4.2.9
- jest-haste-map: ^29.6.1
- jest-message-util: ^29.6.1
- jest-mock: ^29.6.1
- jest-regex-util: ^29.4.3
- jest-resolve: ^29.6.1
- jest-snapshot: ^29.6.1
- jest-util: ^29.6.1
- slash: ^3.0.0
- strip-bom: ^4.0.0
- checksum: 7c360c9694467d996f3d6d914fefa0e7bda554adda8c2b9fba31546dba663d71a64eda103ff68120a2422f3c16db8f0bc2c445923fe8fb934f37e53ef74fb429
- languageName: node
- linkType: hard
-
"jest-runtime@npm:^29.6.2":
version: 29.6.2
resolution: "jest-runtime@npm:29.6.2"
@@ -9930,35 +9911,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-snapshot@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-snapshot@npm:29.6.1"
- dependencies:
- "@babel/core": ^7.11.6
- "@babel/generator": ^7.7.2
- "@babel/plugin-syntax-jsx": ^7.7.2
- "@babel/plugin-syntax-typescript": ^7.7.2
- "@babel/types": ^7.3.3
- "@jest/expect-utils": ^29.6.1
- "@jest/transform": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/prettier": ^2.1.5
- babel-preset-current-node-syntax: ^1.0.0
- chalk: ^4.0.0
- expect: ^29.6.1
- graceful-fs: ^4.2.9
- jest-diff: ^29.6.1
- jest-get-type: ^29.4.3
- jest-matcher-utils: ^29.6.1
- jest-message-util: ^29.6.1
- jest-util: ^29.6.1
- natural-compare: ^1.4.0
- pretty-format: ^29.6.1
- semver: ^7.5.3
- checksum: e8f69d1fd4a29d354d4dca9eb2a22674b300f8ef509e4f1e75337c880414a00d2bdc9d3849a6855dbb5a76bfbe74603f33435378a3877e69f0838e4cc2244350
- languageName: node
- linkType: hard
-
"jest-snapshot@npm:^29.6.2":
version: 29.6.2
resolution: "jest-snapshot@npm:29.6.2"
@@ -9987,21 +9939,7 @@ __metadata:
languageName: node
linkType: hard
-"jest-util@npm:^29.0.0, jest-util@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-util@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- "@types/node": "*"
- chalk: ^4.0.0
- ci-info: ^3.2.0
- graceful-fs: ^4.2.9
- picomatch: ^2.2.3
- checksum: fc553556c1350c443449cadaba5fb9d604628e8b5ceb6ceaf4e7e08975b24277d0a14bf2e0f956024e03c23e556fcb074659423422a06fbedf2ab52978697ac7
- languageName: node
- linkType: hard
-
-"jest-util@npm:^29.6.2":
+"jest-util@npm:^29.0.0, jest-util@npm:^29.6.1, jest-util@npm:^29.6.2":
version: 29.6.2
resolution: "jest-util@npm:29.6.2"
dependencies:
@@ -10015,20 +9953,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-validate@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-validate@npm:29.6.1"
- dependencies:
- "@jest/types": ^29.6.1
- camelcase: ^6.2.0
- chalk: ^4.0.0
- jest-get-type: ^29.4.3
- leven: ^3.1.0
- pretty-format: ^29.6.1
- checksum: d2491f3f33d9bbc2dcaaa6acbff26f257b59c5eeceb65a52a9c1cec2f679b836ec2a4658b7004c0ef9d90cd0d9bd664e41d5ed6900f932bea742dd8e6b85e7f1
- languageName: node
- linkType: hard
-
"jest-validate@npm:^29.6.2":
version: 29.6.2
resolution: "jest-validate@npm:29.6.2"
@@ -10043,22 +9967,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-watcher@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-watcher@npm:29.6.1"
- dependencies:
- "@jest/test-result": ^29.6.1
- "@jest/types": ^29.6.1
- "@types/node": "*"
- ansi-escapes: ^4.2.1
- chalk: ^4.0.0
- emittery: ^0.13.1
- jest-util: ^29.6.1
- string-length: ^4.0.1
- checksum: 69bd5a602284fdce6eba5486c5c57aca6b511d91cb0907c34c104d6dd931e18ce67baa7f8e280fa473e5d81ea3e7b9e7d94f712c37ab0b3b8cc2aec30676955d
- languageName: node
- linkType: hard
-
"jest-watcher@npm:^29.6.2":
version: 29.6.2
resolution: "jest-watcher@npm:29.6.2"
@@ -10086,18 +9994,6 @@ __metadata:
languageName: node
linkType: hard
-"jest-worker@npm:^29.6.1":
- version: 29.6.1
- resolution: "jest-worker@npm:29.6.1"
- dependencies:
- "@types/node": "*"
- jest-util: ^29.6.1
- merge-stream: ^2.0.0
- supports-color: ^8.0.0
- checksum: 0af309ea4db17c4c47e84a9246f907960a15577683c005fdeafc8f3c06bc455136f95a6f28fa2a3e924b767eb4dacd9b40915a7707305f88586f099af3ac27a8
- languageName: node
- linkType: hard
-
"jest-worker@npm:^29.6.2":
version: 29.6.2
resolution: "jest-worker@npm:29.6.2"
@@ -10110,7 +10006,7 @@ __metadata:
languageName: node
linkType: hard
-"jest@npm:^29":
+"jest@npm:^29, jest@npm:^29.2.0":
version: 29.6.2
resolution: "jest@npm:29.6.2"
dependencies:
@@ -10129,25 +10025,6 @@ __metadata:
languageName: node
linkType: hard
-"jest@npm:^29.2.0":
- version: 29.6.1
- resolution: "jest@npm:29.6.1"
- dependencies:
- "@jest/core": ^29.6.1
- "@jest/types": ^29.6.1
- import-local: ^3.0.2
- jest-cli: ^29.6.1
- peerDependencies:
- node-notifier: ^8.0.1 || ^9.0.0 || ^10.0.0
- peerDependenciesMeta:
- node-notifier:
- optional: true
- bin:
- jest: bin/jest.js
- checksum: 7b8c0ca72f483e00ec19dcf9549f9a9af8ae468ab62925b148d714b58eb52d5fea9a082625193bc833d2d9b64cf65a11f3d37857636c5551af05c10aec4ce71b
- languageName: node
- linkType: hard
-
"js-tokens@npm:^3.0.0 || ^4.0.0, js-tokens@npm:^4.0.0":
version: 4.0.0
resolution: "js-tokens@npm:4.0.0"
@@ -10995,12 +10872,12 @@ __metadata:
languageName: node
linkType: hard
-"markdown-to-jsx@npm:^7.2.1":
- version: 7.2.1
- resolution: "markdown-to-jsx@npm:7.2.1"
+"markdown-to-jsx@npm:^7.4.0":
+ version: 7.4.0
+ resolution: "markdown-to-jsx@npm:7.4.0"
peerDependencies:
react: ">= 0.14.0"
- checksum: 0c8c715229044401ea48c2fc26c2554464100074959dafacdd9e4a0e849f0a190b02f39edb373bbdd95e38b8f910074b83b63d08752b8ae6be6ddcfb40ea50a0
+ checksum: 59959d14d7927ed8a97e42d39771e2b445b90fa098477fb6ab040f044d230517dc4a95ba38a4f924cfc965a96b32211d93def150a6184f0e51d2cefdc8cb415d
languageName: node
linkType: hard
@@ -11484,7 +11361,7 @@ __metadata:
languageName: node
linkType: hard
-"minimatch@npm:^3.0.4, minimatch@npm:^3.1.1, minimatch@npm:^3.1.2":
+"minimatch@npm:^3.0.4, minimatch@npm:^3.1.2":
version: 3.1.2
resolution: "minimatch@npm:3.1.2"
dependencies:
@@ -11639,20 +11516,13 @@ __metadata:
languageName: node
linkType: hard
-"minipass@npm:^5.0.0":
+"minipass@npm:^5.0.0, minipass@npm:^5.0.0 || ^6.0.2 || ^7.0.0":
version: 5.0.0
resolution: "minipass@npm:5.0.0"
checksum: 425dab288738853fded43da3314a0b5c035844d6f3097a8e3b5b29b328da8f3c1af6fc70618b32c29ff906284cf6406b6841376f21caaadd0793c1d5a6a620ea
languageName: node
linkType: hard
-"minipass@npm:^5.0.0 || ^6.0.2 || ^7.0.0":
- version: 7.0.2
- resolution: "minipass@npm:7.0.2"
- checksum: 46776de732eb7cef2c7404a15fb28c41f5c54a22be50d47b03c605bf21f5c18d61a173c0a20b49a97e7a65f78d887245066410642551e45fffe04e9ac9e325bc
- languageName: node
- linkType: hard
-
"minizlib@npm:^2.1.1, minizlib@npm:^2.1.2":
version: 2.1.2
resolution: "minizlib@npm:2.1.2"
@@ -11697,20 +11567,13 @@ __metadata:
languageName: node
linkType: hard
-"ms@npm:2.1.2":
+"ms@npm:2.1.2, ms@npm:^2.0.0":
version: 2.1.2
resolution: "ms@npm:2.1.2"
checksum: 673cdb2c3133eb050c745908d8ce632ed2c02d85640e2edb3ace856a2266a813b30c613569bf3354fdf4ea7d1a1494add3bfa95e2713baa27d0c2c71fc44f58f
languageName: node
linkType: hard
-"ms@npm:^2.0.0":
- version: 2.1.3
- resolution: "ms@npm:2.1.3"
- checksum: aa92de608021b242401676e35cfa5aa42dd70cbdc082b916da7fb925c542173e36bce97ea3e804923fe92c0ad991434e4a38327e15a1b5b5f945d66df615ae6d
- languageName: node
- linkType: hard
-
"multimatch@npm:5.0.0":
version: 5.0.0
resolution: "multimatch@npm:5.0.0"
@@ -11731,12 +11594,12 @@ __metadata:
languageName: node
linkType: hard
-"nanoid@npm:^3.3.6":
- version: 3.3.6
- resolution: "nanoid@npm:3.3.6"
+"nanoid@npm:^3.3.6, nanoid@npm:^3.3.7":
+ version: 3.3.7
+ resolution: "nanoid@npm:3.3.7"
bin:
nanoid: bin/nanoid.cjs
- checksum: 7d0eda657002738aa5206107bd0580aead6c95c460ef1bdd0b1a87a9c7ae6277ac2e9b945306aaa5b32c6dcb7feaf462d0f552e7f8b5718abfc6ead5c94a71b3
+ checksum: d36c427e530713e4ac6567d488b489a36582ef89da1d6d4e3b87eded11eb10d7042a877958c6f104929809b2ab0bafa17652b076cdf84324aa75b30b722204f2
languageName: node
linkType: hard
@@ -11784,7 +11647,7 @@ __metadata:
languageName: node
linkType: hard
-"node-fetch@npm:2.6.7":
+"node-fetch@npm:2.6.7, node-fetch@npm:^2.6.0, node-fetch@npm:^2.6.7":
version: 2.6.7
resolution: "node-fetch@npm:2.6.7"
dependencies:
@@ -11798,20 +11661,6 @@ __metadata:
languageName: node
linkType: hard
-"node-fetch@npm:^2.6.0, node-fetch@npm:^2.6.7":
- version: 2.6.12
- resolution: "node-fetch@npm:2.6.12"
- dependencies:
- whatwg-url: ^5.0.0
- peerDependencies:
- encoding: ^0.1.0
- peerDependenciesMeta:
- encoding:
- optional: true
- checksum: 3bc1655203d47ee8e313c0d96664b9673a3d4dd8002740318e9d27d14ef306693a4b2ef8d6525775056fd912a19e23f3ac0d7111ad8925877b7567b29a625592
- languageName: node
- linkType: hard
-
"node-gyp-build@npm:^4.3.0":
version: 4.6.0
resolution: "node-gyp-build@npm:4.6.0"
@@ -11980,13 +11829,6 @@ __metadata:
languageName: node
linkType: hard
-"npm-normalize-package-bin@npm:^2.0.0":
- version: 2.0.0
- resolution: "npm-normalize-package-bin@npm:2.0.0"
- checksum: 7c5379f9b188b564c4332c97bdd9a5d6b7b15f02b5823b00989d6a0e6fb31eb0280f02b0a924f930e1fcaf00e60fae333aec8923d2a4c7747613c7d629d8aa25
- languageName: node
- linkType: hard
-
"npm-normalize-package-bin@npm:^3.0.0, npm-normalize-package-bin@npm:^3.0.1":
version: 3.0.1
resolution: "npm-normalize-package-bin@npm:3.0.1"
@@ -12064,7 +11906,7 @@ __metadata:
languageName: node
linkType: hard
-"npm-registry-fetch@npm:14.0.3":
+"npm-registry-fetch@npm:14.0.3, npm-registry-fetch@npm:^14.0.0, npm-registry-fetch@npm:^14.0.3":
version: 14.0.3
resolution: "npm-registry-fetch@npm:14.0.3"
dependencies:
@@ -12094,21 +11936,6 @@ __metadata:
languageName: node
linkType: hard
-"npm-registry-fetch@npm:^14.0.0, npm-registry-fetch@npm:^14.0.3":
- version: 14.0.5
- resolution: "npm-registry-fetch@npm:14.0.5"
- dependencies:
- make-fetch-happen: ^11.0.0
- minipass: ^5.0.0
- minipass-fetch: ^3.0.0
- minipass-json-stream: ^1.0.1
- minizlib: ^2.1.2
- npm-package-arg: ^10.0.0
- proc-log: ^3.0.0
- checksum: c63649642955b424bc1baaff5955027144af312ae117ba8c24829e74484f859482591fe89687c6597d83e930c8054463eef23020ac69146097a72cc62ff10986
- languageName: node
- linkType: hard
-
"npm-run-all@npm:^4.1.5":
version: 4.1.5
resolution: "npm-run-all@npm:4.1.5"
@@ -12484,7 +12311,7 @@ __metadata:
languageName: node
linkType: hard
-"pacote@npm:15.1.1":
+"pacote@npm:15.1.1, pacote@npm:^15.0.0, pacote@npm:^15.0.8":
version: 15.1.1
resolution: "pacote@npm:15.1.1"
dependencies:
@@ -12512,34 +12339,6 @@ __metadata:
languageName: node
linkType: hard
-"pacote@npm:^15.0.0, pacote@npm:^15.0.8":
- version: 15.2.0
- resolution: "pacote@npm:15.2.0"
- dependencies:
- "@npmcli/git": ^4.0.0
- "@npmcli/installed-package-contents": ^2.0.1
- "@npmcli/promise-spawn": ^6.0.1
- "@npmcli/run-script": ^6.0.0
- cacache: ^17.0.0
- fs-minipass: ^3.0.0
- minipass: ^5.0.0
- npm-package-arg: ^10.0.0
- npm-packlist: ^7.0.0
- npm-pick-manifest: ^8.0.0
- npm-registry-fetch: ^14.0.0
- proc-log: ^3.0.0
- promise-retry: ^2.0.1
- read-package-json: ^6.0.0
- read-package-json-fast: ^3.0.0
- sigstore: ^1.3.0
- ssri: ^10.0.0
- tar: ^6.1.11
- bin:
- pacote: lib/bin.js
- checksum: c731572be2bf226b117eba076d242bd4cd8be7aa01e004af3374a304ad7ab330539e22644bc33de12d2a7d45228ccbcbf4d710f59c84414f3d09a1a95ee6f0bf
- languageName: node
- linkType: hard
-
"parent-module@npm:^1.0.0":
version: 1.0.1
resolution: "parent-module@npm:1.0.1"
@@ -12914,25 +12713,7 @@ __metadata:
languageName: node
linkType: hard
-"pretty-format@npm:^29.6.1":
- version: 29.6.1
- resolution: "pretty-format@npm:29.6.1"
- dependencies:
- "@jest/schemas": ^29.6.0
- ansi-styles: ^5.0.0
- react-is: ^18.0.0
- checksum: 6f923a2379a37a425241dc223d76f671c73c4f37dba158050575a54095867d565c068b441843afdf3d7c37bed9df4bbadf46297976e60d4149972b779474203a
- languageName: node
- linkType: hard
-
-"prismjs@npm:^1.27.0":
- version: 1.29.0
- resolution: "prismjs@npm:1.29.0"
- checksum: 007a8869d4456ff8049dc59404e32d5666a07d99c3b0e30a18bd3b7676dfa07d1daae9d0f407f20983865fd8da56de91d09cb08e6aa61f5bc420a27c0beeaf93
- languageName: node
- linkType: hard
-
-"prismjs@npm:~1.27.0":
+"prismjs@npm:^1.27.0, prismjs@npm:~1.27.0":
version: 1.27.0
resolution: "prismjs@npm:1.27.0"
checksum: 85c7f4a3e999073502cc9e1882af01e3709706369ec254b60bff1149eda701f40d02512acab956012dc7e61cfd61743a3a34c1bd0737e8dbacd79141e5698bbc
@@ -13219,7 +13000,7 @@ __metadata:
languageName: node
linkType: hard
-"react@npm:^18.2.0":
+"react@npm:>=17.0.0 <19.0.0, react@npm:^18.2.0":
version: 18.2.0
resolution: "react@npm:18.2.0"
dependencies:
@@ -13262,7 +13043,7 @@ __metadata:
languageName: node
linkType: hard
-"read-package-json@npm:5.0.1":
+"read-package-json@npm:5.0.1, read-package-json@npm:^5.0.0":
version: 5.0.1
resolution: "read-package-json@npm:5.0.1"
dependencies:
@@ -13274,18 +13055,6 @@ __metadata:
languageName: node
linkType: hard
-"read-package-json@npm:^5.0.0":
- version: 5.0.2
- resolution: "read-package-json@npm:5.0.2"
- dependencies:
- glob: ^8.0.1
- json-parse-even-better-errors: ^2.3.1
- normalize-package-data: ^4.0.0
- npm-normalize-package-bin: ^2.0.0
- checksum: 0882ac9cec1bc92fb5515e9727611fb2909351e1e5c840dce3503cbb25b4cd48eb44b61071986e0fc51043208161f07d364a7336206c8609770186818753b51a
- languageName: node
- linkType: hard
-
"read-package-json@npm:^6.0.0":
version: 6.0.4
resolution: "read-package-json@npm:6.0.4"
@@ -13969,7 +13738,7 @@ __metadata:
languageName: node
linkType: hard
-"sigstore@npm:^1.0.0, sigstore@npm:^1.3.0, sigstore@npm:^1.4.0":
+"sigstore@npm:^1.0.0, sigstore@npm:^1.4.0":
version: 1.8.0
resolution: "sigstore@npm:1.8.0"
dependencies:
@@ -14568,6 +14337,13 @@ __metadata:
languageName: node
linkType: hard
+"tabbable@npm:^5.2.0":
+ version: 5.3.3
+ resolution: "tabbable@npm:5.3.3"
+ checksum: 1aa56e1bb617cc10616c407f4e756f0607f3e2d30f9803664d70b85db037ca27e75918ed1c71443f3dc902e21dc9f991ce4b52d63a538c9b69b3218d3babcd70
+ languageName: node
+ linkType: hard
+
"table@npm:^6.0.9, table@npm:^6.8.1":
version: 6.8.1
resolution: "table@npm:6.8.1"
@@ -14601,7 +14377,7 @@ __metadata:
languageName: node
linkType: hard
-"tar@npm:6.1.11":
+"tar@npm:6.1.11, tar@npm:^6.1.11, tar@npm:^6.1.2":
version: 6.1.11
resolution: "tar@npm:6.1.11"
dependencies:
@@ -14615,20 +14391,6 @@ __metadata:
languageName: node
linkType: hard
-"tar@npm:^6.1.11, tar@npm:^6.1.2":
- version: 6.1.15
- resolution: "tar@npm:6.1.15"
- dependencies:
- chownr: ^2.0.0
- fs-minipass: ^2.0.0
- minipass: ^5.0.0
- minizlib: ^2.1.1
- mkdirp: ^1.0.3
- yallist: ^4.0.0
- checksum: f23832fceeba7578bf31907aac744ae21e74a66f4a17a9e94507acf460e48f6db598c7023882db33bab75b80e027c21f276d405e4a0322d58f51c7088d428268
- languageName: node
- linkType: hard
-
"temp-dir@npm:1.0.0":
version: 1.0.0
resolution: "temp-dir@npm:1.0.0"
@@ -14893,7 +14655,7 @@ __metadata:
languageName: node
linkType: hard
-"tslib@npm:^1.8.1":
+"tslib@npm:^1.13.0, tslib@npm:^1.8.1":
version: 1.14.1
resolution: "tslib@npm:1.14.1"
checksum: dbe628ef87f66691d5d2959b3e41b9ca0045c3ee3c7c7b906cc1e328b39f199bb1ad9e671c39025bd56122ac57dfbf7385a94843b1cc07c60a4db74795829acd
@@ -15048,7 +14810,7 @@ __metadata:
languageName: node
linkType: hard
-"typescript@npm:^3 || ^4":
+"typescript@npm:^3 || ^4, typescript@npm:~4.9.0":
version: 4.9.5
resolution: "typescript@npm:4.9.5"
bin:
@@ -15058,33 +14820,13 @@ __metadata:
languageName: node
linkType: hard
-"typescript@npm:~4.3.0":
- version: 4.3.5
- resolution: "typescript@npm:4.3.5"
- bin:
- tsc: bin/tsc
- tsserver: bin/tsserver
- checksum: bab033b5e2b0790dd35b77fd005df976ef80b8d84fd2c6e63cc31808151875beae9216e5a315fe7068e8499905c3c354248fe83272cdfc13b7705635f0c66c97
- languageName: node
- linkType: hard
-
-"typescript@patch:typescript@^3 || ^4#~builtin":
+"typescript@patch:typescript@^3 || ^4#~builtin, typescript@patch:typescript@~4.9.0#~builtin":
version: 4.9.5
- resolution: "typescript@patch:typescript@npm%3A4.9.5#~builtin::version=4.9.5&hash=23ec76"
- bin:
- tsc: bin/tsc
- tsserver: bin/tsserver
- checksum: ab417a2f398380c90a6cf5a5f74badd17866adf57f1165617d6a551f059c3ba0a3e4da0d147b3ac5681db9ac76a303c5876394b13b3de75fdd5b1eaa06181c9d
- languageName: node
- linkType: hard
-
-"typescript@patch:typescript@~4.3.0#~builtin":
- version: 4.3.5
- resolution: "typescript@patch:typescript@npm%3A4.3.5#~builtin::version=4.3.5&hash=dba6d9"
+ resolution: "typescript@patch:typescript@npm%3A4.9.5#~builtin::version=4.9.5&hash=289587"
bin:
tsc: bin/tsc
tsserver: bin/tsserver
- checksum: 365df18cf979c971ef9543b2acaa8694377a803f98e1804c41d0ede0b09d7046cb0cd98f2eaf3884b0fe923c01a60af1f653841bd8805c9715d5479c09a4ebe4
+ checksum: 1f8f3b6aaea19f0f67cba79057674ba580438a7db55057eb89cc06950483c5d632115c14077f6663ea76fd09fce3c190e6414bb98582ec80aa5a4eaf345d5b68
languageName: node
linkType: hard
@@ -16085,7 +15827,7 @@ __metadata:
languageName: node
linkType: hard
-"yargs-parser@npm:20.2.4":
+"yargs-parser@npm:20.2.4, yargs-parser@npm:^20.2.2, yargs-parser@npm:^20.2.3":
version: 20.2.4
resolution: "yargs-parser@npm:20.2.4"
checksum: d251998a374b2743a20271c2fd752b9fbef24eb881d53a3b99a7caa5e8227fcafd9abf1f345ac5de46435821be25ec12189a11030c12ee6481fef6863ed8b924
@@ -16099,13 +15841,6 @@ __metadata:
languageName: node
linkType: hard
-"yargs-parser@npm:^20.2.2, yargs-parser@npm:^20.2.3":
- version: 20.2.9
- resolution: "yargs-parser@npm:20.2.9"
- checksum: 8bb69015f2b0ff9e17b2c8e6bfe224ab463dd00ca211eece72a4cd8a906224d2703fb8a326d36fdd0e68701e201b2a60ed7cf81ce0fd9b3799f9fe7745977ae3
- languageName: node
- linkType: hard
-
"yargs@npm:16.2.0, yargs@npm:^16.2.0":
version: 16.2.0
resolution: "yargs@npm:16.2.0"