From dbd1884bf2b6386213abda41eae348bac084b072 Mon Sep 17 00:00:00 2001
From: bracesproul
Date: Tue, 30 Jul 2024 18:35:47 -0700
Subject: [PATCH 1/4] docs[minor]: Updated mistral docs

---
 .../docs/integrations/chat/mistral.ipynb      | 738 ++++++++++++++++++
 .../docs/integrations/chat/mistral.mdx        | 105 ---
 2 files changed, 738 insertions(+), 105 deletions(-)
 create mode 100644 docs/core_docs/docs/integrations/chat/mistral.ipynb
 delete mode 100644 docs/core_docs/docs/integrations/chat/mistral.mdx

diff --git a/docs/core_docs/docs/integrations/chat/mistral.ipynb b/docs/core_docs/docs/integrations/chat/mistral.ipynb
new file mode 100644
index 000000000000..0718227e5063
--- /dev/null
+++ b/docs/core_docs/docs/integrations/chat/mistral.ipynb
@@ -0,0 +1,738 @@
+{
+ "cells": [
+  {
+   "cell_type": "raw",
+   "id": "afaf8039",
+   "metadata": {},
+   "source": [
+    "---\n",
+    "sidebar_label: ChatMistralAI\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "e49f1e0d",
+   "metadata": {},
+   "source": [
+    "# ChatMistralAI\n",
+    "\n",
+    "This will help you get started with ChatMistralAI [chat models](/docs/concepts/#chat-models). For detailed documentation of all ChatMistralAI features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html).\n",
+    "\n",
+    "## Overview\n",
+    "### Integration details\n",
+    "\n",
+    "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/v0.2/docs/integrations/chat/mistralai) | Package downloads | Package latest |\n",
+    "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+    "| [ChatMistralAI](https://api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html) | [@langchain/mistralai](https://api.js.langchain.com/modules/langchain_mistralai.html) | ❌ | ❌ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/mistralai?style=flat-square&label=%20) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/mistralai?style=flat-square&label=%20) |\n",
+    "\n",
+    "### Model features\n",
+    "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+    "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+    "| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ❌ | \n",
+    "\n",
+    "## Setup\n",
+    "\n",
+    "To access `ChatMistralAI` models you'll need to create a Mistral AI account, get an API key, and install the `@langchain/mistralai` integration package.\n",
+    "\n",
+    "### Credentials\n",
+    "\n",
+    "Head [here](https://console.mistral.ai/) to sign up to Mistral AI and generate an API key. 
Once you've done this set the `MISTRAL_API_KEY` environment variable:\n",
+    "\n",
+    "```{=mdx}\n",
+    "\n",
+    "```bash\n",
+    "export MISTRAL_API_KEY=\"your-api-key\"\n",
+    "```\n",
+    "\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
+   "metadata": {},
+   "source": [
+    "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
+    "\n",
+    "```{=mdx}\n",
+    "\n",
+    "```bash\n",
+    "# export LANGCHAIN_TRACING_V2=\"true\"\n",
+    "# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
+    "```\n",
+    "\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
+   "metadata": {},
+   "source": [
+    "### Installation\n",
+    "\n",
+    "The LangChain ChatMistralAI integration lives in the `@langchain/mistralai` package:\n",
+    "\n",
+    "```{=mdx}\n",
+    "\n",
+    "```bash npm2yarn\n",
+    "npm i @langchain/mistralai\n",
+    "```\n",
+    "\n",
+    "```"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+   "metadata": {},
+   "source": [
+    "## Instantiation\n",
+    "\n",
+    "Now we can instantiate our model object and generate chat completions:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 1,
+   "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "3:7 - Exported variable 'llm' has or is using name 'MistralAICallOptions' from external module \"/Users/bracesproul/code/lang-chain-ai/langchainjs/libs/langchain-mistralai/dist/chat_models\" but cannot be named.\n"
+     ]
+    }
+   ],
+   "source": [
+    "import { ChatMistralAI } from \"@langchain/mistralai\" \n",
+    "\n",
+    "const llm = new ChatMistralAI({\n",
+    "  model: \"mistral-small\",\n",
+    "  temperature: 0,\n",
+    "  maxTokens: undefined,\n",
+    "  maxRetries: 2,\n",
+    "  // other params...\n",
+    "})"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "2b4f3e15",
+   "metadata": {},
+   "source": [
+    "## Invocation\n",
+    "\n",
+    "When sending chat messages to Mistral, there are a few requirements to follow:\n",
+    "\n",
+    "- The first message can _*not*_ be an assistant (ai) message.\n",
+    "- Messages _*must*_ alternate between user and assistant (ai) messages.\n",
+    "- Messages can _*not*_ end with an assistant (ai) or system message."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "62e0dbc3",
+   "metadata": {
+    "tags": []
+   },
+   "outputs": [
+    {
+     "data": {
+      "text/plain": [
+       "AIMessage {\n",
+       "  lc_serializable: \u001b[33mtrue\u001b[39m,\n",
+       "  lc_kwargs: {\n",
+       "    content: \u001b[32m`Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love pro`\u001b[39m... 
126 more characters,\n", + " name: \u001b[90mundefined\u001b[39m,\n", + " additional_kwargs: { tool_calls: [] },\n", + " response_metadata: {\n", + " tokenUsage: { completionTokens: \u001b[33m52\u001b[39m, promptTokens: \u001b[33m32\u001b[39m, totalTokens: \u001b[33m84\u001b[39m },\n", + " finish_reason: \u001b[32m\"stop\"\u001b[39m\n", + " },\n", + " tool_calls: [],\n", + " invalid_tool_calls: []\n", + "}" + ] + }, + "execution_count": 2, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "const aiMsg = await llm.invoke([\n", + " [\n", + " \"system\",\n", + " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n", + " ],\n", + " [\"human\", \"I love programming.\"],\n", + "])\n", + "aiMsg" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love programming\" translates to \"J'aime programmer\" in French. Let me know if you have any other questions or need further assistance!\n" + ] + } + ], + "source": [ + "console.log(aiMsg.content)" + ] + }, + { + "cell_type": "markdown", + "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8", + "metadata": {}, + "source": [ + "## Chaining\n", + "\n", + "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", + "metadata": {}, + "outputs": [ + { + "data": { + "text/plain": [ + "AIMessage {\n", + " lc_serializable: \u001b[33mtrue\u001b[39m,\n", + " lc_kwargs: {\n", + " content: \u001b[32m\"Ich liebe Programmierung. (German translation)\"\u001b[39m,\n", + " tool_calls: [],\n", + " invalid_tool_calls: [],\n", + " additional_kwargs: { tool_calls: [] },\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n", + " content: \u001b[32m\"Ich liebe Programmierung. 
(German translation)\"\u001b[39m,\n",
+       "  name: \u001b[90mundefined\u001b[39m,\n",
+       "  additional_kwargs: { tool_calls: [] },\n",
+       "  response_metadata: {\n",
+       "    tokenUsage: { completionTokens: \u001b[33m12\u001b[39m, promptTokens: \u001b[33m26\u001b[39m, totalTokens: \u001b[33m38\u001b[39m },\n",
+       "    finish_reason: \u001b[32m\"stop\"\u001b[39m\n",
+       "  },\n",
+       "  tool_calls: [],\n",
+       "  invalid_tool_calls: []\n",
+       "}"
+      ]
+     },
+     "execution_count": 4,
+     "metadata": {},
+     "output_type": "execute_result"
+    },
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "Error in handler LangChainTracer, handleChainEnd: AbortError: The user aborted a request.\n"
+     ]
+    }
+   ],
+   "source": [
+    "import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n",
+    "\n",
+    "const prompt = ChatPromptTemplate.fromMessages(\n",
+    "  [\n",
+    "    [\n",
+    "      \"system\",\n",
+    "      \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+    "    ],\n",
+    "    [\"human\", \"{input}\"],\n",
+    "  ]\n",
+    ")\n",
+    "\n",
+    "const chain = prompt.pipe(llm);\n",
+    "await chain.invoke(\n",
+    "  {\n",
+    "    input_language: \"English\",\n",
+    "    output_language: \"German\",\n",
+    "    input: \"I love programming.\",\n",
+    "  }\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
+   "metadata": {},
+   "source": [
+    "## Tool calling\n",
+    "\n",
+    "Mistral's API now supports tool calling and JSON mode!\n",
+    "The examples below demonstrate how to use them, along with how to use the `withStructuredOutput` method to easily compose structured output LLM calls."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "98d9034c",
+   "metadata": {},
+   "outputs": [
+    {
+     "name": "stdout",
+     "output_type": "stream",
+     "text": [
+      "[\n",
+      "  {\n",
+      "    name: \"calculator\",\n",
+      "    args: { operation: \"add\", number1: 2, number2: 2 },\n",
+      "    id: \"Qcw6so4hJ\"\n",
+      "  }\n",
+      "]\n"
+     ]
+    }
+   ],
+   "source": [
+    "import { ChatMistralAI } from \"@langchain/mistralai\";\n",
+    "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+    "import { z } from \"zod\";\n",
+    "import { tool } from \"@langchain/core/tools\";\n",
+    "\n",
+    "const calculatorSchema2 = z.object({\n",
+    "  operation: z\n",
+    "    .enum([\"add\", \"subtract\", \"multiply\", \"divide\"])\n",
+    "    .describe(\"The type of operation to execute.\"),\n",
+    "  number1: z.number().describe(\"The first number to operate on.\"),\n",
+    "  number2: z.number().describe(\"The second number to operate on.\"),\n",
+    "});\n",
+    "\n",
+    "const calculatorTool2 = tool((input) => {\n",
+    "  return JSON.stringify(input);\n",
+    "}, {\n",
+    "  name: \"calculator\",\n",
+    "  description: \"A simple calculator tool\",\n",
+    "  schema: calculatorSchema2,\n",
+    "});\n",
+    "\n",
+    "const llm2 = new ChatMistralAI({\n",
+    "  model: \"mistral-large-latest\",\n",
+    "});\n",
+    "\n",
+    "// Bind the tool to the model\n",
+    "const modelWithTool2 = llm2.bind({\n",
+    "  tools: [calculatorTool2],\n",
+    "});\n",
+    "\n",
+    "const prompt2 = ChatPromptTemplate.fromMessages([\n",
+    "  [\n",
+    "    \"system\",\n",
+    "    \"You are a helpful assistant who always needs to use a calculator.\",\n",
+    "  ],\n",
+    "  [\"human\", \"{input}\"],\n",
+    "]);\n",
+    "\n",
+    "// Chain your prompt and model together\n",
+    "const chain2 = prompt2.pipe(modelWithTool2);\n",
+    "\n",
+    "const response2 = await chain2.invoke({\n",
+    "  input: \"What is 2 + 2?\",\n",
+    "});\n",
+    "console.log(response2.tool_calls);"
+   ]
+  },
+  {
+   "cell_type": 
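"markdown",
+   "id": "9f1e0dcb",
+   "metadata": {},
+   "source": [
+    "### JSON mode\n",
+    "\n",
+    "In addition to tool calling, the Mistral API can constrain the model's output to valid JSON via the `response_format` call option. The cell below is a minimal, illustrative sketch (not executed here): it attaches `response_format: { type: \"json_object\" }` to the model with `.bind()`. When using JSON mode, it is best to explicitly ask for JSON in your prompt."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "0b66a351",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "// A minimal sketch of JSON mode (assumes the same `MISTRAL_API_KEY` setup as above).\n",
+    "// `response_format` is a call option, so it is attached with `.bind()`.\n",
+    "import { ChatMistralAI } from \"@langchain/mistralai\";\n",
+    "\n",
+    "const jsonModeLlm = new ChatMistralAI({\n",
+    "  model: \"mistral-large-latest\",\n",
+    "}).bind({\n",
+    "  response_format: {\n",
+    "    type: \"json_object\",\n",
+    "  },\n",
+    "});\n",
+    "\n",
+    "const jsonModeResult = await jsonModeLlm.invoke(\n",
+    "  \"Return a JSON object with a single key, 'answer', containing the result of 2 + 2.\"\n",
+    ");\n",
+    "console.log(jsonModeResult.content);"
+   ]
+  },
+  {
+   "cell_type": 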
"markdown", + "id": "e6ff9f76", + "metadata": {}, + "source": [ + "### `.withStructuredOutput({ ... })`\n", + "\n", + "Using the `.withStructuredOutput` method, you can easily make the LLM return structured output, given only a Zod or JSON schema:\n", + "\n", + "```{=mdx}\n", + "\n", + ":::note\n", + "The Mistral tool calling API requires descriptions for each tool field. If descriptions are not supplied, the API will error.\n", + ":::\n", + "\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "a8638d82", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{ operation: \"add\", number1: 2, number2: 2 }\n" + ] + } + ], + "source": [ + "import { ChatMistralAI } from \"@langchain/mistralai\";\n", + "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", + "import { z } from \"zod\";\n", + "\n", + "const calculatorSchema3 = z\n", + " .object({\n", + " operation: z\n", + " .enum([\"add\", \"subtract\", \"multiply\", \"divide\"])\n", + " .describe(\"The type of operation to execute.\"),\n", + " number1: z.number().describe(\"The first number to operate on.\"),\n", + " number2: z.number().describe(\"The second number to operate on.\"),\n", + " })\n", + " .describe(\"A simple calculator tool\");\n", + "\n", + "const llm3 = new ChatMistralAI({\n", + " model: \"mistral-large-latest\",\n", + "});\n", + "\n", + "// Pass the schema and tool name to the withStructuredOutput method\n", + "const modelWithTool3 = llm3.withStructuredOutput(calculatorSchema3);\n", + "\n", + "const prompt3 = ChatPromptTemplate.fromMessages([\n", + " [\n", + " \"system\",\n", + " \"You are a helpful assistant who always needs to use a calculator.\",\n", + " ],\n", + " [\"human\", \"{input}\"],\n", + "]);\n", + "\n", + "// Chain your prompt and model together\n", + "const chain3 = prompt3.pipe(modelWithTool3);\n", + "\n", + "const response3 = await chain3.invoke({\n", + " input: \"What is 2 + 2?\",\n", + "});\n", + "console.log(response3);" + ] + }, + { + "cell_type": "markdown", + "id": "38d8a048", + "metadata": {}, + "source": [ + "You can supply a \"name\" field to give the LLM additional context around what you are trying to generate. You can also pass 'includeRaw' to get the raw message back from the model too." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9786b41a", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{\n", + " raw: AIMessage {\n", + " lc_serializable: true,\n", + " lc_kwargs: {\n", + " content: \"\",\n", + " tool_calls: [\n", + " {\n", + " name: \"calculator\",\n", + " args: { operation: \"add\", number1: 2, number2: 2 },\n", + " id: \"qQz1AWzNd\"\n", + " }\n", + " ],\n", + " invalid_tool_calls: [],\n", + " additional_kwargs: {\n", + " tool_calls: [\n", + " {\n", + " id: \"qQz1AWzNd\",\n", + " function: {\n", + " name: \"calculator\",\n", + " arguments: '{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}'\n", + " }\n", + " }\n", + " ]\n", + " },\n", + " response_metadata: {}\n", + " },\n", + " lc_namespace: [ \"langchain_core\", \"messages\" ],\n", + " content: \"\",\n", + " name: undefined,\n", + " additional_kwargs: {\n", + " tool_calls: [\n", + " {\n", + " id: \"qQz1AWzNd\",\n", + " function: {\n", + " name: \"calculator\",\n", + " arguments: '{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}'\n", + " }\n", + " }\n", + " ]\n", + " },\n", + " response_metadata: {\n", + " tokenUsage: { completionTokens: 34, promptTokens: 205, totalTokens: 239 },\n", + " finish_reason: \"tool_calls\"\n", + " },\n", + " tool_calls: [\n", + " {\n", + " name: \"calculator\",\n", + " args: { operation: \"add\", number1: 2, number2: 2 },\n", + " id: \"qQz1AWzNd\"\n", + " }\n", + " ],\n", + " invalid_tool_calls: []\n", + " },\n", + " parsed: { operation: \"add\", number1: 2, number2: 2 }\n", + "}\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "AbortError: The user aborted a request.\n", + " at abort (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/node-fetch/2.7.0/lib/index.js:1458:16)\n", + " at AbortSignal.abortAndFinalize (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/node-fetch/2.7.0/lib/index.js:1473:4)\n", + " at innerInvokeEventListeners (ext:deno_web/02_event.js:754:7)\n", + " at invokeEventListeners (ext:deno_web/02_event.js:801:5)\n", + " at dispatch (ext:deno_web/02_event.js:658:9)\n", + " at AbortSignal.dispatchEvent (ext:deno_web/02_event.js:1043:12)\n", + " at AbortSignal.[[[signalAbort]]] (ext:deno_web/03_abort_signal.js:146:11)\n", + " at ext:deno_web/03_abort_signal.js:116:28\n", + " at Object.action (ext:deno_web/02_timers.js:154:11)\n", + " at handleTimerMacrotask (ext:deno_web/02_timers.js:68:10) {\n", + " type: \"aborted\",\n", + " message: \"The user aborted a request.\"\n", + "}\n", + "Error in handler LangChainTracer, handleChainEnd: AbortError: The user aborted a request.\n" + ] + } + ], + "source": [ + "const includeRawModel3 = llm3.withStructuredOutput(calculatorSchema3, {\n", + " name: \"calculator\",\n", + " includeRaw: true,\n", + "});\n", + "const includeRawChain3 = prompt3.pipe(includeRawModel3);\n", + "\n", + "const includeRawResponse3 = await includeRawChain3.invoke({\n", + " input: \"What is 2 + 2?\",\n", + "});\n", + "console.dir(includeRawResponse3, { depth: null });" + ] + }, + { + "cell_type": "markdown", + "id": "6b7b374f", + "metadata": {}, + "source": [ + "### Using JSON schema:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "9f1dc9bd", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{ operation: \"add\", number1: 2, number2: 2 }\n" + ] + }, + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Error in handler 
LangChainTracer, handleChainEnd: AbortError: The user aborted a request.\n" + ] + } + ], + "source": [ + "import { ChatMistralAI } from \"@langchain/mistralai\";\n", + "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n", + "\n", + "const calculatorJsonSchema4 = {\n", + " type: \"object\",\n", + " properties: {\n", + " operation: {\n", + " type: \"string\",\n", + " enum: [\"add\", \"subtract\", \"multiply\", \"divide\"],\n", + " description: \"The type of operation to execute.\",\n", + " },\n", + " number1: { type: \"number\", description: \"The first number to operate on.\" },\n", + " number2: {\n", + " type: \"number\",\n", + " description: \"The second number to operate on.\",\n", + " },\n", + " },\n", + " required: [\"operation\", \"number1\", \"number2\"],\n", + " description: \"A simple calculator tool\",\n", + "};\n", + "\n", + "const llm4 = new ChatMistralAI({\n", + " model: \"mistral-large-latest\",\n", + "});\n", + "\n", + "// Pass the schema and tool name to the withStructuredOutput method\n", + "const modelWithTool4 = llm4.withStructuredOutput(calculatorJsonSchema4);\n", + "\n", + "const prompt4 = ChatPromptTemplate.fromMessages([\n", + " [\n", + " \"system\",\n", + " \"You are a helpful assistant who always needs to use a calculator.\",\n", + " ],\n", + " [\"human\", \"{input}\"],\n", + "]);\n", + "\n", + "// Chain your prompt and model together\n", + "const chain4 = prompt4.pipe(modelWithTool4);\n", + "\n", + "const response4 = await chain4.invoke({\n", + " input: \"What is 2 + 2?\",\n", + "});\n", + "console.log(response4);\n" + ] + }, + { + "cell_type": "markdown", + "id": "3c8bc1d4", + "metadata": {}, + "source": [ + "### Tool calling agent\n", + "\n", + "The larger Mistral models not only support tool calling, but can also be used in the Tool Calling agent.\n", + "Here's an example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "id": "76bd0061", + "metadata": {}, + "outputs": [ + { + "name": "stderr", + "output_type": "stream", + "text": [ + "Request failed: HTTP error! status: 400 Response: \n", + "{\"object\":\"error\",\"message\":\"Tool call id has to be defined in serving mode.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}\n" + ] + }, + { + "ename": "MistralAPIError", + "evalue": "HTTP error! status: 400 Response: \n{\"object\":\"error\",\"message\":\"Tool call id has to be defined in serving mode.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}", + "output_type": "error", + "traceback": [ + "Stack trace:", + "MistralAPIError: HTTP error! 
status: 400 Response: ",
+      "{\"object\":\"error\",\"message\":\"Tool call id has to be defined in serving mode.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}",
+      "    at MistralClient._request (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@mistralai/mistralai/0.1.3/src/client.js:132:17)",
+      "    at Object.runMicrotasks (ext:core/01_core.js:484:26)",
+      "    at processTicksAndRejections (ext:deno_node/_next_tick.ts:53:10)",
+      "    at runNextTicks (ext:deno_node/_next_tick.ts:71:3)",
+      "    at eventLoopTick (ext:core/01_core.js:70:21)",
+      "    at async MistralClient.chatStream (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@mistralai/mistralai/0.1.3/src/client.js:306:22)",
+      "    at async ChatMistralAI._streamResponseChunks (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/mistralai/0.0.19/dist/chat_models.js:403:26)",
+      "    at async ChatMistralAI._streamIterator (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.1.63/dist/language_models/chat_models.js:78:34)",
+      "    at async ChatMistralAI.transform (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.1.63/dist/runnables/base.js:377:9)",
+      "    at async RunnableBinding.transform (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.1.63/dist/runnables/base.js:733:9)"
+     ]
+    }
+   ],
+   "source": [
+    "import { z } from \"zod\";\n",
+    "import { ChatMistralAI } from \"@langchain/mistralai\";\n",
+    "import { tool } from \"@langchain/core/tools\";\n",
+    "import { AgentExecutor, createToolCallingAgent } from \"langchain/agents\";\n",
+    "\n",
+    "import { ChatPromptTemplate } from \"@langchain/core/prompts\";\n",
+    "\n",
+    "const llm5 = new ChatMistralAI({\n",
+    "  temperature: 0,\n",
+    "  model: \"mistral-large-latest\",\n",
+    "});\n",
+    "\n",
+    "// Prompt template must have \"input\" and \"agent_scratchpad\" input variables\n",
+    "const prompt5 = ChatPromptTemplate.fromMessages([\n",
+    "  [\"system\", \"You are a helpful assistant\"],\n",
+    "  [\"placeholder\", \"{chat_history}\"],\n",
+    "  [\"human\", \"{input}\"],\n",
+    "  [\"placeholder\", \"{agent_scratchpad}\"],\n",
+    "]);\n",
+    "\n",
+    "// Mocked tool\n",
+    "const currentWeatherTool5 = tool(async () => \"28 °C\", {\n",
+    "  name: \"get_current_weather\",\n",
+    "  description: \"Get the current weather in a given location\",\n",
+    "  schema: z.object({\n",
+    "    location: z.string().describe(\"The city and state, e.g. 
San Francisco, CA\"),\n", + " }),\n", + "});\n", + "\n", + "const agent = createToolCallingAgent({\n", + " llm: llm5,\n", + " tools: [currentWeatherTool5],\n", + " prompt: prompt5,\n", + "});\n", + "\n", + "const agentExecutor = new AgentExecutor({\n", + " agent,\n", + " tools: [currentWeatherTool5],\n", + "});\n", + "\n", + "const input = \"What's the weather like in Paris?\";\n", + "const { output } = await agentExecutor.invoke({ input });\n", + "\n", + "console.log(output);\n" + ] + }, + { + "cell_type": "markdown", + "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3", + "metadata": {}, + "source": [ + "## API reference\n", + "\n", + "For detailed documentation of all ChatMistralAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_mistralai.ChatMistralAI.html" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "TypeScript", + "language": "typescript", + "name": "tslab" + }, + "language_info": { + "codemirror_mode": { + "mode": "typescript", + "name": "javascript", + "typescript": true + }, + "file_extension": ".ts", + "mimetype": "text/typescript", + "name": "typescript", + "version": "3.7.2" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} diff --git a/docs/core_docs/docs/integrations/chat/mistral.mdx b/docs/core_docs/docs/integrations/chat/mistral.mdx deleted file mode 100644 index 2566ee06c223..000000000000 --- a/docs/core_docs/docs/integrations/chat/mistral.mdx +++ /dev/null @@ -1,105 +0,0 @@ ---- -sidebar_label: Mistral AI ---- - -import CodeBlock from "@theme/CodeBlock"; - -# ChatMistralAI - -[Mistral AI](https://mistral.ai/) is a research organization and hosting platform for LLMs. -The LangChain implementation of Mistral's models uses their hosted generation API, making it easier to access their models without needing to run them locally. - -:::tip -Want to run Mistral's models locally? Check out our [Ollama integration](/docs/integrations/chat/ollama). -::: - -## Models - -Mistral's API offers access to two of their open source, and proprietary models. -See [this page](https://docs.mistral.ai/getting-started/models/) for an up to date list. - -## Setup - -In order to use the Mistral API you'll need an API key. You can sign up for a Mistral account and create an API key [here](https://console.mistral.ai/). - -You'll first need to install the [`@langchain/mistralai`](https://www.npmjs.com/package/@langchain/mistralai) package: - -import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx"; - - - -```bash npm2yarn -npm install @langchain/mistralai -``` - -import UnifiedModelParamsTooltip from "@mdx_components/unified_model_params_tooltip.mdx"; - - - -## Usage - -When sending chat messages to mistral, there are a few requirements to follow: - -- The first message can _*not*_ be an assistant (ai) message. -- Messages _*must*_ alternate between user and assistant (ai) messages. -- Messages can _*not*_ end with an assistant (ai) or system message. - -import ChatMistralAIExample from "@examples/models/chat/chat_mistralai.ts"; - -{ChatMistralAIExample} - -:::info -You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/d69d0db9-f29e-45aa-a40d-b53f6273d7d0/r) -::: - -### Streaming - -Mistral's API also supports streaming token responses. The example below demonstrates how to use this feature. 
- -import ChatStreamMistralAIExample from "@examples/models/chat/chat_stream_mistralai.ts"; - -{ChatStreamMistralAIExample} - -:::info -You can see a LangSmith trace of this example [here](https://smith.langchain.com/public/061d90f2-ac7e-44c5-8790-8b23299f9217/r) -::: - -### Tool calling - -Mistral's API now supports tool calling and JSON mode! -The examples below demonstrates how to use them, along with how to use the `withStructuredOutput` method to easily compose structured output LLM calls. - -import ToolCalling from "@examples/models/chat/chat_mistralai_tools.ts"; - -{ToolCalling} - -### `.withStructuredOutput({ ... })` - -:::info -The `.withStructuredOutput` method is in beta. It is actively being worked on, so the API may change. -::: - -Using the `.withStructuredOutput` method, you can easily make the LLM return structured output, given only a Zod or JSON schema: - -:::note -The Mistral tool calling API requires descriptions for each tool field. If descriptions are not supplied, the API will error. -::: - -import WSAExample from "@examples/models/chat/chat_mistralai_wsa.ts"; - -{WSAExample} - -### Using JSON schema: - -import WSAJSONExample from "@examples/models/chat/chat_mistralai_wsa_json.ts"; - -{WSAJSONExample} - -### Tool calling agent - -The larger Mistral models not only support tool calling, but can also be used in the Tool Calling agent. -Here's an example: - -import AgentsExample from "@examples/models/chat/chat_mistralai_agents.ts"; - -{AgentsExample} From a708561bb22a52be5886cf52a97ed968090bc89e Mon Sep 17 00:00:00 2001 From: bracesproul Date: Wed, 31 Jul 2024 15:39:03 -0700 Subject: [PATCH 2/4] :cr --- docs/core_docs/.gitignore | 4 +- .../docs/integrations/chat/mistral.ipynb | 254 ++++++------------ libs/langchain-mistralai/src/chat_models.ts | 8 +- 3 files changed, 91 insertions(+), 175 deletions(-) diff --git a/docs/core_docs/.gitignore b/docs/core_docs/.gitignore index aa23525d4033..d6d1df5db70f 100644 --- a/docs/core_docs/.gitignore +++ b/docs/core_docs/.gitignore @@ -209,4 +209,6 @@ docs/how_to/assign.mdx docs/how_to/agent_executor.md docs/how_to/agent_executor.mdx docs/integrations/llms/mistral.md -docs/integrations/llms/mistral.mdx \ No newline at end of file +docs/integrations/llms/mistral.mdx +docs/integrations/chat/mistral.md +docs/integrations/chat/mistral.mdx \ No newline at end of file diff --git a/docs/core_docs/docs/integrations/chat/mistral.ipynb b/docs/core_docs/docs/integrations/chat/mistral.ipynb index 0718227e5063..e0da1701d8f1 100644 --- a/docs/core_docs/docs/integrations/chat/mistral.ipynb +++ b/docs/core_docs/docs/integrations/chat/mistral.ipynb @@ -39,37 +39,17 @@ "\n", "Head [here](https://console.mistral.ai/) to sign up to Mistral AI and generate an API key. 
Once you've done this set the `MISTRAL_API_KEY` environment variable:\n", "\n", - "```{=mdx}\n", - "\n", "```bash\n", "export MISTRAL_API_KEY=\"your-api-key\"\n", "```\n", "\n", - "```" - ] - }, - { - "cell_type": "markdown", - "id": "72ee0c4b-9764-423a-9dbf-95129e185210", - "metadata": {}, - "source": [ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n", "\n", - "```{=mdx}\n", - "\n", "```bash\n", "# export LANGCHAIN_TRACING_V2=\"true\"\n", "# export LANGCHAIN_API_KEY=\"your-api-key\"\n", "```\n", "\n", - "```" - ] - }, - { - "cell_type": "markdown", - "id": "0730d6a1-c893-4840-9817-5e5251676d5d", - "metadata": {}, - "source": [ "### Installation\n", "\n", "The LangChain ChatMistralAI integration lives in the `@langchain/mistralai` package:\n", @@ -98,15 +78,7 @@ "execution_count": 1, "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae", "metadata": {}, - "outputs": [ - { - "name": "stderr", - "output_type": "stream", - "text": [ - "3:7 - Exported variable 'llm' has or is using name 'MistralAICallOptions' from external module \"/Users/bracesproul/code/lang-chain-ai/langchainjs/libs/langchain-mistralai/dist/chat_models\" but cannot be named.\n" - ] - } - ], + "outputs": [], "source": [ "import { ChatMistralAI } from \"@langchain/mistralai\" \n", "\n", @@ -135,40 +107,36 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 2, "id": "62e0dbc3", "metadata": { "tags": [] }, "outputs": [ { - "data": { - "text/plain": [ - "AIMessage {\n", - " lc_serializable: \u001b[33mtrue\u001b[39m,\n", - " lc_kwargs: {\n", - " content: \u001b[32m`Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love pro`\u001b[39m... 126 more characters,\n", - " tool_calls: [],\n", - " invalid_tool_calls: [],\n", - " additional_kwargs: { tool_calls: [] },\n", - " response_metadata: {}\n", - " },\n", - " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n", - " content: \u001b[32m`Sure, I'd be happy to help you translate that sentence into French! The English sentence \"I love pro`\u001b[39m... 126 more characters,\n", - " name: \u001b[90mundefined\u001b[39m,\n", - " additional_kwargs: { tool_calls: [] },\n", - " response_metadata: {\n", - " tokenUsage: { completionTokens: \u001b[33m52\u001b[39m, promptTokens: \u001b[33m32\u001b[39m, totalTokens: \u001b[33m84\u001b[39m },\n", - " finish_reason: \u001b[32m\"stop\"\u001b[39m\n", - " },\n", - " tool_calls: [],\n", - " invalid_tool_calls: []\n", - "}" - ] - }, - "execution_count": 2, - "metadata": {}, - "output_type": "execute_result" + "name": "stdout", + "output_type": "stream", + "text": [ + "AIMessage {\n", + " \"content\": \"Sure, I'd be happy to help you translate that sentence into French! The English sentence \\\"I love programming\\\" translates to \\\"J'aime programmer\\\" in French. 
Let me know if you have any other questions or need further assistance!\",\n", + " \"additional_kwargs\": {},\n", + " \"response_metadata\": {\n", + " \"tokenUsage\": {\n", + " \"completionTokens\": 52,\n", + " \"promptTokens\": 32,\n", + " \"totalTokens\": 84\n", + " },\n", + " \"finish_reason\": \"stop\"\n", + " },\n", + " \"tool_calls\": [],\n", + " \"invalid_tool_calls\": [],\n", + " \"usage_metadata\": {\n", + " \"input_tokens\": 32,\n", + " \"output_tokens\": 52,\n", + " \"total_tokens\": 84\n", + " }\n", + "}\n" + ] } ], "source": [ @@ -184,7 +152,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 3, "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705", "metadata": {}, "outputs": [ @@ -212,44 +180,33 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b", "metadata": {}, "outputs": [ { - "data": { - "text/plain": [ - "AIMessage {\n", - " lc_serializable: \u001b[33mtrue\u001b[39m,\n", - " lc_kwargs: {\n", - " content: \u001b[32m\"Ich liebe Programmierung. (German translation)\"\u001b[39m,\n", - " tool_calls: [],\n", - " invalid_tool_calls: [],\n", - " additional_kwargs: { tool_calls: [] },\n", - " response_metadata: {}\n", - " },\n", - " lc_namespace: [ \u001b[32m\"langchain_core\"\u001b[39m, \u001b[32m\"messages\"\u001b[39m ],\n", - " content: \u001b[32m\"Ich liebe Programmierung. (German translation)\"\u001b[39m,\n", - " name: \u001b[90mundefined\u001b[39m,\n", - " additional_kwargs: { tool_calls: [] },\n", - " response_metadata: {\n", - " tokenUsage: { completionTokens: \u001b[33m12\u001b[39m, promptTokens: \u001b[33m26\u001b[39m, totalTokens: \u001b[33m38\u001b[39m },\n", - " finish_reason: \u001b[32m\"stop\"\u001b[39m\n", - " },\n", - " tool_calls: [],\n", - " invalid_tool_calls: []\n", - "}" - ] - }, - "execution_count": 4, - "metadata": {}, - "output_type": "execute_result" - }, - { - "name": "stderr", + "name": "stdout", "output_type": "stream", "text": [ - "Error in handler LangChainTracer, handleChainEnd: AbortError: The user aborted a request.\n" + "AIMessage {\n", + " \"content\": \"Ich liebe Programmierung. 
(German translation)\",\n", + " \"additional_kwargs\": {},\n", + " \"response_metadata\": {\n", + " \"tokenUsage\": {\n", + " \"completionTokens\": 12,\n", + " \"promptTokens\": 26,\n", + " \"totalTokens\": 38\n", + " },\n", + " \"finish_reason\": \"stop\"\n", + " },\n", + " \"tool_calls\": [],\n", + " \"invalid_tool_calls\": [],\n", + " \"usage_metadata\": {\n", + " \"input_tokens\": 26,\n", + " \"output_tokens\": 12,\n", + " \"total_tokens\": 38\n", + " }\n", + "}\n" ] } ], @@ -289,7 +246,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 5, "id": "98d9034c", "metadata": {}, "outputs": [ @@ -299,9 +256,10 @@ "text": [ "[\n", " {\n", - " name: \"calculator\",\n", - " args: { operation: \"add\", number1: 2, number2: 2 },\n", - " id: \"Qcw6so4hJ\"\n", + " name: 'calculator',\n", + " args: { operation: 'add', number1: 2, number2: 2 },\n", + " type: 'tool_call',\n", + " id: '2HFfjvCvo'\n", " }\n", "]\n" ] @@ -375,7 +333,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 6, "id": "a8638d82", "metadata": {}, "outputs": [ @@ -383,7 +341,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "{ operation: \"add\", number1: 2, number2: 2 }\n" + "{ operation: 'add', number1: 2, number2: 2 }\n" ] } ], @@ -436,7 +394,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 7, "id": "9786b41a", "metadata": {}, "outputs": [ @@ -448,79 +406,65 @@ " raw: AIMessage {\n", " lc_serializable: true,\n", " lc_kwargs: {\n", - " content: \"\",\n", + " content: '',\n", " tool_calls: [\n", " {\n", - " name: \"calculator\",\n", - " args: { operation: \"add\", number1: 2, number2: 2 },\n", - " id: \"qQz1AWzNd\"\n", + " name: 'calculator',\n", + " args: { operation: 'add', number1: 2, number2: 2 },\n", + " type: 'tool_call',\n", + " id: 'qVxKofNLR'\n", " }\n", " ],\n", " invalid_tool_calls: [],\n", " additional_kwargs: {\n", " tool_calls: [\n", " {\n", - " id: \"qQz1AWzNd\",\n", + " id: 'qVxKofNLR',\n", " function: {\n", - " name: \"calculator\",\n", + " name: 'calculator',\n", " arguments: '{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}'\n", - " }\n", + " },\n", + " type: 'function'\n", " }\n", " ]\n", " },\n", + " usage_metadata: { input_tokens: 205, output_tokens: 34, total_tokens: 239 },\n", " response_metadata: {}\n", " },\n", - " lc_namespace: [ \"langchain_core\", \"messages\" ],\n", - " content: \"\",\n", + " lc_namespace: [ 'langchain_core', 'messages' ],\n", + " content: '',\n", " name: undefined,\n", " additional_kwargs: {\n", " tool_calls: [\n", " {\n", - " id: \"qQz1AWzNd\",\n", + " id: 'qVxKofNLR',\n", " function: {\n", - " name: \"calculator\",\n", + " name: 'calculator',\n", " arguments: '{\"operation\": \"add\", \"number1\": 2, \"number2\": 2}'\n", - " }\n", + " },\n", + " type: 'function'\n", " }\n", " ]\n", " },\n", " response_metadata: {\n", " tokenUsage: { completionTokens: 34, promptTokens: 205, totalTokens: 239 },\n", - " finish_reason: \"tool_calls\"\n", + " finish_reason: 'tool_calls'\n", " },\n", + " id: undefined,\n", " tool_calls: [\n", " {\n", - " name: \"calculator\",\n", - " args: { operation: \"add\", number1: 2, number2: 2 },\n", - " id: \"qQz1AWzNd\"\n", + " name: 'calculator',\n", + " args: { operation: 'add', number1: 2, number2: 2 },\n", + " type: 'tool_call',\n", + " id: 'qVxKofNLR'\n", " }\n", " ],\n", - " invalid_tool_calls: []\n", + " invalid_tool_calls: [],\n", + " usage_metadata: { input_tokens: 205, output_tokens: 34, total_tokens: 239 }\n", " },\n", - " parsed: { operation: 
\"add\", number1: 2, number2: 2 }\n", + " parsed: { operation: 'add', number1: 2, number2: 2 }\n", "}\n" ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "AbortError: The user aborted a request.\n", - " at abort (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/node-fetch/2.7.0/lib/index.js:1458:16)\n", - " at AbortSignal.abortAndFinalize (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/node-fetch/2.7.0/lib/index.js:1473:4)\n", - " at innerInvokeEventListeners (ext:deno_web/02_event.js:754:7)\n", - " at invokeEventListeners (ext:deno_web/02_event.js:801:5)\n", - " at dispatch (ext:deno_web/02_event.js:658:9)\n", - " at AbortSignal.dispatchEvent (ext:deno_web/02_event.js:1043:12)\n", - " at AbortSignal.[[[signalAbort]]] (ext:deno_web/03_abort_signal.js:146:11)\n", - " at ext:deno_web/03_abort_signal.js:116:28\n", - " at Object.action (ext:deno_web/02_timers.js:154:11)\n", - " at handleTimerMacrotask (ext:deno_web/02_timers.js:68:10) {\n", - " type: \"aborted\",\n", - " message: \"The user aborted a request.\"\n", - "}\n", - "Error in handler LangChainTracer, handleChainEnd: AbortError: The user aborted a request.\n" - ] } ], "source": [ @@ -546,7 +490,7 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 8, "id": "9f1dc9bd", "metadata": {}, "outputs": [ @@ -554,14 +498,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "{ operation: \"add\", number1: 2, number2: 2 }\n" - ] - }, - { - "name": "stderr", - "output_type": "stream", - "text": [ - "Error in handler LangChainTracer, handleChainEnd: AbortError: The user aborted a request.\n" + "{ operation: 'add', number1: 2, number2: 2 }\n" ] } ], @@ -624,36 +561,15 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 9, "id": "76bd0061", "metadata": {}, "outputs": [ { - "name": "stderr", + "name": "stdout", "output_type": "stream", "text": [ - "Request failed: HTTP error! status: 400 Response: \n", - "{\"object\":\"error\",\"message\":\"Tool call id has to be defined in serving mode.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}\n" - ] - }, - { - "ename": "MistralAPIError", - "evalue": "HTTP error! status: 400 Response: \n{\"object\":\"error\",\"message\":\"Tool call id has to be defined in serving mode.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}", - "output_type": "error", - "traceback": [ - "Stack trace:", - "MistralAPIError: HTTP error! 
status: 400 Response: ", - "{\"object\":\"error\",\"message\":\"Tool call id has to be defined in serving mode.\",\"type\":\"invalid_request_error\",\"param\":null,\"code\":null}", - " at MistralClient._request (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@mistralai/mistralai/0.1.3/src/client.js:132:17)", - " at Object.runMicrotasks (ext:core/01_core.js:484:26)", - " at processTicksAndRejections (ext:deno_node/_next_tick.ts:53:10)", - " at runNextTicks (ext:deno_node/_next_tick.ts:71:3)", - " at eventLoopTick (ext:core/01_core.js:70:21)", - " at async MistralClient.chatStream (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@mistralai/mistralai/0.1.3/src/client.js:306:22)", - " at async ChatMistralAI._streamResponseChunks (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/mistralai/0.0.19/dist/chat_models.js:403:26)", - " at async ChatMistralAI._streamIterator (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.1.63/dist/language_models/chat_models.js:78:34)", - " at async ChatMistralAI.transform (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.1.63/dist/runnables/base.js:377:9)", - " at async RunnableBinding.transform (file:///Users/bracesproul/Library/Caches/deno/npm/registry.npmjs.org/@langchain/core/0.1.63/dist/runnables/base.js:733:9)" + "The weather in Paris is 28 °C.\n" ] } ], diff --git a/libs/langchain-mistralai/src/chat_models.ts b/libs/langchain-mistralai/src/chat_models.ts index 4c14e304a9dd..fe58f740676c 100644 --- a/libs/langchain-mistralai/src/chat_models.ts +++ b/libs/langchain-mistralai/src/chat_models.ts @@ -77,8 +77,8 @@ interface TokenUsage { export type MistralAIToolChoice = "auto" | "any" | "none"; type MistralAIToolInput = { type: string; function: MistralAIFunction }; -interface MistralAICallOptions - extends Omit { + +export interface ChatMistralAICallOptions extends Omit { response_format?: { type: "text" | "json_object"; }; @@ -91,8 +91,6 @@ interface MistralAICallOptions streamUsage?: boolean; } -export interface ChatMistralAICallOptions extends MistralAICallOptions {} - /** * Input to chat model class. */ @@ -406,7 +404,7 @@ function _convertStructuredToolToMistralTool( * Integration with a chat model. 
 */
 export class ChatMistralAI<
-  CallOptions extends MistralAICallOptions = MistralAICallOptions
+  CallOptions extends ChatMistralAICallOptions = ChatMistralAICallOptions
 > extends BaseChatModel
   implements ChatMistralAIInput

From ddf656bb22dd854fb3489eecf10744f61cd9afbe Mon Sep 17 00:00:00 2001
From: bracesproul
Date: Wed, 31 Jul 2024 15:41:55 -0700
Subject: [PATCH 3/4] cr

---
 docs/core_docs/.gitignore                     | 40 +++++++++----------
 .../docs/integrations/chat/mistral.ipynb      | 10 +++--
 2 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/docs/core_docs/.gitignore b/docs/core_docs/.gitignore
index d6d1df5db70f..df8abd01615f 100644
--- a/docs/core_docs/.gitignore
+++ b/docs/core_docs/.gitignore
@@ -34,26 +34,6 @@ yarn-error.log*
 /.quarto/
 
 # AUTO_GENERATED_DOCS
-docs/tutorials/rag.md
-docs/tutorials/rag.mdx
-docs/tutorials/query_analysis.md
-docs/tutorials/query_analysis.mdx
-docs/tutorials/qa_chat_history.md
-docs/tutorials/qa_chat_history.mdx
-docs/tutorials/pdf_qa.md
-docs/tutorials/pdf_qa.mdx
-docs/tutorials/local_rag.md
-docs/tutorials/local_rag.mdx
-docs/tutorials/llm_chain.md
-docs/tutorials/llm_chain.mdx
-docs/tutorials/graph.md
-docs/tutorials/graph.mdx
-docs/tutorials/extraction.md
-docs/tutorials/extraction.mdx
-docs/tutorials/classification.md
-docs/tutorials/classification.mdx
-docs/tutorials/chatbot.md
-docs/tutorials/chatbot.mdx
 docs/how_to/trim_messages.md
 docs/how_to/trim_messages.mdx
 docs/how_to/tools_prompting.md
 docs/how_to/tools_prompting.mdx
@@ -208,6 +188,26 @@ docs/how_to/assign.md
 docs/how_to/assign.mdx
 docs/how_to/agent_executor.md
 docs/how_to/agent_executor.mdx
+docs/tutorials/rag.md
+docs/tutorials/rag.mdx
+docs/tutorials/query_analysis.md
+docs/tutorials/query_analysis.mdx
+docs/tutorials/qa_chat_history.md
+docs/tutorials/qa_chat_history.mdx
+docs/tutorials/pdf_qa.md
+docs/tutorials/pdf_qa.mdx
+docs/tutorials/local_rag.md
+docs/tutorials/local_rag.mdx
+docs/tutorials/llm_chain.md
+docs/tutorials/llm_chain.mdx
+docs/tutorials/graph.md
+docs/tutorials/graph.mdx
+docs/tutorials/extraction.md
+docs/tutorials/extraction.mdx
+docs/tutorials/classification.md
+docs/tutorials/classification.mdx
+docs/tutorials/chatbot.md
+docs/tutorials/chatbot.mdx
 docs/integrations/llms/mistral.md
 docs/integrations/llms/mistral.mdx
 docs/integrations/chat/mistral.md
 docs/integrations/chat/mistral.mdx
diff --git a/docs/core_docs/docs/integrations/chat/mistral.ipynb b/docs/core_docs/docs/integrations/chat/mistral.ipynb
index e0da1701d8f1..f308e0c1d768 100644
--- a/docs/core_docs/docs/integrations/chat/mistral.ipynb
+++ b/docs/core_docs/docs/integrations/chat/mistral.ipynb
@@ -55,10 +55,14 @@
     "The LangChain ChatMistralAI integration lives in the `@langchain/mistralai` package:\n",
     "\n",
     "```{=mdx}\n",
+    "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
+    "import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
     "\n",
-    "```bash npm2yarn\n",
-    "npm i @langchain/mistralai\n",
-    "```\n",
+    "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n",
+    "\n",
+    "<Npm2Yarn>\n",
+    "@langchain/mistralai\n",
+    "</Npm2Yarn>\n",
     "\n",
     "```"
    ]

From 2953dbe72d1c754c0900f8b561c84c1ddff96173 Mon Sep 17 00:00:00 2001
From: bracesproul
Date: Wed, 31 Jul 2024 15:45:15 -0700
Subject: [PATCH 4/4] chore: lint files

---
 libs/langchain-mistralai/src/chat_models.ts | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/libs/langchain-mistralai/src/chat_models.ts b/libs/langchain-mistralai/src/chat_models.ts
index fe58f740676c..2f8265174beb 100644
--- a/libs/langchain-mistralai/src/chat_models.ts
+++ b/libs/langchain-mistralai/src/chat_models.ts
@@ -77,8 +77,9 @@ interface TokenUsage {
 export type MistralAIToolChoice = "auto" | 
"any" | "none"; type MistralAIToolInput = { type: string; function: MistralAIFunction }; -export interface ChatMistralAICallOptions extends Omit { +export interface ChatMistralAICallOptions + extends Omit { response_format?: { type: "text" | "json_object"; };