diff --git a/docs/core_docs/docs/integrations/llms/openai.ipynb b/docs/core_docs/docs/integrations/llms/openai.ipynb
new file mode 100644
index 000000000000..bd08a1a333e7
--- /dev/null
+++ b/docs/core_docs/docs/integrations/llms/openai.ipynb
@@ -0,0 +1,260 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "67db2992",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: OpenAI\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9597802c",
+ "metadata": {},
+ "source": [
+ "# OpenAI\n",
+ "\n",
+ "```{=mdx}\n",
+ "\n",
+ ":::caution\n",
+ "You are currently on a page documenting the use of OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular OpenAI models are [chat completion models](/docs/concepts/#chat-models).\n",
+ "\n",
+ "Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/openai/).\n",
+ ":::\n",
+ "\n",
+ "```\n",
+ "\n",
+ "This will help you get started with OpenAI completion models (LLMs) using LangChain. For detailed documentation on `OpenAI` features and configuration options, please refer to the [API reference](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/llms/openai) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [OpenAI](https://api.js.langchain.com/classes/langchain_openai.OpenAI.html) | [@langchain/openai](https://api.js.langchain.com/modules/langchain_openai.html) | ❌ | ✅ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/openai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/openai?style=flat-square&label=%20&) |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the `@langchain/openai` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "Head to [platform.openai.com](https://platform.openai.com/) to sign up for OpenAI and generate an API key. Once you've done this, set the `OPENAI_API_KEY` environment variable:\n",
+ "\n",
+ "```bash\n",
+ "export OPENAI_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
+ "\n",
+ "```bash\n",
+ "# export LANGCHAIN_TRACING_V2=\"true\"\n",
+ "# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "### Installation\n",
+ "\n",
+ "The LangChain OpenAI integration lives in the `@langchain/openai` package:\n",
+ "\n",
+ "```{=mdx}\n",
+ "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
+ "import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
+ "\n",
+ "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n",
+ "\n",
+ "<Npm2Yarn>\n",
+ "  @langchain/openai\n",
+ "</Npm2Yarn>\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0a760037",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate completions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "a0562a13",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { OpenAI } from \"@langchain/openai\"\n",
+ "\n",
+ "const llm = new OpenAI({\n",
+ " model: \"gpt-3.5-turbo-instruct\",\n",
+ " temperature: 0,\n",
+ " maxTokens: undefined,\n",
+ " timeout: undefined,\n",
+ " maxRetries: 2,\n",
+ " apiKey: process.env.OPENAI_API_KEY,\n",
+ " // other params...\n",
+ "})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ee90032",
+ "metadata": {},
+ "source": [
+ "## Invocation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "035dea0f",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "develops and promotes friendly AI for the benefit of humanity. It was founded in 2015 by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, John Schulman, and Chris Olah. The company's mission is to create and promote artificial general intelligence (AGI) that is safe and beneficial to humanity.\n",
+ "\n",
+ "OpenAI conducts research in various areas of AI, including deep learning, reinforcement learning, robotics, and natural language processing. The company also develops and releases open-source tools and platforms for AI research, such as the GPT-3 language model and the Gym toolkit for reinforcement learning.\n",
+ "\n",
+ "One of the main goals of OpenAI is to ensure that the development of AI is aligned with human values and does not pose a threat to humanity. To this end, the company has established a set of principles for safe and ethical AI development, and it actively collaborates with other organizations and researchers in the field.\n",
+ "\n",
+ "OpenAI has received funding from various sources, including tech giants like Microsoft and Amazon, as well as individual investors. It has also partnered with companies and organizations such as Google, IBM, and the United Nations to advance its research and promote responsible AI development.\n",
+ "\n",
+ "In addition to its research and development\n"
+ ]
+ }
+ ],
+ "source": [
+ "const inputText = \"OpenAI is an AI company that \"\n",
+ "\n",
+ "const completion = await llm.invoke(inputText)\n",
+ "completion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "add38532",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our completion model with a prompt template like so:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "078e9db2",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Ich liebe Programmieren.\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { PromptTemplate } from \"@langchain/core/prompts\"\n",
+ "\n",
+ "const prompt = new PromptTemplate({\n",
+ " template: \"How to say {input} in {output_language}:\\n\",\n",
+ " inputVariables: [\"input\", \"output_language\"],\n",
+ "})\n",
+ "\n",
+ "const chain = prompt.pipe(llm);\n",
+ "await chain.invoke(\n",
+ " {\n",
+ " output_language: \"German\",\n",
+ " input: \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e99eef30",
+ "metadata": {},
+ "source": [
+ "If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when\n",
+ "initializing the model.\n",
+ "\n",
+ "## Custom URLs\n",
+ "\n",
+ "You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d958ab00",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "const llmCustomURL = new OpenAI({\n",
+ " temperature: 0.9,\n",
+ " configuration: {\n",
+ " baseURL: \"https://your_custom_url.com\",\n",
+ " },\n",
+ "});"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "81a5e2ea",
+ "metadata": {},
+ "source": [
+ "You can also pass other `ClientOptions` parameters accepted by the official SDK.\n",
+ "\n",
+ "If you are hosting on Azure OpenAI, see the [dedicated page instead](/docs/integrations/llms/azure).\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e9bdfcef",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all OpenAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_openai.OpenAI.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "TypeScript",
+ "language": "typescript",
+ "name": "tslab"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "mode": "typescript",
+ "name": "javascript",
+ "typescript": true
+ },
+ "file_extension": ".ts",
+ "mimetype": "text/typescript",
+ "name": "typescript",
+ "version": "3.7.2"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "e971737741ff4ec9aff7dc6155a1060a59a8a6d52c757dbbe66bf8ee389494b1"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/core_docs/docs/integrations/llms/openai.mdx b/docs/core_docs/docs/integrations/llms/openai.mdx
deleted file mode 100644
index e8ba5cd1e8a9..000000000000
--- a/docs/core_docs/docs/integrations/llms/openai.mdx
+++ /dev/null
@@ -1,55 +0,0 @@
-# OpenAI
-
-:::caution
-You are currently on a page documenting the use of OpenAI [text completion models](/docs/concepts/#llms). The latest and most popular OpenAI models are [chat completion models](/docs/concepts/#chat-models).
-
-Unless you are specifically using `gpt-3.5-turbo-instruct`, you are probably looking for [this page instead](/docs/integrations/chat/openai/).
-:::
-
-Here's how you can initialize an `OpenAI` LLM instance:
-
-import IntegrationInstallTooltip from "@mdx_components/integration_install_tooltip.mdx";
-
-<IntegrationInstallTooltip></IntegrationInstallTooltip>
-
-```bash npm2yarn
-npm install @langchain/openai
-```
-
-import UnifiedModelParamsTooltip from "@mdx_components/unified_model_params_tooltip.mdx";
-
-<UnifiedModelParamsTooltip></UnifiedModelParamsTooltip>
-
-```typescript
-import { OpenAI } from "@langchain/openai";
-
-const model = new OpenAI({
- model: "gpt-3.5-turbo-instruct", // Defaults to "gpt-3.5-turbo-instruct" if no model provided.
- temperature: 0.9,
- apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.OPENAI_API_KEY
-});
-const res = await model.invoke(
- "What would be a good company name for a company that makes colorful socks?"
-);
-console.log({ res });
-```
-
-If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` to your OpenAI organization id, or pass it in as `organization` when
-initializing the model.
-
-## Custom URLs
-
-You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:
-
-```typescript
-const model = new OpenAI({
- temperature: 0.9,
- configuration: {
- baseURL: "https://your_custom_url.com",
- },
-});
-```
-
-You can also pass other `ClientOptions` parameters accepted by the official SDK.
-
-If you are hosting on Azure OpenAI, see the [dedicated page instead](/docs/integrations/llms/azure).
diff --git a/libs/langchain-scripts/src/cli/docs/document_loaders.ts b/libs/langchain-scripts/src/cli/docs/document_loaders.ts
index 09498fd52d30..4a46511b0499 100644
--- a/libs/langchain-scripts/src/cli/docs/document_loaders.ts
+++ b/libs/langchain-scripts/src/cli/docs/document_loaders.ts
@@ -96,15 +96,6 @@ async function promptExtraFields(): Promise<ExtraFields> {
};
}
-/**
- * Will always be community. We care about:
- * Module name
- * file name
- * Is web loader (boolean)
- * Is node only (boolean)
- * Does it run locally (boolean)
- */
-
export async function fillDocLoaderIntegrationDocTemplate(fields: {
packageName: string;
moduleName: string;
diff --git a/libs/langchain-scripts/src/cli/docs/index.ts b/libs/langchain-scripts/src/cli/docs/index.ts
index a7a89745e7a1..d664a220a240 100644
--- a/libs/langchain-scripts/src/cli/docs/index.ts
+++ b/libs/langchain-scripts/src/cli/docs/index.ts
@@ -4,6 +4,7 @@
import { Command } from "commander";
import { fillChatIntegrationDocTemplate } from "./chat.js";
import { fillDocLoaderIntegrationDocTemplate } from "./document_loaders.js";
+import { fillLLMIntegrationDocTemplate } from "./llms.js";
type CLIInput = {
package: string;
@@ -49,9 +50,16 @@ async function main() {
moduleName,
});
break;
+ case "llm":
+ await fillLLMIntegrationDocTemplate({
+ packageName,
+ moduleName,
+ isCommunity,
+ });
+ break;
default:
console.error(
- `Invalid type: ${type}.\nOnly 'chat' and 'doc_loader' are supported at this time.`
+ `Invalid type: ${type}.\nOnly 'chat', 'llm' and 'doc_loader' are supported at this time.`
);
process.exit(1);
}
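
For readers skimming the patch, the new `llm` branch in `index.ts` amounts to mapping a `--type` value onto a template-filler function. Below is a minimal standalone sketch of that dispatch; the names are illustrative (the real CLI parses flags with commander, awaits the async fillers, and calls `process.exit(1)` on bad input rather than throwing):

```typescript
// Illustrative sketch of the doc-type dispatch in index.ts: route a
// `--type` value to its template filler, rejecting unknown types.
type DocType = "chat" | "llm" | "doc_loader";

interface DocFields {
  packageName: string;
  moduleName: string;
}

// Hypothetical fillers standing in for fillChatIntegrationDocTemplate etc.
const fillers: Record<DocType, (fields: DocFields) => string> = {
  chat: (f) => `chat doc for ${f.moduleName}`,
  llm: (f) => `llm doc for ${f.moduleName}`,
  doc_loader: (f) => `doc loader doc for ${f.moduleName}`,
};

function dispatch(type: string, fields: DocFields): string {
  const filler = fillers[type as DocType];
  if (!filler) {
    // The real script logs the supported types and exits instead of throwing.
    throw new Error(`Invalid type: ${type}.`);
  }
  return filler(fields);
}
```
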
diff --git a/libs/langchain-scripts/src/cli/docs/llms.ts b/libs/langchain-scripts/src/cli/docs/llms.ts
new file mode 100644
index 000000000000..bd538bb55ac1
--- /dev/null
+++ b/libs/langchain-scripts/src/cli/docs/llms.ts
@@ -0,0 +1,179 @@
+import * as path from "node:path";
+import * as fs from "node:fs";
+import {
+ boldText,
+ getUserInput,
+ greenText,
+ redBackground,
+} from "../utils/get-input.js";
+
+const PACKAGE_NAME_PLACEHOLDER = "__package_name__";
+const PACKAGE_NAME_SHORT_SNAKE_CASE_PLACEHOLDER =
+ "__package_name_short_snake_case__";
+const PACKAGE_NAME_SNAKE_CASE_PLACEHOLDER = "__package_name_snake_case__";
+const PACKAGE_NAME_PRETTY_PLACEHOLDER = "__package_name_pretty__";
+const PACKAGE_IMPORT_PATH_PLACEHOLDER = "__import_path__";
+const MODULE_NAME_PLACEHOLDER = "__ModuleName__";
+// This should not include the `LLM` suffix, as it's used for API key env var names.
+const MODULE_NAME_ALL_CAPS_PLACEHOLDER = "__MODULE_NAME_ALL_CAPS__";
+
+const SERIALIZABLE_PLACEHOLDER = "__serializable__";
+const LOCAL_PLACEHOLDER = "__local__";
+const PY_SUPPORT_PLACEHOLDER = "__py_support__";
+
+const API_REF_BASE_PACKAGE_URL = `https://api.js.langchain.com/modules/langchain_${PACKAGE_NAME_PLACEHOLDER}.html`;
+const API_REF_BASE_MODULE_URL = `https://api.js.langchain.com/classes/langchain_${PACKAGE_NAME_PLACEHOLDER}.${MODULE_NAME_PLACEHOLDER}.html`;
+
+const TEMPLATE_PATH = path.resolve("./src/cli/docs/templates/llms.ipynb");
+const INTEGRATIONS_DOCS_PATH = path.resolve(
+ "../../docs/core_docs/docs/integrations/llms"
+);
+
+const fetchAPIRefUrl = async (url: string): Promise<boolean> => {
+ try {
+ const res = await fetch(url);
+ if (res.status !== 200) {
+ throw new Error(`API Reference URL ${url} not found.`);
+ }
+ return true;
+ } catch (_) {
+ return false;
+ }
+};
+
+type ExtraFields = {
+ local: boolean;
+ serializable: boolean;
+ pySupport: boolean;
+};
+
+async function promptExtraFields(): Promise<ExtraFields> {
+ const hasLocal = await getUserInput(
+ "Does this integration support local usage? (y/n) ",
+ undefined,
+ true
+ );
+ const hasSerializable = await getUserInput(
+ "Does this integration support serializable output? (y/n) ",
+ undefined,
+ true
+ );
+ const hasPySupport = await getUserInput(
+ "Does this integration have Python support? (y/n) ",
+ undefined,
+ true
+ );
+
+ return {
+ local: hasLocal.toLowerCase() === "y",
+ serializable: hasSerializable.toLowerCase() === "y",
+ pySupport: hasPySupport.toLowerCase() === "y",
+ };
+}
+
+export async function fillLLMIntegrationDocTemplate(fields: {
+ packageName: string;
+ moduleName: string;
+ isCommunity: boolean;
+}) {
+ // Ask the user if they'd like to fill in extra fields; if so, prompt them.
+ let extraFields: ExtraFields | undefined;
+ const shouldPromptExtraFields = await getUserInput(
+ "Would you like to fill out optional fields? (y/n) ",
+ "white_background"
+ );
+ if (shouldPromptExtraFields.toLowerCase() === "y") {
+ extraFields = await promptExtraFields();
+ }
+
+ let formattedApiRefPackageUrl = "";
+ let formattedApiRefModuleUrl = "";
+ if (fields.isCommunity) {
+ formattedApiRefPackageUrl = API_REF_BASE_PACKAGE_URL.replace(
+ PACKAGE_NAME_PLACEHOLDER,
+ `community_llms_${fields.packageName}`
+ );
+ formattedApiRefModuleUrl = API_REF_BASE_MODULE_URL.replace(
+ PACKAGE_NAME_PLACEHOLDER,
+ `community_llms_${fields.packageName}`
+ ).replace(MODULE_NAME_PLACEHOLDER, fields.moduleName);
+ } else {
+ formattedApiRefPackageUrl = API_REF_BASE_PACKAGE_URL.replace(
+ PACKAGE_NAME_PLACEHOLDER,
+ fields.packageName
+ );
+ formattedApiRefModuleUrl = API_REF_BASE_MODULE_URL.replace(
+ PACKAGE_NAME_PLACEHOLDER,
+ fields.packageName
+ ).replace(MODULE_NAME_PLACEHOLDER, fields.moduleName);
+ }
+
+ const success = await Promise.all([
+ fetchAPIRefUrl(formattedApiRefPackageUrl),
+ fetchAPIRefUrl(formattedApiRefModuleUrl),
+ ]);
+ if (success.some((s) => s === false)) {
+ // Log but don't throw, because this might be used before the package is released.
+ console.error("Invalid package or module name. API reference not found.");
+ }
+
+ const packageNameShortSnakeCase = fields.packageName.replaceAll("-", "_");
+ let fullPackageNameSnakeCase = "";
+ let packageNamePretty = "";
+ let fullPackageImportPath = "";
+
+ if (fields.isCommunity) {
+ fullPackageNameSnakeCase = `langchain_community_llms_${packageNameShortSnakeCase}`;
+ fullPackageImportPath = `@langchain/community/llms/${fields.packageName}`;
+ packageNamePretty = "@langchain/community";
+ } else {
+ fullPackageNameSnakeCase = `langchain_${packageNameShortSnakeCase}`;
+ packageNamePretty = `@langchain/${fields.packageName}`;
+ fullPackageImportPath = packageNamePretty;
+ }
+
+ let moduleNameAllCaps = fields.moduleName.toUpperCase();
+ if (moduleNameAllCaps.endsWith("_LLM")) {
+ moduleNameAllCaps = moduleNameAllCaps.replace("_LLM", "");
+ } else if (moduleNameAllCaps.endsWith("LLM")) {
+ moduleNameAllCaps = moduleNameAllCaps.replace("LLM", "");
+ }
+
+ const docTemplate = (await fs.promises.readFile(TEMPLATE_PATH, "utf-8"))
+ .replaceAll(PACKAGE_NAME_PLACEHOLDER, fields.packageName)
+ .replaceAll(PACKAGE_NAME_SNAKE_CASE_PLACEHOLDER, fullPackageNameSnakeCase)
+ .replaceAll(
+ PACKAGE_NAME_SHORT_SNAKE_CASE_PLACEHOLDER,
+ packageNameShortSnakeCase
+ )
+ .replaceAll(PACKAGE_NAME_PRETTY_PLACEHOLDER, packageNamePretty)
+ .replaceAll(PACKAGE_IMPORT_PATH_PLACEHOLDER, fullPackageImportPath)
+ .replaceAll(MODULE_NAME_PLACEHOLDER, fields.moduleName)
+ .replaceAll(MODULE_NAME_ALL_CAPS_PLACEHOLDER, moduleNameAllCaps)
+ .replace(LOCAL_PLACEHOLDER, extraFields?.local ? "✅" : "❌")
+ .replace(
+ SERIALIZABLE_PLACEHOLDER,
+ extraFields?.serializable ? "✅" : "❌"
+ )
+ .replace(PY_SUPPORT_PLACEHOLDER, extraFields?.pySupport ? "✅" : "❌");
+
+ const docPath = path.join(
+ INTEGRATIONS_DOCS_PATH,
+ `${packageNameShortSnakeCase}.ipynb`
+ );
+ await fs.promises.writeFile(docPath, docTemplate);
+ const prettyDocPath = docPath.split("docs/core_docs/")[1];
+
+ const updatePythonDocUrlText = ` ${redBackground(
+ "- Update the Python documentation URL with the proper URL."
+ )}`;
+ const successText = `\nSuccessfully created new LLM integration doc at ${prettyDocPath}.`;
+
+ console.log(
+ `${greenText(successText)}\n
+${boldText("Next steps:")}
+${extraFields?.pySupport ? updatePythonDocUrlText : ""}
+ - Run all code cells in the generated doc to record the outputs.
+ - Add extra sections on integration specific features.\n`
+ );
+}
diff --git a/libs/langchain-scripts/src/cli/docs/templates/llms.ipynb b/libs/langchain-scripts/src/cli/docs/templates/llms.ipynb
new file mode 100644
index 000000000000..659639446b34
--- /dev/null
+++ b/libs/langchain-scripts/src/cli/docs/templates/llms.ipynb
@@ -0,0 +1,223 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "67db2992",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: __ModuleName__\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "9597802c",
+ "metadata": {},
+ "source": [
+ "# __ModuleName__\n",
+ "\n",
+ "- [ ] TODO: Make sure API reference link is correct\n",
+ "\n",
+ "This will help you get started with __ModuleName__ completion models (LLMs) using LangChain. For detailed documentation on `__ModuleName__` features and configuration options, please refer to the [API reference](https://api.js.langchain.com/classes/__package_name_snake_case__.__ModuleName__.html).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "- TODO: Fill in table features.\n",
+ "- TODO: Remove JS support link if not relevant, otherwise ensure link is correct.\n",
+ "- TODO: Make sure API reference links are correct.\n",
+ "\n",
+ "| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/llms/__package_name_short_snake_case__) | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [__ModuleName__](https://api.js.langchain.com/classes/__package_name_snake_case__.__ModuleName__.html) | [__package_name_pretty__](https://api.js.langchain.com/modules/__package_name_snake_case__.html) | __local__ | __serializable__ | __py_support__ | ![NPM - Downloads](https://img.shields.io/npm/dm/__package_name_pretty__?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/__package_name_pretty__?style=flat-square&label=%20&) |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "- [ ] TODO: Update with relevant info.\n",
+ "\n",
+ "To access __ModuleName__ models you'll need to create a/an __ModuleName__ account, get an API key, and install the `__package_name_pretty__` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "- TODO: Update with relevant info.\n",
+ "\n",
+ "Head to (TODO: link) to sign up for __ModuleName__ and generate an API key. Once you've done this, set the `__MODULE_NAME_ALL_CAPS___API_KEY` environment variable:\n",
+ "\n",
+ "```bash\n",
+ "export __MODULE_NAME_ALL_CAPS___API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
+ "\n",
+ "```bash\n",
+ "# export LANGCHAIN_TRACING_V2=\"true\"\n",
+ "# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "### Installation\n",
+ "\n",
+ "The LangChain __ModuleName__ integration lives in the `__package_name_pretty__` package:\n",
+ "\n",
+ "```{=mdx}\n",
+ "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
+ "import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
+ "\n",
+ "<IntegrationInstallTooltip></IntegrationInstallTooltip>\n",
+ "\n",
+ "<Npm2Yarn>\n",
+ "  __package_name_pretty__\n",
+ "</Npm2Yarn>\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0a760037",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate completions:\n",
+ "\n",
+ "- TODO: Update model instantiation with relevant params."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a0562a13",
+ "metadata": {
+ "vscode": {
+ "languageId": "typescript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import { __ModuleName__ } from \"__import_path__\"\n",
+ "\n",
+ "const llm = new __ModuleName__({\n",
+ " model: \"model-name\",\n",
+ " temperature: 0,\n",
+ " maxTokens: undefined,\n",
+ " timeout: undefined,\n",
+ " maxRetries: 2,\n",
+ " // other params...\n",
+ "})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0ee90032",
+ "metadata": {},
+ "source": [
+ "## Invocation\n",
+ "\n",
+ "- [ ] TODO: Run cells so output can be seen."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "035dea0f",
+ "metadata": {
+ "tags": [],
+ "vscode": {
+ "languageId": "typescript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "const inputText = \"__ModuleName__ is an AI company that \"\n",
+ "\n",
+ "const completion = await llm.invoke(inputText)\n",
+ "completion"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "add38532",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our completion model with a prompt template like so:\n",
+ "\n",
+ "- TODO: Run cells so output can be seen."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "078e9db2",
+ "metadata": {
+ "vscode": {
+ "languageId": "typescript"
+ }
+ },
+ "outputs": [],
+ "source": [
+ "import { PromptTemplate } from \"@langchain/core/prompts\"\n",
+ "\n",
+ "const prompt = new PromptTemplate({\n",
+ " template: \"How to say {input} in {output_language}:\\n\",\n",
+ " inputVariables: [\"input\", \"output_language\"],\n",
+ "})\n",
+ "\n",
+ "const chain = prompt.pipe(llm);\n",
+ "await chain.invoke(\n",
+ " {\n",
+ " output_language: \"German\",\n",
+ " input: \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e99eef30",
+ "metadata": {},
+ "source": [
+ "## TODO: Any functionality specific to this model provider\n",
+ "\n",
+ "E.g. creating/using finetuned models via this provider. Delete if not relevant"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e9bdfcef",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all __ModuleName__ features and configurations head to the API reference: https://api.js.langchain.com/classes/__package_name_snake_case__.__ModuleName__.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3.11.1 64-bit",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.7"
+ },
+ "vscode": {
+ "interpreter": {
+ "hash": "e971737741ff4ec9aff7dc6155a1060a59a8a6d52c757dbbe66bf8ee389494b1"
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}