diff --git a/docs/core_docs/docs/integrations/chat/index.mdx b/docs/core_docs/docs/integrations/chat/index.mdx
index a94f1ccd6076..3432ab9aa4bf 100644
--- a/docs/core_docs/docs/integrations/chat/index.mdx
+++ b/docs/core_docs/docs/integrations/chat/index.mdx
@@ -29,6 +29,7 @@ If you'd like to write your own chat model, see [this how-to](/docs/how_to/custo
| [ChatOllama](/docs/integrations/chat/ollama/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatOpenAI](/docs/integrations/chat/openai/) | ✅ | ✅ | ✅ | ✅ | ✅ |
| [ChatTogetherAI](/docs/integrations/chat/togetherai/) | ✅ | ✅ | ✅ | ✅ | ✅ |
+| [ChatXAI](/docs/integrations/chat/xai/) | ✅ | ✅ | ✅ | ✅ | ❌ |
## All chat models
diff --git a/docs/core_docs/docs/integrations/chat/xai.ipynb b/docs/core_docs/docs/integrations/chat/xai.ipynb
new file mode 100644
index 000000000000..e07a8079dd0c
--- /dev/null
+++ b/docs/core_docs/docs/integrations/chat/xai.ipynb
@@ -0,0 +1,301 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {
+ "vscode": {
+ "languageId": "raw"
+ }
+ },
+ "source": [
+ "---\n",
+ "sidebar_label: xAI\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e49f1e0d",
+ "metadata": {},
+ "source": [
+ "# ChatXAI\n",
+ "\n",
+ "[xAI](https://x.ai/) is an artificial intelligence company that develops large language models (LLMs). Their flagship model, Grok, is trained on real-time X (formerly Twitter) data and aims to provide witty, personality-rich responses while maintaining high capability on technical tasks.\n",
+ "\n",
+ "This guide will help you getting started with `ChatXAI` [chat models](/docs/concepts/chat_models). For detailed documentation of all `ChatXAI` features and configurations head to the [API reference](https://api.js.langchain.com/classes/langchain_community_chat_models_fireworks.ChatXAI.html).\n",
+ "\n",
+ "## Overview\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
+ "| [ChatXAI](https://api.js.langchain.com/classes/_langchain_xai.ChatXAI.html) | [`@langchain/xai`](https://www.npmjs.com/package/@langchain/xai) | ❌ | ✅ | ❌ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/xai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/xai?style=flat-square&label=%20&) |\n",
+ "\n",
+ "### Model features\n",
+ "\n",
+ "See the links in the table headers below for guides on how to use specific features.\n",
+ "\n",
+ "| [Tool calling](/docs/how_to/tool_calling) | [Structured output](/docs/how_to/structured_output/) | JSON mode | [Image input](/docs/how_to/multimodal_inputs/) | Audio input | Video input | [Token-level streaming](/docs/how_to/chat_streaming/) | [Token usage](/docs/how_to/chat_token_usage_tracking/) | [Logprobs](/docs/how_to/logprobs/) |\n",
+ "| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n",
+ "| ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | \n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To access `ChatXAI` models you'll need to create an xAI account, [get an API key](https://console.x.ai/), and install the `@langchain/xai` integration package.\n",
+ "\n",
+ "### Credentials\n",
+ "\n",
+ "Head to [the xAI website](https://x.ai) to sign up to xAI and generate an API key. Once you've done this set the `XAI_API_KEY` environment variable:\n",
+ "\n",
+ "```bash\n",
+ "export XAI_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
+ "\n",
+ "```bash\n",
+ "# export LANGCHAIN_TRACING_V2=\"true\"\n",
+ "# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
+ "```\n",
+ "\n",
+ "### Installation\n",
+ "\n",
+ "The LangChain `ChatXAI` integration lives in the `@langchain/xai` package:\n",
+ "\n",
+ "```{=mdx}\n",
+ "\n",
+ "import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
+ "import Npm2Yarn from \"@theme/Npm2Yarn\";\n",
+ "\n",
+ "\n",
+ "\n",
+ "\n",
+ " @langchain/xai @langchain/core\n",
+ "\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our model object and generate chat completions:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "cb09c344-1836-4e0c-acf8-11d13ac1dbae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import { ChatXAI } from \"@langchain/xai\" \n",
+ "\n",
+ "const llm = new ChatXAI({\n",
+ " model: \"grok-beta\", // default\n",
+ " temperature: 0,\n",
+ " maxTokens: undefined,\n",
+ " maxRetries: 2,\n",
+ " // other params...\n",
+ "})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2b4f3e15",
+ "metadata": {},
+ "source": [
+ "## Invocation"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "62e0dbc3",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIMessage {\n",
+ " \"id\": \"71d7e3d8-30dd-472c-8038-b6b283dcee63\",\n",
+ " \"content\": \"J'adore programmer.\",\n",
+ " \"additional_kwargs\": {},\n",
+ " \"response_metadata\": {\n",
+ " \"tokenUsage\": {\n",
+ " \"promptTokens\": 30,\n",
+ " \"completionTokens\": 6,\n",
+ " \"totalTokens\": 36\n",
+ " },\n",
+ " \"finish_reason\": \"stop\",\n",
+ " \"usage\": {\n",
+ " \"prompt_tokens\": 30,\n",
+ " \"completion_tokens\": 6,\n",
+ " \"total_tokens\": 36\n",
+ " },\n",
+ " \"system_fingerprint\": \"fp_3e3898d4ce\"\n",
+ " },\n",
+ " \"tool_calls\": [],\n",
+ " \"invalid_tool_calls\": [],\n",
+ " \"usage_metadata\": {\n",
+ " \"output_tokens\": 6,\n",
+ " \"input_tokens\": 30,\n",
+ " \"total_tokens\": 36,\n",
+ " \"input_token_details\": {},\n",
+ " \"output_token_details\": {}\n",
+ " }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "const aiMsg = await llm.invoke([\n",
+ " [\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
+ " ],\n",
+ " [\"human\", \"I love programming.\"],\n",
+ "])\n",
+ "console.log(aiMsg)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "d86145b3-bfef-46e8-b227-4dda5c9c2705",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "J'adore programmer.\n"
+ ]
+ }
+ ],
+ "source": [
+ "console.log(aiMsg.content)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "18e2bfc0-7e78-4528-a73f-499ac150dca8",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can [chain](/docs/how_to/sequence/) our model with a prompt template like so:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "e197d1d7-a070-4c96-9f8a-a0e86d046e0b",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "AIMessage {\n",
+ " \"id\": \"b2738008-8247-40e1-81dc-d9bf437a1a0c\",\n",
+ " \"content\": \"Ich liebe das Programmieren.\",\n",
+ " \"additional_kwargs\": {},\n",
+ " \"response_metadata\": {\n",
+ " \"tokenUsage\": {\n",
+ " \"promptTokens\": 25,\n",
+ " \"completionTokens\": 7,\n",
+ " \"totalTokens\": 32\n",
+ " },\n",
+ " \"finish_reason\": \"stop\",\n",
+ " \"usage\": {\n",
+ " \"prompt_tokens\": 25,\n",
+ " \"completion_tokens\": 7,\n",
+ " \"total_tokens\": 32\n",
+ " },\n",
+ " \"system_fingerprint\": \"fp_3e3898d4ce\"\n",
+ " },\n",
+ " \"tool_calls\": [],\n",
+ " \"invalid_tool_calls\": [],\n",
+ " \"usage_metadata\": {\n",
+ " \"output_tokens\": 7,\n",
+ " \"input_tokens\": 25,\n",
+ " \"total_tokens\": 32,\n",
+ " \"input_token_details\": {},\n",
+ " \"output_token_details\": {}\n",
+ " }\n",
+ "}\n"
+ ]
+ }
+ ],
+ "source": [
+ "import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n",
+ "\n",
+ "const prompt = ChatPromptTemplate.fromMessages(\n",
+ " [\n",
+ " [\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
+ " ],\n",
+ " [\"human\", \"{input}\"],\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "const chain = prompt.pipe(llm);\n",
+ "await chain.invoke(\n",
+ " {\n",
+ " input_language: \"English\",\n",
+ " output_language: \"German\",\n",
+ " input: \"I love programming.\",\n",
+ " }\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d1ee55bc-ffc8-4cfa-801c-993953a08cfd",
+ "metadata": {},
+ "source": [
+ "Behind the scenes, xAI uses the OpenAI SDK and OpenAI compatible API."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all ChatXAI features and configurations head to the API reference: https://api.js.langchain.com/classes/_langchain_xai.ChatXAI.html"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "TypeScript",
+ "language": "typescript",
+ "name": "tslab"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "mode": "typescript",
+ "name": "javascript",
+ "typescript": true
+ },
+ "file_extension": ".ts",
+ "mimetype": "text/typescript",
+ "name": "typescript",
+ "version": "3.7.2"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/examples/package.json b/examples/package.json
index 48ad17ca960b..7ace75f8b9d2 100644
--- a/examples/package.json
+++ b/examples/package.json
@@ -61,6 +61,7 @@
"@langchain/scripts": ">=0.1.0 <0.2.0",
"@langchain/textsplitters": "workspace:*",
"@langchain/weaviate": "workspace:*",
+ "@langchain/xai": "workspace:*",
"@langchain/yandex": "workspace:*",
"@layerup/layerup-security": "^1.5.12",
"@opensearch-project/opensearch": "^2.2.0",
diff --git a/libs/langchain-xai/.eslintrc.cjs b/libs/langchain-xai/.eslintrc.cjs
new file mode 100644
index 000000000000..6503be533320
--- /dev/null
+++ b/libs/langchain-xai/.eslintrc.cjs
@@ -0,0 +1,74 @@
+module.exports = {
+ extends: [
+ "airbnb-base",
+ "eslint:recommended",
+ "prettier",
+ "plugin:@typescript-eslint/recommended",
+ ],
+ parserOptions: {
+ ecmaVersion: 12,
+ parser: "@typescript-eslint/parser",
+ project: "./tsconfig.json",
+ sourceType: "module",
+ },
+ plugins: ["@typescript-eslint", "no-instanceof"],
+ ignorePatterns: [
+ ".eslintrc.cjs",
+ "scripts",
+ "node_modules",
+ "dist",
+ "dist-cjs",
+ "*.js",
+ "*.cjs",
+ "*.d.ts",
+ ],
+ rules: {
+ "no-process-env": 2,
+ "no-instanceof/no-instanceof": 2,
+ "@typescript-eslint/explicit-module-boundary-types": 0,
+ "@typescript-eslint/no-empty-function": 0,
+ "@typescript-eslint/no-shadow": 0,
+ "@typescript-eslint/no-empty-interface": 0,
+ "@typescript-eslint/no-use-before-define": ["error", "nofunc"],
+ "@typescript-eslint/no-unused-vars": ["warn", { args: "none" }],
+ "@typescript-eslint/no-floating-promises": "error",
+ "@typescript-eslint/no-misused-promises": "error",
+ camelcase: 0,
+ "class-methods-use-this": 0,
+ "import/extensions": [2, "ignorePackages"],
+ "import/no-extraneous-dependencies": [
+ "error",
+ { devDependencies: ["**/*.test.ts"] },
+ ],
+ "import/no-unresolved": 0,
+ "import/prefer-default-export": 0,
+ "keyword-spacing": "error",
+ "max-classes-per-file": 0,
+ "max-len": 0,
+ "no-await-in-loop": 0,
+ "no-bitwise": 0,
+ "no-console": 0,
+ "no-restricted-syntax": 0,
+ "no-shadow": 0,
+ "no-continue": 0,
+ "no-void": 0,
+ "no-underscore-dangle": 0,
+ "no-use-before-define": 0,
+ "no-useless-constructor": 0,
+ "no-return-await": 0,
+ "consistent-return": 0,
+ "no-else-return": 0,
+ "func-names": 0,
+ "no-lonely-if": 0,
+ "prefer-rest-params": 0,
+ "new-cap": ["error", { properties: false, capIsNew: false }],
+ },
+ overrides: [
+ {
+ files: ['**/*.test.ts'],
+ rules: {
+ '@typescript-eslint/no-unused-vars': 'off'
+ }
+ }
+ ]
+};
diff --git a/libs/langchain-xai/.gitignore b/libs/langchain-xai/.gitignore
new file mode 100644
index 000000000000..c10034e2f1be
--- /dev/null
+++ b/libs/langchain-xai/.gitignore
@@ -0,0 +1,7 @@
+index.cjs
+index.js
+index.d.ts
+index.d.cts
+node_modules
+dist
+.yarn
diff --git a/libs/langchain-xai/.prettierrc b/libs/langchain-xai/.prettierrc
new file mode 100644
index 000000000000..ba08ff04f677
--- /dev/null
+++ b/libs/langchain-xai/.prettierrc
@@ -0,0 +1,19 @@
+{
+ "$schema": "https://json.schemastore.org/prettierrc",
+ "printWidth": 80,
+ "tabWidth": 2,
+ "useTabs": false,
+ "semi": true,
+ "singleQuote": false,
+ "quoteProps": "as-needed",
+ "jsxSingleQuote": false,
+ "trailingComma": "es5",
+ "bracketSpacing": true,
+ "arrowParens": "always",
+ "requirePragma": false,
+ "insertPragma": false,
+ "proseWrap": "preserve",
+ "htmlWhitespaceSensitivity": "css",
+ "vueIndentScriptAndStyle": false,
+ "endOfLine": "lf"
+}
diff --git a/libs/langchain-xai/.release-it.json b/libs/langchain-xai/.release-it.json
new file mode 100644
index 000000000000..06850ca85be1
--- /dev/null
+++ b/libs/langchain-xai/.release-it.json
@@ -0,0 +1,12 @@
+{
+ "github": {
+ "release": true,
+ "autoGenerate": true,
+ "tokenRef": "GITHUB_TOKEN_RELEASE"
+ },
+ "npm": {
+ "versionArgs": [
+ "--workspaces-update=false"
+ ]
+ }
+}
diff --git a/libs/langchain-xai/LICENSE b/libs/langchain-xai/LICENSE
new file mode 100644
index 000000000000..e7530f5e9e10
--- /dev/null
+++ b/libs/langchain-xai/LICENSE
@@ -0,0 +1,21 @@
+The MIT License
+
+Copyright (c) 2024 LangChain
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
\ No newline at end of file
diff --git a/libs/langchain-xai/README.md b/libs/langchain-xai/README.md
new file mode 100644
index 000000000000..a4db12a6f6d1
--- /dev/null
+++ b/libs/langchain-xai/README.md
@@ -0,0 +1,76 @@
+# @langchain/xai
+
+This package contains the LangChain.js integrations for xAI.
+
+## Installation
+
+```bash npm2yarn
+npm install @langchain/xai @langchain/core
+```
+
+## Chat models
+
+This package adds support for xAI chat model inference.
+
+Set the necessary environment variable (or pass it in via the constructor):
+
+```bash
+export XAI_API_KEY=
+```
+
+```typescript
+import { ChatXAI } from "@langchain/xai";
+import { HumanMessage } from "@langchain/core/messages";
+
+const model = new ChatXAI({
+ apiKey: process.env.XAI_API_KEY, // Default value.
+});
+
+const message = new HumanMessage("What color is the sky?");
+
+const res = await model.invoke([message]);
+```
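+
+You can also stream responses token-by-token. A minimal sketch, reusing the `model` and `message` from above:
+
+```typescript
+const stream = await model.stream([message]);
+
+for await (const chunk of stream) {
+  console.log(chunk.content);
+}
+```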
+
+## Development
+
+To develop the `@langchain/xai` package, you'll need to follow these instructions:
+
+### Install dependencies
+
+```bash
+yarn install
+```
+
+### Build the package
+
+```bash
+yarn build
+```
+
+Or from the repo root:
+
+```bash
+yarn build --filter=@langchain/xai
+```
+
+### Run tests
+
+Test files should live within a `tests/` folder in the `src/` folder. Unit tests should end in `.test.ts` and integration tests should
+end in `.int.test.ts`:
+
+```bash
+$ yarn test
+$ yarn test:int
+```
+
+### Lint & Format
+
+Run the linter & formatter to ensure your code is up to standard:
+
+```bash
+yarn lint && yarn format
+```
+
+### Adding new entrypoints
+
+If you add a new file to be exported, either import & re-export from `src/index.ts`, or add it to the `entrypoints` field in the `config` variable located inside `langchain.config.js` and run `yarn build` to generate the new entrypoint.
diff --git a/libs/langchain-xai/jest.config.cjs b/libs/langchain-xai/jest.config.cjs
new file mode 100644
index 000000000000..a06cb3338861
--- /dev/null
+++ b/libs/langchain-xai/jest.config.cjs
@@ -0,0 +1,20 @@
+/** @type {import('ts-jest').JestConfigWithTsJest} */
+module.exports = {
+ preset: "ts-jest/presets/default-esm",
+ testEnvironment: "./jest.env.cjs",
+ modulePathIgnorePatterns: ["dist/", "docs/"],
+ moduleNameMapper: {
+ "^(\\.{1,2}/.*)\\.js$": "$1",
+ },
+ transform: {
+ "^.+\\.tsx?$": ["@swc/jest"],
+ },
+ transformIgnorePatterns: [
+ "/node_modules/",
+ "\\.pnp\\.[^\\/]+$",
+ "./scripts/jest-setup-after-env.js",
+ ],
+ setupFiles: ["dotenv/config"],
+ testTimeout: 20_000,
+ passWithNoTests: true,
+};
diff --git a/libs/langchain-xai/jest.env.cjs b/libs/langchain-xai/jest.env.cjs
new file mode 100644
index 000000000000..2ccedccb8672
--- /dev/null
+++ b/libs/langchain-xai/jest.env.cjs
@@ -0,0 +1,12 @@
+const { TestEnvironment } = require("jest-environment-node");
+
+class AdjustedTestEnvironmentToSupportFloat32Array extends TestEnvironment {
+ constructor(config, context) {
+ // Make `instanceof Float32Array` return true in tests
+ // to avoid https://github.com/xenova/transformers.js/issues/57 and https://github.com/jestjs/jest/issues/2549
+ super(config, context);
+ this.global.Float32Array = Float32Array;
+ }
+}
+
+module.exports = AdjustedTestEnvironmentToSupportFloat32Array;
diff --git a/libs/langchain-xai/langchain.config.js b/libs/langchain-xai/langchain.config.js
new file mode 100644
index 000000000000..19512b23d29b
--- /dev/null
+++ b/libs/langchain-xai/langchain.config.js
@@ -0,0 +1,21 @@
+import { resolve, dirname } from "node:path";
+import { fileURLToPath } from "node:url";
+
+/**
+ * @param {string} relativePath
+ * @returns {string}
+ */
+function abs(relativePath) {
+ return resolve(dirname(fileURLToPath(import.meta.url)), relativePath);
+}
+
+export const config = {
+ internals: [/node\:/, /@langchain\/core\//],
+ entrypoints: {
+ index: "index",
+ },
+ tsConfigPath: resolve("./tsconfig.json"),
+ cjsSource: "./dist-cjs",
+ cjsDestination: "./dist",
+ abs,
+}
\ No newline at end of file
diff --git a/libs/langchain-xai/package.json b/libs/langchain-xai/package.json
new file mode 100644
index 000000000000..5a690400a871
--- /dev/null
+++ b/libs/langchain-xai/package.json
@@ -0,0 +1,94 @@
+{
+ "name": "@langchain/xai",
+ "version": "0.0.1",
+ "description": "xAI integration for LangChain.js",
+ "type": "module",
+ "engines": {
+ "node": ">=18"
+ },
+ "main": "./index.js",
+ "types": "./index.d.ts",
+ "repository": {
+ "type": "git",
+ "url": "git@github.com:langchain-ai/langchainjs.git"
+ },
+ "homepage": "https://github.com/langchain-ai/langchainjs/tree/main/libs/langchain-xai/",
+ "scripts": {
+ "build": "yarn turbo:command build:internal --filter=@langchain/xai",
+ "build:internal": "yarn lc_build --create-entrypoints --pre --tree-shaking",
+ "lint:eslint": "NODE_OPTIONS=--max-old-space-size=4096 eslint --cache --ext .ts,.js src/",
+ "lint:dpdm": "dpdm --exit-code circular:1 --no-warning --no-tree src/*.ts src/**/*.ts",
+ "lint": "yarn lint:eslint && yarn lint:dpdm",
+ "lint:fix": "yarn lint:eslint --fix && yarn lint:dpdm",
+ "clean": "rm -rf .turbo dist/",
+ "prepack": "yarn build",
+ "test": "NODE_OPTIONS=--experimental-vm-modules jest --testPathIgnorePatterns=\\.int\\.test.ts --testTimeout 30000 --maxWorkers=50%",
+ "test:watch": "NODE_OPTIONS=--experimental-vm-modules jest --watch --testPathIgnorePatterns=\\.int\\.test.ts",
+ "test:single": "NODE_OPTIONS=--experimental-vm-modules yarn run jest --config jest.config.cjs --testTimeout 100000",
+ "test:int": "NODE_OPTIONS=--experimental-vm-modules jest --testPathPattern=\\.int\\.test.ts --testTimeout 100000 --maxWorkers=50%",
+ "test:standard:unit": "NODE_OPTIONS=--experimental-vm-modules jest --testPathPattern=\\.standard\\.test.ts --testTimeout 100000 --maxWorkers=50%",
+ "test:standard:int": "NODE_OPTIONS=--experimental-vm-modules jest --testPathPattern=\\.standard\\.int\\.test.ts --testTimeout 100000 --maxWorkers=50%",
+ "test:standard": "yarn test:standard:unit && yarn test:standard:int",
+ "format": "prettier --config .prettierrc --write \"src\"",
+ "format:check": "prettier --config .prettierrc --check \"src\""
+ },
+ "author": "LangChain",
+ "license": "MIT",
+ "dependencies": {
+ "@langchain/openai": "~0.3.0"
+ },
+ "peerDependencies": {
+ "@langchain/core": ">=0.2.21 <0.4.0"
+ },
+ "devDependencies": {
+ "@jest/globals": "^29.5.0",
+ "@langchain/core": "workspace:*",
+ "@langchain/openai": "workspace:^",
+ "@langchain/scripts": ">=0.1.0 <0.2.0",
+ "@langchain/standard-tests": "0.0.0",
+ "@swc/core": "^1.3.90",
+ "@swc/jest": "^0.2.29",
+ "@tsconfig/recommended": "^1.0.3",
+ "@types/uuid": "^9",
+ "@typescript-eslint/eslint-plugin": "^6.12.0",
+ "@typescript-eslint/parser": "^6.12.0",
+ "dotenv": "^16.3.1",
+ "dpdm": "^3.12.0",
+ "eslint": "^8.33.0",
+ "eslint-config-airbnb-base": "^15.0.0",
+ "eslint-config-prettier": "^8.6.0",
+ "eslint-plugin-import": "^2.27.5",
+ "eslint-plugin-no-instanceof": "^1.0.1",
+ "eslint-plugin-prettier": "^4.2.1",
+ "jest": "^29.5.0",
+ "jest-environment-node": "^29.6.4",
+ "prettier": "^2.8.3",
+ "release-it": "^17.6.0",
+ "rollup": "^4.5.2",
+ "ts-jest": "^29.1.0",
+ "typescript": "<5.2.0",
+ "zod": "^3.22.4"
+ },
+ "publishConfig": {
+ "access": "public"
+ },
+ "exports": {
+ ".": {
+ "types": {
+ "import": "./index.d.ts",
+ "require": "./index.d.cts",
+ "default": "./index.d.ts"
+ },
+ "import": "./index.js",
+ "require": "./index.cjs"
+ },
+ "./package.json": "./package.json"
+ },
+ "files": [
+ "dist/",
+ "index.cjs",
+ "index.js",
+ "index.d.ts",
+ "index.d.cts"
+ ]
+}
diff --git a/libs/langchain-xai/scripts/jest-setup-after-env.js b/libs/langchain-xai/scripts/jest-setup-after-env.js
new file mode 100644
index 000000000000..7323083d0ea5
--- /dev/null
+++ b/libs/langchain-xai/scripts/jest-setup-after-env.js
@@ -0,0 +1,9 @@
+import { awaitAllCallbacks } from "@langchain/core/callbacks/promises";
+import { afterAll, jest } from "@jest/globals";
+
+afterAll(awaitAllCallbacks);
+
+// Allow console.log to be disabled in tests
+if (process.env.DISABLE_CONSOLE_LOGS === "true") {
+ console.log = jest.fn();
+}
diff --git a/libs/langchain-xai/src/chat_models.ts b/libs/langchain-xai/src/chat_models.ts
new file mode 100644
index 000000000000..5a1f5177c246
--- /dev/null
+++ b/libs/langchain-xai/src/chat_models.ts
@@ -0,0 +1,497 @@
+import {
+ BaseChatModelCallOptions,
+ BindToolsInput,
+ LangSmithParams,
+ type BaseChatModelParams,
+} from "@langchain/core/language_models/chat_models";
+import { Serialized } from "@langchain/core/load/serializable";
+import { getEnvironmentVariable } from "@langchain/core/utils/env";
+import {
+ type OpenAICoreRequestOptions,
+ type OpenAIClient,
+ ChatOpenAI,
+ OpenAIToolChoice,
+} from "@langchain/openai";
+
+type ChatXAIToolType = BindToolsInput | OpenAIClient.ChatCompletionTool;
+
+export interface ChatXAICallOptions extends BaseChatModelCallOptions {
+ headers?: Record<string, string>;
+ tools?: ChatXAIToolType[];
+ tool_choice?: OpenAIToolChoice | string | "auto" | "any";
+}
+
+export interface ChatXAIInput extends BaseChatModelParams {
+ /**
+ * The xAI API key to use for requests.
+ * @default process.env.XAI_API_KEY
+ */
+ apiKey?: string;
+ /**
+ * The name of the model to use.
+ * @default "grok-beta"
+ */
+ model?: string;
+ /**
+ * Up to 4 sequences where the API will stop generating further tokens. The
+ * returned text will not contain the stop sequence.
+ * Alias for `stopSequences`
+ */
+ stop?: Array<string>;
+ /**
+ * Up to 4 sequences where the API will stop generating further tokens. The
+ * returned text will not contain the stop sequence.
+ */
+ stopSequences?: Array<string>;
+ /**
+ * Whether or not to stream responses.
+ */
+ streaming?: boolean;
+ /**
+ * The temperature to use for sampling.
+ * @default 0.7
+ */
+ temperature?: number;
+ /**
+ * The maximum number of tokens that the model can generate in a single response.
+ * This limit ensures computational efficiency and resource management.
+ */
+ maxTokens?: number;
+}
+
+/**
+ * xAI chat model integration.
+ *
+ * The xAI API is compatible with the OpenAI API, with some limitations.
+ *
+ * Setup:
+ * Install `@langchain/xai` and set an environment variable named `XAI_API_KEY`.
+ *
+ * ```bash
+ * npm install @langchain/xai
+ * export XAI_API_KEY="your-api-key"
+ * ```
+ *
+ * ## [Constructor args](https://api.js.langchain.com/classes/_langchain_xai.ChatXAI.html#constructor)
+ *
+ * ## [Runtime args](https://api.js.langchain.com/interfaces/_langchain_xai.ChatXAICallOptions.html)
+ *
+ * Runtime args can be passed as the second argument to any of the base runnable methods: `.invoke`, `.stream`, `.batch`, etc.
+ * They can also be passed via `.bind`, or the second arg in `.bindTools`, as shown in the examples below:
+ *
+ * ```typescript
+ * // When calling `.bind`, call options should be passed via the first argument
+ * const llmWithArgsBound = llm.bind({
+ * stop: ["\n"],
+ * tools: [...],
+ * });
+ *
+ * // When calling `.bindTools`, call options should be passed via the second argument
+ * const llmWithTools = llm.bindTools(
+ * [...],
+ * {
+ * tool_choice: "auto",
+ * }
+ * );
+ * ```
+ *
+ * ## Examples
+ *
+ * <details open>
+ * <summary><strong>Instantiate</strong></summary>
+ *
+ * ```typescript
+ * import { ChatXAI } from '@langchain/xai';
+ *
+ * const llm = new ChatXAI({
+ * model: "grok-beta",
+ * temperature: 0,
+ * // other params...
+ * });
+ * ```
+ * </details>
+ *
+ * <br />
+ *
+ * <details>
+ * <summary><strong>Invoking</strong></summary>
+ *
+ * ```typescript
+ * const input = `Translate "I love programming" into French.`;
+ *
+ * // Models also accept a list of chat messages or a formatted prompt
+ * const result = await llm.invoke(input);
+ * console.log(result);
+ * ```
+ *
+ * ```txt
+ * AIMessage {
+ * "content": "The French translation of \"I love programming\" is \"J'aime programmer\". In this sentence, \"J'aime\" is the first person singular conjugation of the French verb \"aimer\" which means \"to love\", and \"programmer\" is the French infinitive for \"to program\". I hope this helps! Let me know if you have any other questions.",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "tokenUsage": {
+ * "completionTokens": 82,
+ * "promptTokens": 20,
+ * "totalTokens": 102
+ * },
+ * "finish_reason": "stop"
+ * },
+ * "tool_calls": [],
+ * "invalid_tool_calls": []
+ * }
+ * ```
+ * </details>
+ *
+ * <br />
+ *
+ * <details>
+ * <summary><strong>Streaming Chunks</strong></summary>
+ *
+ * ```typescript
+ * for await (const chunk of await llm.stream(input)) {
+ * console.log(chunk);
+ * }
+ * ```
+ *
+ * ```txt
+ * AIMessageChunk {
+ * "content": "",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": "The",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": " French",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": " translation",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": " of",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": " \"",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": "I",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": " love",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * ...
+ * AIMessageChunk {
+ * "content": ".",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": null
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * AIMessageChunk {
+ * "content": "",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": "stop"
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * ```
+ * </details>
+ *
+ * <br />
+ *
+ * <details>
+ * <summary><strong>Aggregate Streamed Chunks</strong></summary>
+ *
+ * ```typescript
+ * import { AIMessageChunk } from '@langchain/core/messages';
+ * import { concat } from '@langchain/core/utils/stream';
+ *
+ * const stream = await llm.stream(input);
+ * let full: AIMessageChunk | undefined;
+ * for await (const chunk of stream) {
+ * full = !full ? chunk : concat(full, chunk);
+ * }
+ * console.log(full);
+ * ```
+ *
+ * ```txt
+ * AIMessageChunk {
+ * "content": "The French translation of \"I love programming\" is \"J'aime programmer\". In this sentence, \"J'aime\" is the first person singular conjugation of the French verb \"aimer\" which means \"to love\", and \"programmer\" is the French infinitive for \"to program\". I hope this helps! Let me know if you have any other questions.",
+ * "additional_kwargs": {},
+ * "response_metadata": {
+ * "finishReason": "stop"
+ * },
+ * "tool_calls": [],
+ * "tool_call_chunks": [],
+ * "invalid_tool_calls": []
+ * }
+ * ```
+ * </details>
+ *
+ * <br />
+ *
+ * <details>
+ * <summary><strong>Bind tools</strong></summary>
+ *
+ * ```typescript
+ * import { z } from 'zod';
+ *
+ * const llmForToolCalling = new ChatXAI({
+ * model: "grok-beta",
+ * temperature: 0,
+ * // other params...
+ * });
+ *
+ * const GetWeather = {
+ * name: "GetWeather",
+ * description: "Get the current weather in a given location",
+ * schema: z.object({
+ * location: z.string().describe("The city and state, e.g. San Francisco, CA")
+ * }),
+ * }
+ *
+ * const GetPopulation = {
+ * name: "GetPopulation",
+ * description: "Get the current population in a given location",
+ * schema: z.object({
+ * location: z.string().describe("The city and state, e.g. San Francisco, CA")
+ * }),
+ * }
+ *
+ * const llmWithTools = llmForToolCalling.bindTools([GetWeather, GetPopulation]);
+ * const aiMsg = await llmWithTools.invoke(
+ * "Which city is hotter today and which is bigger: LA or NY?"
+ * );
+ * console.log(aiMsg.tool_calls);
+ * ```
+ *
+ * ```txt
+ * [
+ * {
+ * name: 'GetWeather',
+ * args: { location: 'Los Angeles, CA' },
+ * type: 'tool_call',
+ * id: 'call_cd34'
+ * },
+ * {
+ * name: 'GetWeather',
+ * args: { location: 'New York, NY' },
+ * type: 'tool_call',
+ * id: 'call_68rf'
+ * },
+ * {
+ * name: 'GetPopulation',
+ * args: { location: 'Los Angeles, CA' },
+ * type: 'tool_call',
+ * id: 'call_f81z'
+ * },
+ * {
+ * name: 'GetPopulation',
+ * args: { location: 'New York, NY' },
+ * type: 'tool_call',
+ * id: 'call_8byt'
+ * }
+ * ]
+ * ```
+ * </details>
+ *
+ * <br />
+ *
+ * <details>
+ * <summary><strong>Structured Output</strong></summary>
+ *
+ * ```typescript
+ * import { z } from 'zod';
+ *
+ * const Joke = z.object({
+ * setup: z.string().describe("The setup of the joke"),
+ * punchline: z.string().describe("The punchline to the joke"),
+ * rating: z.number().optional().describe("How funny the joke is, from 1 to 10")
+ * }).describe('Joke to tell user.');
+ *
+ * const structuredLlm = llmForToolCalling.withStructuredOutput(Joke, { name: "Joke" });
+ * const jokeResult = await structuredLlm.invoke("Tell me a joke about cats");
+ * console.log(jokeResult);
+ * ```
+ *
+ * ```txt
+ * {
+ * setup: "Why don't cats play poker in the wild?",
+ * punchline: 'Because there are too many cheetahs.'
+ * }
+ * ```
+ * </details>
+ *
+ * <br />
+ */
+export class ChatXAI extends ChatOpenAI<ChatXAICallOptions> {
+ static lc_name() {
+ return "ChatXAI";
+ }
+
+ _llmType() {
+ return "xAI";
+ }
+
+ get lc_secrets(): { [key: string]: string } | undefined {
+ return {
+ apiKey: "XAI_API_KEY",
+ };
+ }
+
+ lc_serializable = true;
+
+ lc_namespace = ["langchain", "chat_models", "xai"];
+
+ constructor(fields?: Partial<ChatXAIInput>) {
+ const apiKey = fields?.apiKey || getEnvironmentVariable("XAI_API_KEY");
+ if (!apiKey) {
+ throw new Error(
+ `xAI API key not found. Please set the XAI_API_KEY environment variable or pass it via the "apiKey" field.`
+ );
+ }
+
+ super({
+ ...fields,
+ model: fields?.model || "grok-beta",
+ apiKey,
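+ // xAI exposes an OpenAI-compatible API, so we point the underlying
+ // OpenAI client at the xAI endpoint.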
+ configuration: {
+ baseURL: "https://api.x.ai/v1",
+ },
+ });
+ }
+
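+ /**
+ * Serializes the model, dropping the inherited `openai_api_key` and
+ * `configuration` kwargs so that only the `XAI_API_KEY` secret reference
+ * is emitted.
+ */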
+ toJSON(): Serialized {
+ const result = super.toJSON();
+
+ if (
+ "kwargs" in result &&
+ typeof result.kwargs === "object" &&
+ result.kwargs != null
+ ) {
+ delete result.kwargs.openai_api_key;
+ delete result.kwargs.configuration;
+ }
+
+ return result;
+ }
+
+ getLsParams(options: this["ParsedCallOptions"]): LangSmithParams {
+ const params = super.getLsParams(options);
+ params.ls_provider = "xai";
+ return params;
+ }
+
+ async completionWithRetry(
+ request: OpenAIClient.Chat.ChatCompletionCreateParamsStreaming,
+ options?: OpenAICoreRequestOptions
+ ): Promise<AsyncIterable<OpenAIClient.Chat.Completions.ChatCompletionChunk>>;
+
+ async completionWithRetry(
+ request: OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming,
+ options?: OpenAICoreRequestOptions
+ ): Promise<OpenAIClient.Chat.Completions.ChatCompletion>;
+
+ /**
+ * Calls the xAI API with retry logic in case of failures.
+ * @param request The request to send to the xAI API.
+ * @param options Optional configuration for the API call.
+ * @returns The response from the xAI API.
+ */
+ async completionWithRetry(
+ request:
+ | OpenAIClient.Chat.ChatCompletionCreateParamsStreaming
+ | OpenAIClient.Chat.ChatCompletionCreateParamsNonStreaming,
+ options?: OpenAICoreRequestOptions
+ ): Promise<
+ | AsyncIterable<OpenAIClient.Chat.Completions.ChatCompletionChunk>
+ | OpenAIClient.Chat.Completions.ChatCompletion
+ > {
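+ // Remove OpenAI request parameters that the xAI API does not support.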
+ delete request.frequency_penalty;
+ delete request.presence_penalty;
+ delete request.logit_bias;
+ delete request.functions;
+
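+ // Normalize missing message content to an empty string, since the xAI
+ // API expects `content` to be set.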
+ const newRequestMessages = request.messages.map((msg) => {
+ if (!msg.content) {
+ return {
+ ...msg,
+ content: "",
+ };
+ }
+ return msg;
+ });
+
+ const newRequest = {
+ ...request,
+ messages: newRequestMessages,
+ };
+
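+ // Both branches delegate to the same parent method; the explicit split
+ // lets TypeScript resolve the correct streaming/non-streaming overload.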
+ if (newRequest.stream === true) {
+ return super.completionWithRetry(newRequest, options);
+ }
+
+ return super.completionWithRetry(newRequest, options);
+ }
+}
diff --git a/libs/langchain-xai/src/index.ts b/libs/langchain-xai/src/index.ts
new file mode 100644
index 000000000000..38c7cea7f478
--- /dev/null
+++ b/libs/langchain-xai/src/index.ts
@@ -0,0 +1 @@
+export * from "./chat_models.js";
diff --git a/libs/langchain-xai/src/tests/chat_models.int.test.ts b/libs/langchain-xai/src/tests/chat_models.int.test.ts
new file mode 100644
index 000000000000..efb6b04371fd
--- /dev/null
+++ b/libs/langchain-xai/src/tests/chat_models.int.test.ts
@@ -0,0 +1,233 @@
+import { test } from "@jest/globals";
+import {
+ AIMessage,
+ AIMessageChunk,
+ HumanMessage,
+ ToolMessage,
+} from "@langchain/core/messages";
+import { tool } from "@langchain/core/tools";
+import { z } from "zod";
+import { concat } from "@langchain/core/utils/stream";
+import { ChatXAI } from "../chat_models.js";
+
+test("invoke", async () => {
+ const chat = new ChatXAI({
+ maxRetries: 0,
+ });
+ const message = new HumanMessage("What color is the sky?");
+ const res = await chat.invoke([message]);
+ // console.log({ res });
+ expect(res.content.length).toBeGreaterThan(10);
+});
+
+test("invoke with stop sequence", async () => {
+ const chat = new ChatXAI({
+ maxRetries: 0,
+ });
+ const message = new HumanMessage("Count to ten.");
+ const res = await chat.bind({ stop: ["5", "five"] }).invoke([message]);
+ // console.log({ res });
+ expect((res.content as string).toLowerCase()).not.toContain("6");
+ expect((res.content as string).toLowerCase()).not.toContain("six");
+});
+
+test("stream should respect passed headers", async () => {
+ const chat = new ChatXAI({
+ maxRetries: 0,
+ });
+ const message = new HumanMessage("Count to ten.");
+ await expect(async () => {
+ await chat.stream([message], {
+ headers: { Authorization: "badbadbad" },
+ });
+ }).rejects.toThrowError();
+});
+
+test("generate", async () => {
+ const chat = new ChatXAI();
+ const message = new HumanMessage("Hello!");
+ const res = await chat.generate([[message]]);
+ // console.log(JSON.stringify(res, null, 2));
+ expect(res.generations[0][0].text.length).toBeGreaterThan(10);
+});
+
+test("streaming", async () => {
+ const chat = new ChatXAI();
+ const message = new HumanMessage("What color is the sky?");
+ const stream = await chat.stream([message]);
+ let iters = 0;
+ let finalRes = "";
+ for await (const chunk of stream) {
+ iters += 1;
+ finalRes += chunk.content;
+ }
+ // console.log({ finalRes, iters });
+ expect(iters).toBeGreaterThan(1);
+});
+
+test("invoke with bound tools", async () => {
+ const chat = new ChatXAI({
+ maxRetries: 0,
+ model: "grok-beta",
+ });
+ const message = new HumanMessage("What is the current weather in Hawaii?");
+ const res = await chat
+ .bind({
+ tools: [
+ {
+ type: "function",
+ function: {
+ name: "get_current_weather",
+ description: "Get the current weather in a given location",
+ parameters: {
+ type: "object",
+ properties: {
+ location: {
+ type: "string",
+ description: "The city and state, e.g. San Francisco, CA",
+ },
+ unit: { type: "string", enum: ["celsius", "fahrenheit"] },
+ },
+ required: ["location"],
+ },
+ },
+ },
+ ],
+ tool_choice: "auto",
+ })
+ .invoke([message]);
+ // console.log(JSON.stringify(res));
+ expect(res.additional_kwargs.tool_calls?.length).toEqual(1);
+ expect(
+ JSON.parse(
+ res.additional_kwargs?.tool_calls?.[0].function.arguments ?? "{}"
+ )
+ ).toEqual(res.tool_calls?.[0].args);
+});
+
+test("stream with bound tools, yielding a single chunk", async () => {
+ const chat = new ChatXAI({
+ maxRetries: 0,
+ });
+ const message = new HumanMessage("What is the current weather in Hawaii?");
+ const stream = await chat
+ .bind({
+ tools: [
+ {
+ type: "function",
+ function: {
+ name: "get_current_weather",
+ description: "Get the current weather in a given location",
+ parameters: {
+ type: "object",
+ properties: {
+ location: {
+ type: "string",
+ description: "The city and state, e.g. San Francisco, CA",
+ },
+ unit: { type: "string", enum: ["celsius", "fahrenheit"] },
+ },
+ required: ["location"],
+ },
+ },
+ },
+ ],
+ tool_choice: "auto",
+ })
+ .stream([message]);
+ // eslint-disable-next-line @typescript-eslint/ban-ts-comment
+ // @ts-expect-error unused var
+ for await (const chunk of stream) {
+ // console.log(JSON.stringify(chunk));
+ }
+});
+
+test("Few shotting with tool calls", async () => {
+ const chat = new ChatXAI({
+ model: "grok-beta",
+ temperature: 0,
+ }).bind({
+ tools: [
+ {
+ type: "function",
+ function: {
+ name: "get_current_weather",
+ description: "Get the current weather in a given location",
+ parameters: {
+ type: "object",
+ properties: {
+ location: {
+ type: "string",
+ description: "The city and state, e.g. San Francisco, CA",
+ },
+ unit: { type: "string", enum: ["celsius", "fahrenheit"] },
+ },
+ required: ["location"],
+ },
+ },
+ },
+ ],
+ tool_choice: "auto",
+ });
+ const res = await chat.invoke([
+ new HumanMessage("What is the weather in SF?"),
+ new AIMessage({
+ content: "",
+ tool_calls: [
+ {
+ id: "12345",
+ name: "get_current_weather",
+ args: {
+ location: "SF",
+ },
+ },
+ ],
+ }),
+ new ToolMessage({
+ tool_call_id: "12345",
+ content: "It is currently 24 degrees with hail in SF.",
+ }),
+ new AIMessage("It is currently 24 degrees in SF with hail in SF."),
+ new HumanMessage("What did you say the weather was?"),
+ ]);
+ // console.log(res);
+ expect(res.content).toContain("24");
+});
+
+test("Groq can stream tool calls", async () => {
+ const model = new ChatXAI({
+ model: "grok-beta",
+ temperature: 0,
+ });
+
+ const weatherTool = tool((_) => "The temperature is 24 degrees with hail.", {
+ name: "get_current_weather",
+ schema: z.object({
+ location: z
+ .string()
+ .describe("The location to get the current weather for."),
+ }),
+ description: "Get the current weather in a given location.",
+ });
+
+ const modelWithTools = model.bindTools([weatherTool]);
+
+ const stream = await modelWithTools.stream(
+ "What is the weather in San Francisco?"
+ );
+
+ let finalMessage: AIMessageChunk | undefined;
+ for await (const chunk of stream) {
+ finalMessage = !finalMessage ? chunk : concat(finalMessage, chunk);
+ }
+
+ expect(finalMessage).toBeDefined();
+ if (!finalMessage) return;
+
+ expect(finalMessage.tool_calls?.[0]).toBeDefined();
+ if (!finalMessage.tool_calls?.[0]) return;
+
+ expect(finalMessage.tool_calls?.[0].name).toBe("get_current_weather");
+ expect(finalMessage.tool_calls?.[0].args).toHaveProperty("location");
+ expect(finalMessage.tool_calls?.[0].id).toBeDefined();
+});
diff --git a/libs/langchain-xai/src/tests/chat_models.standard.int.test.ts b/libs/langchain-xai/src/tests/chat_models.standard.int.test.ts
new file mode 100644
index 000000000000..0eb03c4f111f
--- /dev/null
+++ b/libs/langchain-xai/src/tests/chat_models.standard.int.test.ts
@@ -0,0 +1,37 @@
+/* eslint-disable no-process-env */
+import { test, expect } from "@jest/globals";
+import { ChatModelIntegrationTests } from "@langchain/standard-tests";
+import { AIMessageChunk } from "@langchain/core/messages";
+import { ChatXAI, ChatXAICallOptions } from "../chat_models.js";
+
+class ChatXAIStandardIntegrationTests extends ChatModelIntegrationTests<
+ ChatXAICallOptions,
+ AIMessageChunk
+> {
+ constructor() {
+ if (!process.env.XAI_API_KEY) {
+ throw new Error(
+ "Can not run xAI integration tests because XAI_API_KEY is not set"
+ );
+ }
+ super({
+ Cls: ChatXAI,
+ chatModelHasToolCalling: true,
+ chatModelHasStructuredOutput: true,
+ constructorArgs: {
+ maxRetries: 1,
+ temperature: 0,
+ },
+ });
+ }
+}
+
+const testClass = new ChatXAIStandardIntegrationTests();
+
+test("ChatXAIStandardIntegrationTests", async () => {
+ console.warn = (..._args: unknown[]) => {
+ // no-op
+ };
+ const testResults = await testClass.runTests();
+ expect(testResults).toBe(true);
+});
diff --git a/libs/langchain-xai/src/tests/chat_models.standard.test.ts b/libs/langchain-xai/src/tests/chat_models.standard.test.ts
new file mode 100644
index 000000000000..99274a58ca9a
--- /dev/null
+++ b/libs/langchain-xai/src/tests/chat_models.standard.test.ts
@@ -0,0 +1,39 @@
+/* eslint-disable no-process-env */
+import { test, expect } from "@jest/globals";
+import { ChatModelUnitTests } from "@langchain/standard-tests";
+import { AIMessageChunk } from "@langchain/core/messages";
+import { ChatXAI, ChatXAICallOptions } from "../chat_models.js";
+
+class ChatXAIStandardUnitTests extends ChatModelUnitTests<
+ ChatXAICallOptions,
+ AIMessageChunk
+> {
+ constructor() {
+ super({
+ Cls: ChatXAI,
+ chatModelHasToolCalling: true,
+ chatModelHasStructuredOutput: true,
+ constructorArgs: {},
+ });
+ // This must be set so method like `.bindTools` or `.withStructuredOutput`
+ // which we call after instantiating the model will work.
+ // (constructor will throw if API key is not set)
+ process.env.XAI_API_KEY = "test";
+ }
+
+ testChatModelInitApiKey() {
+ // Unset the API key env var here so this test can properly check
+ // the API key class arg.
+ process.env.XAI_API_KEY = "";
+ super.testChatModelInitApiKey();
+ // Re-set the API key env var here so other tests can run properly.
+ process.env.XAI_API_KEY = "test";
+ }
+}
+
+const testClass = new ChatXAIStandardUnitTests();
+
+test("ChatXAIStandardUnitTests", () => {
+ const testResults = testClass.runTests();
+ expect(testResults).toBe(true);
+});
diff --git a/libs/langchain-xai/src/tests/chat_models.test.ts b/libs/langchain-xai/src/tests/chat_models.test.ts
new file mode 100644
index 000000000000..0412b08853eb
--- /dev/null
+++ b/libs/langchain-xai/src/tests/chat_models.test.ts
@@ -0,0 +1,20 @@
+/* eslint-disable no-process-env */
+import { test, expect } from "@jest/globals";
+import { ChatXAI } from "../chat_models.js";
+
+test("Serialization", () => {
+ const model = new ChatXAI({
+ apiKey: "foo",
+ });
+ expect(JSON.stringify(model)).toEqual(
+ `{"lc":1,"type":"constructor","id":["langchain","chat_models","xai","ChatXAI"],"kwargs":{"api_key":{"lc":1,"type":"secret","id":["XAI_API_KEY"]}}}`
+ );
+});
+
+test("Serialization with no params", () => {
+ process.env.GROQ_API_KEY = "foo";
+ const model = new ChatXAI();
+ expect(JSON.stringify(model)).toEqual(
+ `{"lc":1,"type":"constructor","id":["langchain","chat_models","xai","ChatXAI"],"kwargs":{"api_key":{"lc":1,"type":"secret","id":["XAI_API_KEY"]}}}`
+ );
+});
diff --git a/libs/langchain-xai/src/tests/chat_models_structured_output.int.test.ts b/libs/langchain-xai/src/tests/chat_models_structured_output.int.test.ts
new file mode 100644
index 000000000000..61071a5b2f4f
--- /dev/null
+++ b/libs/langchain-xai/src/tests/chat_models_structured_output.int.test.ts
@@ -0,0 +1,269 @@
+import { z } from "zod";
+import { zodToJsonSchema } from "zod-to-json-schema";
+import { ChatPromptTemplate } from "@langchain/core/prompts";
+import { AIMessage } from "@langchain/core/messages";
+import { ChatXAI } from "../chat_models.js";
+
+test("withStructuredOutput zod schema function calling", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const calculatorSchema = z.object({
+ operation: z.enum(["add", "subtract", "multiply", "divide"]),
+ number1: z.number(),
+ number2: z.number(),
+ });
+ const modelWithStructuredOutput = model.withStructuredOutput(
+ calculatorSchema,
+ {
+ name: "calculator",
+ }
+ );
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ ["system", "You are VERY bad at math and must always use a calculator."],
+ ["human", "Please help me!! What is 2 + 2?"],
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+ expect("operation" in result).toBe(true);
+ expect("number1" in result).toBe(true);
+ expect("number2" in result).toBe(true);
+});
+
+test("withStructuredOutput zod schema JSON mode", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const calculatorSchema = z.object({
+ operation: z.enum(["add", "subtract", "multiply", "divide"]),
+ number1: z.number(),
+ number2: z.number(),
+ });
+ const modelWithStructuredOutput = model.withStructuredOutput(
+ calculatorSchema,
+ {
+ name: "calculator",
+ method: "jsonMode",
+ }
+ );
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ [
+ "system",
+ `You are VERY bad at math and must always use a calculator.
+Respond with a JSON object containing three keys:
+'operation': the type of operation to execute, either 'add', 'subtract', 'multiply' or 'divide',
+'number1': the first number to operate on,
+'number2': the second number to operate on.
+`,
+ ],
+ ["human", "Please help me!! What is 2 + 2?"],
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+ expect("operation" in result).toBe(true);
+ expect("number1" in result).toBe(true);
+ expect("number2" in result).toBe(true);
+});
+
+test("withStructuredOutput JSON schema function calling", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const calculatorSchema = z.object({
+ operation: z.enum(["add", "subtract", "multiply", "divide"]),
+ number1: z.number(),
+ number2: z.number(),
+ });
+ const modelWithStructuredOutput = model.withStructuredOutput(
+ zodToJsonSchema(calculatorSchema),
+ {
+ name: "calculator",
+ }
+ );
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ ["system", `You are VERY bad at math and must always use a calculator.`],
+ ["human", "Please help me!! What is 2 + 2?"],
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+ expect("operation" in result).toBe(true);
+ expect("number1" in result).toBe(true);
+ expect("number2" in result).toBe(true);
+});
+
+test("withStructuredOutput OpenAI function definition function calling", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const calculatorSchema = z.object({
+ operation: z.enum(["add", "subtract", "multiply", "divide"]),
+ number1: z.number(),
+ number2: z.number(),
+ });
+ const modelWithStructuredOutput = model.withStructuredOutput({
+ name: "calculator",
+ parameters: zodToJsonSchema(calculatorSchema),
+ });
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ "system",
+ `You are VERY bad at math and must always use a calculator.`,
+ "human",
+ "Please help me!! What is 2 + 2?",
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+ expect("operation" in result).toBe(true);
+ expect("number1" in result).toBe(true);
+ expect("number2" in result).toBe(true);
+});
+
+test("withStructuredOutput JSON schema JSON mode", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const calculatorSchema = z.object({
+ operation: z.enum(["add", "subtract", "multiply", "divide"]),
+ number1: z.number(),
+ number2: z.number(),
+ });
+ const modelWithStructuredOutput = model.withStructuredOutput(
+ zodToJsonSchema(calculatorSchema),
+ {
+ name: "calculator",
+ method: "jsonMode",
+ }
+ );
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ [
+ "system",
+ `You are VERY bad at math and must always use a calculator.
+Respond with a JSON object containing three keys:
+'operation': the type of operation to execute, either 'add', 'subtract', 'multiply' or 'divide',
+'number1': the first number to operate on,
+'number2': the second number to operate on.
+`,
+ ],
+ ["human", "Please help me!! What is 2 + 2?"],
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+ expect("operation" in result).toBe(true);
+ expect("number1" in result).toBe(true);
+ expect("number2" in result).toBe(true);
+});
+
+test("withStructuredOutput JSON schema", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const jsonSchema = {
+ title: "calculator",
+ description: "A simple calculator",
+ type: "object",
+ properties: {
+ operation: {
+ type: "string",
+ enum: ["add", "subtract", "multiply", "divide"],
+ },
+ number1: { type: "number" },
+ number2: { type: "number" },
+ },
+ };
+ const modelWithStructuredOutput = model.withStructuredOutput(jsonSchema);
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ [
+ "system",
+ `You are VERY bad at math and must always use a calculator.
+Respond with a JSON object containing three keys:
+'operation': the type of operation to execute, either 'add', 'subtract', 'multiply' or 'divide',
+'number1': the first number to operate on,
+'number2': the second number to operate on.
+`,
+ ],
+ ["human", "Please help me!! What is 2 + 2?"],
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+ expect("operation" in result).toBe(true);
+ expect("number1" in result).toBe(true);
+ expect("number2" in result).toBe(true);
+});
+
+test("withStructuredOutput includeRaw true", async () => {
+ const model = new ChatXAI({
+ temperature: 0,
+ model: "grok-beta",
+ });
+
+ const calculatorSchema = z.object({
+ operation: z.enum(["add", "subtract", "multiply", "divide"]),
+ number1: z.number(),
+ number2: z.number(),
+ });
+ const modelWithStructuredOutput = model.withStructuredOutput(
+ calculatorSchema,
+ {
+ name: "calculator",
+ includeRaw: true,
+ }
+ );
+
+ const prompt = ChatPromptTemplate.fromMessages([
+ ["system", "You are VERY bad at math and must always use a calculator."],
+ ["human", "Please help me!! What is 2 + 2?"],
+ ]);
+ const chain = prompt.pipe(modelWithStructuredOutput);
+ const result = await chain.invoke({});
+ // console.log(result);
+
+ expect("parsed" in result).toBe(true);
+ // Need to make TS happy :)
+ if (!("parsed" in result)) {
+ throw new Error("parsed not in result");
+ }
+ const { parsed } = result;
+ expect("operation" in parsed).toBe(true);
+ expect("number1" in parsed).toBe(true);
+ expect("number2" in parsed).toBe(true);
+
+ expect("raw" in result).toBe(true);
+ // Need to make TS happy :)
+ if (!("raw" in result)) {
+ throw new Error("raw not in result");
+ }
+ const { raw } = result as { raw: AIMessage };
+
+ expect(raw.tool_calls?.[0].args).toBeDefined();
+ if (!raw.tool_calls?.[0].args) {
+ throw new Error("args not in tool call");
+ }
+ expect(raw.tool_calls?.length).toBeGreaterThan(0);
+ expect(raw.tool_calls?.[0].name).toBe("calculator");
+ expect("operation" in raw.tool_calls[0].args).toBe(true);
+ expect("number1" in raw.tool_calls[0].args).toBe(true);
+ expect("number2" in raw.tool_calls[0].args).toBe(true);
+});
diff --git a/libs/langchain-xai/tsconfig.cjs.json b/libs/langchain-xai/tsconfig.cjs.json
new file mode 100644
index 000000000000..3b7026ea406c
--- /dev/null
+++ b/libs/langchain-xai/tsconfig.cjs.json
@@ -0,0 +1,8 @@
+{
+ "extends": "./tsconfig.json",
+ "compilerOptions": {
+ "module": "commonjs",
+ "declaration": false
+ },
+ "exclude": ["node_modules", "dist", "docs", "**/tests"]
+}
diff --git a/libs/langchain-xai/tsconfig.json b/libs/langchain-xai/tsconfig.json
new file mode 100644
index 000000000000..bc85d83b6229
--- /dev/null
+++ b/libs/langchain-xai/tsconfig.json
@@ -0,0 +1,23 @@
+{
+ "extends": "@tsconfig/recommended",
+ "compilerOptions": {
+ "outDir": "../dist",
+ "rootDir": "./src",
+ "target": "ES2021",
+ "lib": ["ES2021", "ES2022.Object", "DOM"],
+ "module": "ES2020",
+ "moduleResolution": "nodenext",
+ "esModuleInterop": true,
+ "declaration": true,
+ "noImplicitReturns": true,
+ "noFallthroughCasesInSwitch": true,
+ "noUnusedLocals": true,
+ "noUnusedParameters": true,
+ "useDefineForClassFields": true,
+ "strictPropertyInitialization": false,
+ "allowJs": true,
+ "strict": true
+ },
+ "include": ["src/**/*"],
+ "exclude": ["node_modules", "dist", "docs"]
+}
diff --git a/libs/langchain-xai/turbo.json b/libs/langchain-xai/turbo.json
new file mode 100644
index 000000000000..d024cee15c81
--- /dev/null
+++ b/libs/langchain-xai/turbo.json
@@ -0,0 +1,11 @@
+{
+ "extends": ["//"],
+ "pipeline": {
+ "build": {
+ "outputs": ["**/dist/**"]
+ },
+ "build:internal": {
+ "dependsOn": ["^build:internal"]
+ }
+ }
+}
diff --git a/yarn.lock b/yarn.lock
index 2fd5980a162b..17738b1968c9 100644
--- a/yarn.lock
+++ b/yarn.lock
@@ -12820,6 +12820,42 @@ __metadata:
languageName: unknown
linkType: soft
+"@langchain/xai@workspace:*, @langchain/xai@workspace:libs/langchain-xai":
+ version: 0.0.0-use.local
+ resolution: "@langchain/xai@workspace:libs/langchain-xai"
+ dependencies:
+ "@jest/globals": ^29.5.0
+ "@langchain/core": "workspace:*"
+ "@langchain/openai": "workspace:^"
+ "@langchain/scripts": ">=0.1.0 <0.2.0"
+ "@langchain/standard-tests": 0.0.0
+ "@swc/core": ^1.3.90
+ "@swc/jest": ^0.2.29
+ "@tsconfig/recommended": ^1.0.3
+ "@types/uuid": ^9
+ "@typescript-eslint/eslint-plugin": ^6.12.0
+ "@typescript-eslint/parser": ^6.12.0
+ dotenv: ^16.3.1
+ dpdm: ^3.12.0
+ eslint: ^8.33.0
+ eslint-config-airbnb-base: ^15.0.0
+ eslint-config-prettier: ^8.6.0
+ eslint-plugin-import: ^2.27.5
+ eslint-plugin-no-instanceof: ^1.0.1
+ eslint-plugin-prettier: ^4.2.1
+ jest: ^29.5.0
+ jest-environment-node: ^29.6.4
+ prettier: ^2.8.3
+ release-it: ^17.6.0
+ rollup: ^4.5.2
+ ts-jest: ^29.1.0
+ typescript: <5.2.0
+ zod: ^3.22.4
+ peerDependencies:
+ "@langchain/core": ">=0.2.21 <0.4.0"
+ languageName: unknown
+ linkType: soft
+
"@langchain/yandex@workspace:*, @langchain/yandex@workspace:libs/langchain-yandex":
version: 0.0.0-use.local
resolution: "@langchain/yandex@workspace:libs/langchain-yandex"
@@ -27304,6 +27340,7 @@ __metadata:
"@langchain/scripts": ">=0.1.0 <0.2.0"
"@langchain/textsplitters": "workspace:*"
"@langchain/weaviate": "workspace:*"
+ "@langchain/xai": "workspace:*"
"@langchain/yandex": "workspace:*"
"@layerup/layerup-security": ^1.5.12
"@opensearch-project/opensearch": ^2.2.0