Commit
Merge branch 'main' into brace/stream-tokens-tools-standard-test
bracesproul authored Jul 24, 2024
2 parents bae980f + e51ea3e commit 0560863
Showing 300 changed files with 17,299 additions and 4,942 deletions.
17 changes: 17 additions & 0 deletions .github/workflows/ci.yml
@@ -43,3 +43,20 @@ jobs:
    uses: ./.github/workflows/test-exports.yml
    secrets: inherit

  platform-compatibility:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    steps:
      - uses: actions/checkout@v4
      - name: Use Node.js ${{ env.NODE_VERSION }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "yarn"
      - name: Install dependencies
        run: yarn install --immutable
      - name: Build `@langchain/core`
        run: yarn build --filter=@langchain/core
560 changes: 280 additions & 280 deletions .yarn/releases/yarn-3.4.1.cjs → .yarn/releases/yarn-3.5.1.cjs

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions .yarnrc.yml
@@ -5,14 +5,14 @@ plugins:
    spec: "@yarnpkg/plugin-typescript"

supportedArchitectures:
  os:
    - darwin
    - linux
  cpu:
    - x64
    - arm64
  libc:
    - glibc
    - musl
  os:
    - darwin
    - linux

yarnPath: .yarn/releases/yarn-3.4.1.cjs
yarnPath: .yarn/releases/yarn-3.5.1.cjs
2 changes: 1 addition & 1 deletion README.md
@@ -20,7 +20,7 @@ You can use npm, yarn, or pnpm to install LangChain.js

LangChain is written in TypeScript and can be used in:

- Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x
- Node.js (ESM and CommonJS) - 18.x, 19.x, 20.x, 22.x
- Cloudflare Workers
- Vercel / Next.js (Browser, Serverless and Edge functions)
- Supabase Edge Functions
2 changes: 1 addition & 1 deletion deno.json
@@ -9,7 +9,7 @@
    "@langchain/textsplitters": "npm:@langchain/textsplitters",
    "@langchain/google-vertexai-web": "npm:@langchain/google-vertexai-web",
    "@langchain/mistralai": "npm:@langchain/mistralai",
    "@langchain/core/": "npm:/@langchain/core/",
    "@langchain/core/": "npm:/@langchain/core@0.2.16/",
    "@langchain/pinecone": "npm:@langchain/pinecone",
    "@langchain/google-common": "npm:@langchain/google-common",
    "@langchain/langgraph": "npm:/@langchain/[email protected]",
22 changes: 22 additions & 0 deletions docs/core_docs/.gitignore
@@ -57,14 +57,28 @@ docs/how_to/trim_messages.md
docs/how_to/trim_messages.mdx
docs/how_to/tools_prompting.md
docs/how_to/tools_prompting.mdx
docs/how_to/tools_error.md
docs/how_to/tools_error.mdx
docs/how_to/tools_builtin.md
docs/how_to/tools_builtin.mdx
docs/how_to/tool_streaming.md
docs/how_to/tool_streaming.mdx
docs/how_to/tool_stream_events.md
docs/how_to/tool_stream_events.mdx
docs/how_to/tool_runtime.md
docs/how_to/tool_runtime.mdx
docs/how_to/tool_results_pass_to_model.md
docs/how_to/tool_results_pass_to_model.mdx
docs/how_to/tool_configure.md
docs/how_to/tool_configure.mdx
docs/how_to/tool_choice.md
docs/how_to/tool_choice.mdx
docs/how_to/tool_calls_multimodal.md
docs/how_to/tool_calls_multimodal.mdx
docs/how_to/tool_calling.md
docs/how_to/tool_calling.mdx
docs/how_to/tool_artifacts.md
docs/how_to/tool_artifacts.mdx
docs/how_to/structured_output.md
docs/how_to/structured_output.mdx
docs/how_to/streaming.md
@@ -155,8 +169,14 @@ docs/how_to/document_loader_html.md
docs/how_to/document_loader_html.mdx
docs/how_to/custom_tools.md
docs/how_to/custom_tools.mdx
docs/how_to/custom_llm.md
docs/how_to/custom_llm.mdx
docs/how_to/custom_chat.md
docs/how_to/custom_chat.mdx
docs/how_to/custom_callbacks.md
docs/how_to/custom_callbacks.mdx
docs/how_to/convert_runnable_to_tool.md
docs/how_to/convert_runnable_to_tool.mdx
docs/how_to/code_splitter.md
docs/how_to/code_splitter.mdx
docs/how_to/chatbots_tools.md
@@ -171,6 +191,8 @@ docs/how_to/character_text_splitter.md
docs/how_to/character_text_splitter.mdx
docs/how_to/callbacks_runtime.md
docs/how_to/callbacks_runtime.mdx
docs/how_to/callbacks_custom_events.md
docs/how_to/callbacks_custom_events.mdx
docs/how_to/callbacks_constructor.md
docs/how_to/callbacks_constructor.mdx
docs/how_to/callbacks_backgrounding.md
124 changes: 106 additions & 18 deletions docs/core_docs/docs/concepts.mdx
@@ -16,6 +16,7 @@ import useBaseUrl from "@docusaurus/useBaseUrl";
    dark: useBaseUrl("/svg/langchain_stack_062024_dark.svg"),
  }}
  title="LangChain Framework Overview"
  style={{ width: "100%" }}
/>

### `@langchain/core`
@@ -71,6 +72,9 @@ After that, you can enable it by setting environment variables:
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__...

# Reduce tracing latency if you are not in a serverless environment
# export LANGCHAIN_CALLBACKS_BACKGROUND=true
```

## LangChain Expression Language
@@ -260,7 +264,7 @@ This is where information like log-probs and token usage may be stored.
These represent a decision from a language model to call a tool. They are included as part of an `AIMessage` output.
They can be accessed from there with the `.tool_calls` property.

This property returns an array of objects. Each object has the following keys:
This property returns a list of `ToolCall`s. A `ToolCall` is an object with the following fields:

- `name`: The name of the tool that should be called.
- `args`: The arguments to that tool.
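The fields above can be sketched as a TypeScript type. This is an illustrative sketch, not the actual type shipped in `@langchain/core`, which may carry additional fields:

```typescript
// Illustrative sketch of the ToolCall shape described above.
// The real type lives in @langchain/core and may have more fields.
interface ToolCall {
  name: string; // name of the tool to call
  args: Record<string, unknown>; // model-generated arguments
  id?: string; // id used to match tool results back to this call
}

// Hypothetical example: what a model-emitted tool call might look like.
const call: ToolCall = {
  name: "get_weather",
  args: { city: "Paris" },
  id: "call_123",
};
```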
@@ -270,13 +274,18 @@ This property returns an array of objects. Each object has the following keys:

This represents a system message, which tells the model how to behave. Not every model provider supports this.

#### FunctionMessage
#### ToolMessage

This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.
This represents the result of a tool call. In addition to `role` and `content`, this message has:

#### ToolMessage
- a `tool_call_id` field which conveys the id of the call to the tool that was called to produce this result.
- an `artifact` field which can be used to pass along arbitrary artifacts of the tool execution which are useful to track but which should not be sent to the model.

#### (Legacy) FunctionMessage

This represents the result of a tool call. This is distinct from a FunctionMessage in order to match OpenAI's `function` and `tool` message types. In addition to `role` and `content`, this message has a `tool_call_id` parameter which conveys the id of the call to the tool that was called to produce this result.
This is a legacy message type, corresponding to OpenAI's legacy function-calling API. ToolMessage should be used instead to correspond to the updated tool-calling API.

This represents the result of a function call. In addition to `role` and `content`, this message has a `name` parameter which conveys the name of the function that was called to produce this result.

### Prompt templates

@@ -539,26 +548,104 @@ For specifics on how to use retrievers, see the [relevant how-to guides here](/d

<span data-heading-keywords="tool,tools"></span>

Tools are interfaces that an agent, chain, or LLM can use to interact with the world.
They combine a few things:
Tools are utilities designed to be called by a model: their inputs are designed to be generated by models, and their outputs are designed to be passed back to models.
Tools are needed whenever you want a model to control parts of your code or call out to external APIs.

1. The name of the tool
2. A description of what the tool is
3. JSON schema of what the inputs to the tool are
4. The function to call
5. Whether the result of a tool should be returned directly to the user
A tool consists of:

It is useful to have all this information because this information can be used to build action-taking systems! The name, description, and JSON schema can be used to prompt the LLM so it knows how to specify what action to take, and then the function to call is equivalent to taking that action.
1. The name of the tool.
2. A description of what the tool does.
3. A JSON schema defining the inputs to the tool.
4. A function.

The simpler the input to a tool is, the easier it is for an LLM to be able to use it.
Many agents will only work with tools that have a single string input.
When a tool is bound to a model, the name, description and JSON schema are provided as context to the model.

Importantly, the name, description, and JSON schema (if used) are all used in the prompt. Therefore, it is really important that they are clear and describe exactly how the tool should be used. You may need to change the default name, description, or JSON schema if the LLM is not understanding how to use the tool.
Given a list of tools and a set of instructions, a model can request to call one or more tools with specific inputs.
Typical usage may look like the following:

For specifics on how to use tools, see the [relevant how-to guides here](/docs/how_to/#tools).
```ts
// Define a list of tools
const tools = [...];
const llmWithTools = llm.bindTools(tools);

const aiMessage = await llmWithTools.invoke("do xyz...");
// AIMessage(tool_calls=[ToolCall(...), ...], ...)
```

The `AIMessage` returned from the model MAY have `tool_calls` associated with it.
Read [this guide](/docs/concepts/#aimessage) for more information on what the response type may look like.

Once the tools are chosen, you will usually want to invoke them and then pass the results back to the model so that it can complete whatever task
it's performing.

There are generally two different ways to invoke the tool and pass back the response:

#### Invoke with just the arguments

When you invoke a tool with just the arguments, you will get back the raw tool output (usually a string).
Here's what this looks like:

```ts
import { ToolMessage } from "@langchain/core/messages";

const toolCall = aiMessage.tool_calls[0]; // ToolCall(args={...}, id=..., ...)
const toolOutput = await tool.invoke(toolCall.args);
const toolMessage = new ToolMessage({
  content: toolOutput,
  name: toolCall.name,
  tool_call_id: toolCall.id,
});
```

Note that the `content` field will generally be passed back to the model.
If you do not want the raw tool response to be passed to the model but still want to keep it around,
you can transform the tool output before sending it and attach the untransformed output as an artifact (read more about [`ToolMessage.artifact` here](/docs/concepts/#toolmessage)):

```ts
// Assumes `toolCall` and `toolOutput` from the example above
const responseForModel = someTransformation(toolOutput);
const toolMessage = new ToolMessage({
  content: responseForModel,
  tool_call_id: toolCall.id,
  name: toolCall.name,
  artifact: toolOutput,
});
```

#### Invoke with `ToolCall`

The other way to invoke a tool is to call it with the full `ToolCall` that was generated by the model.
When you do this, the tool will return a `ToolMessage`.
The benefit of this is that you don't have to write the logic yourself to transform the tool output into a `ToolMessage`.
Here's what this looks like:

```ts
const toolCall = aiMessage.tool_calls[0];
const toolMessage = await tool.invoke(toolCall);
```

If you are invoking the tool this way and want to include an [artifact](/docs/concepts/#toolmessage) for the `ToolMessage`, you will need to have the tool return a tuple
with two items: the `content` and the `artifact`.
Read more about [defining tools that return artifacts here](/docs/how_to/tool_artifacts/).
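As a plain-TypeScript sketch of that shape (the handler below is hypothetical, not the library API), a tool can report both the model-facing `content` and the raw `artifact` as a two-element tuple:

```typescript
// Hypothetical sketch: a tool handler that returns [content, artifact].
// `content` is what the model sees; `artifact` is kept for the application.
type Row = { id: number; name: string };

function queryUsers(sql: string): [string, Row[]] {
  // Stand-in for a real database query.
  const rows: Row[] = [{ id: 1, name: "Ada" }];
  const content = `Query returned ${rows.length} row(s).`;
  return [content, rows];
}

const [content, artifact] = queryUsers("SELECT * FROM users");
// `content` would become ToolMessage.content; `artifact` its artifact field.
```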

#### Best practices

When designing tools to be used by a model, it is important to keep in mind that:

- Chat models that have explicit [tool-calling APIs](/docs/concepts/#functiontool-calling) will be better at tool calling than non-fine-tuned models.
- Models will perform better if the tools have well-chosen names, descriptions, and JSON schemas. This is another form of prompt engineering.
- Simple, narrowly scoped tools are easier for models to use than complex tools.

#### Related

For specifics on how to use tools, see the [tools how-to guides](/docs/how_to/#tools).

To use a pre-built tool, see the [tool integration docs](/docs/integrations/tools/).

### Toolkits

<span data-heading-keywords="toolkit,toolkits"></span>

Toolkits are collections of tools that are designed to be used together for specific tasks. They have convenient loading methods.

All Toolkits expose a `getTools` method which returns an array of tools.
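That interface can be sketched as follows (illustrative only; real LangChain.js toolkits extend a base class provided by the library, and the tool names here are made up):

```typescript
// Illustrative toolkit sketch: a bundle of related tools with getTools().
interface Tool {
  name: string;
  description: string;
}

class BrowserToolkit {
  private tools: Tool[] = [
    { name: "navigate", description: "Open a URL in the browser" },
    { name: "extract_text", description: "Extract visible text from the page" },
  ];

  // Mirrors the getTools() method described above.
  getTools(): Tool[] {
    return this.tools;
  }
}

const toolkit = new BrowserToolkit();
const tools = toolkit.getTools();
```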
@@ -764,7 +851,8 @@ for await (const event of eventStream) {

You can roughly think of it as an iterator over callback events (though the format differs) - and you can use it on almost all LangChain components!

See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.streamEvents()`.
See [this guide](/docs/how_to/streaming/#using-stream-events) for more detailed information on how to use `.streamEvents()`,
or [this guide](/docs/how_to/callbacks_custom_events) for how to stream custom events from within a chain.
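The consumption pattern can be sketched with a plain async generator standing in for a component's `streamEvents()` (the real events carry more fields, such as event names, run ids, and nested data; the event names below are only loosely modeled on them):

```typescript
// Stand-in sketch of iterating a stream of events with for await.
type StreamEvent = { event: string; data?: string };

async function* fakeEventStream(): AsyncGenerator<StreamEvent> {
  // Simulated sequence loosely modeled on streamEvents() output.
  yield { event: "on_chain_start" };
  yield { event: "on_llm_stream", data: "Hel" };
  yield { event: "on_llm_stream", data: "lo" };
  yield { event: "on_chain_end" };
}

async function collectText(): Promise<string> {
  let text = "";
  for await (const e of fakeEventStream()) {
    if (e.event === "on_llm_stream" && e.data) text += e.data; // accumulate tokens
  }
  return text;
}
```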

#### Tokens

3 changes: 3 additions & 0 deletions docs/core_docs/docs/how_to/agent_executor.ipynb
@@ -67,6 +67,9 @@
"```shell\n",
"export LANGCHAIN_TRACING_V2=\"true\"\n",
"export LANGCHAIN_API_KEY=\"...\"\n",
"\n",
"# Reduce tracing latency if you are not in a serverless environment\n",
"# export LANGCHAIN_CALLBACKS_BACKGROUND=true\n",
"```\n"
]
},
