Merge branch 'main' into brace/api-refs-docs
bracesproul authored Nov 9, 2023
2 parents d09eb8b + 6942f10 commit 45814a6
Showing 68 changed files with 2,427 additions and 257 deletions.
7 changes: 6 additions & 1 deletion .vscode/settings.json
@@ -9,5 +9,10 @@
"https://json.schemastore.org/github-workflow.json": "./.github/workflows/deploy.yml"
},
"typescript.tsdk": "node_modules/typescript/lib",
"cSpell.words": ["Upstash"]
"cSpell.words": [
"Upstash"
],
"cSpell.enableFiletypes": [
"mdx"
]
}
182 changes: 182 additions & 0 deletions cookbook/openai_vision_multimodal.ipynb

Large diffs are not rendered by default.

10 changes: 10 additions & 0 deletions docs/docs/expression_language/how_to/cancellation.mdx
@@ -0,0 +1,10 @@
# Cancelling requests

You can cancel an LCEL request by binding a `signal`.

import CodeBlock from "@theme/CodeBlock";
import CancellationExample from "@examples/guides/expression_language/how_to_cancellation.ts";

<CodeBlock language="typescript">{CancellationExample}</CodeBlock>

Note that this will only cancel the outgoing request if the underlying provider exposes that option. If it does, LangChain will cancel the underlying request; otherwise, it will cancel the processing of the response.
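
For reference, here's a rough sketch of the pattern (assuming a simple prompt-to-model chain; the chain itself is illustrative, not taken from the example file above):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { PromptTemplate } from "langchain/prompts";

const prompt = PromptTemplate.fromTemplate("Tell me a joke about {topic}");
const model = new ChatOpenAI({});

const controller = new AbortController();

// Bind the signal to the chain; aborting the controller cancels the run.
const chain = prompt.pipe(model).bind({ signal: controller.signal });

// Abort after one second, e.g. on user navigation or a timeout.
setTimeout(() => controller.abort(), 1000);

try {
  await chain.invoke({ topic: "bears" });
} catch (e) {
  console.log(e); // AbortError if the run was cancelled in time
}
```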
20 changes: 20 additions & 0 deletions docs/docs/integrations/chat/index.mdx
@@ -5,10 +5,30 @@ sidebar_class_name: hidden

# Chat models

<!-- This file is autogenerated. Do not edit directly. -->
<!-- See `scripts/model-docs.table.js` for details -->

## Features (natively supported)

All ChatModels implement the Runnable interface, which comes with default implementations of all methods, i.e. `invoke`, `batch`, `stream`. This gives all ChatModels basic support for invoking, streaming and batching, which by default is implemented as below:

- _Streaming_ support defaults to returning an `AsyncIterator` of a single value: the final result returned by the underlying ChatModel provider. This obviously doesn't give you token-by-token streaming, which requires native support from the ChatModel provider, but it ensures that code expecting an iterator of tokens works for any of our ChatModel integrations.
- _Batch_ support defaults to calling the underlying ChatModel in parallel for each input. The concurrency can be controlled with the `maxConcurrency` key in `RunnableConfig` (see the sketch after this list).
- _Map_ support defaults to calling `.invoke` across all instances of the array on which it was called.
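
For example, here's a minimal sketch of the default batch behavior with bounded concurrency (the model choice and inputs are illustrative):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({});

// Each input becomes a separate parallel call under the hood;
// `maxConcurrency` caps how many run at the same time.
const results = await model.batch(
  ["Tell me a joke", "Tell me a haiku", "Tell me a fun fact"],
  { maxConcurrency: 2 }
);

console.log(results.length); // 3
```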

Each ChatModel integration can optionally provide native implementations of invoke, streaming, or batching, which can be more efficient for providers that support them. The table shows, for each integration, which features have been implemented with native support.

| Model | Invoke | Stream | Batch |
| :---------------------- | :----: | :----: | :---: |
| ChatAnthropic | | | |
| ChatBaiduWenxin | | | |
| ChatCloudflareWorkersAI | | | |
| ChatFireworks | | | |
| ChatGooglePaLM | | | |
| ChatLlamaCpp | | | |
| ChatMinimax | | | |
| ChatOllama | | | |
| ChatOpenAI | | | |
| PromptLayerChatOpenAI | | | |
| PortkeyChat | | | |
| ChatYandexGPT | | | |
25 changes: 25 additions & 0 deletions docs/docs/integrations/chat/openai.mdx
@@ -15,6 +15,31 @@ import OpenAI from "@examples/models/chat/integration_openai.ts";
If you're part of an organization, you can set `process.env.OPENAI_ORGANIZATION` with your OpenAI organization id, or pass it in as `organization` when
initializing the model.

## Multimodal messages

:::info
This feature is currently in preview. The message schema may change in future releases.
:::

OpenAI supports interleaving images with text in input messages with their `gpt-4-vision-preview` model. Here's an example of how this looks:

import OpenAIVision from "@examples/models/chat/integration_openai_vision.ts";

<CodeBlock language="typescript">{OpenAIVision}</CodeBlock>
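
For reference, here's a condensed sketch of the message shape (the image URL is a placeholder; the full example above is canonical):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";
import { HumanMessage } from "langchain/schema";

const model = new ChatOpenAI({
  modelName: "gpt-4-vision-preview",
  maxTokens: 256,
});

// Message content can be an array interleaving text and image parts.
const message = new HumanMessage({
  content: [
    { type: "text", text: "What does this image contain?" },
    {
      type: "image_url",
      image_url: { url: "https://example.com/hotdog.jpg" },
    },
  ],
});

const res = await model.invoke([message]);
console.log(res.content);
```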

## Tool calling

:::info
This feature is currently only available for `gpt-3.5-turbo-1106` and `gpt-4-1106-preview` models.
:::

More recent OpenAI chat models support calling multiple functions to get all required data to answer a question.
Here's an example of how a conversation turn with this functionality might look:

import OpenAITools from "@examples/models/chat/integration_openai_tool_calls.ts";

<CodeBlock language="typescript">{OpenAITools}</CodeBlock>
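
As a rough sketch of the shape of such a call (assuming OpenAI-format tool schemas passed in via `bind`; the tool definition here is illustrative):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({ modelName: "gpt-3.5-turbo-1106" }).bind({
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather in a given location",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string", description: "City and state" },
            unit: { type: "string", enum: ["celsius", "fahrenheit"] },
          },
          required: ["location"],
        },
      },
    },
  ],
});

const res = await model.invoke("What's the weather in Tokyo and Paris?");

// With parallel tool calling, several calls may come back in one turn.
console.log(res.additional_kwargs.tool_calls);
```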

## Custom URLs

You can customize the base URL the SDK sends requests to by passing a `configuration` parameter like this:
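
A minimal sketch of what this might look like (assuming the `configuration` object is forwarded to the OpenAI SDK's client options; the URL is a placeholder):

```typescript
import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({
  configuration: {
    // Placeholder proxy URL for illustration.
    baseURL: "https://your-custom-openai-proxy.example.com/v1",
  },
});
```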
28 changes: 27 additions & 1 deletion docs/docs/integrations/llms/index.mdx
@@ -5,6 +5,9 @@ sidebar_class_name: hidden

# LLMs

<!-- This file is autogenerated. Do not edit directly. -->
<!-- See `scripts/model-docs.table.js` for details -->

## Features (natively supported)

All LLMs implement the Runnable interface, which comes with default implementations of all methods, i.e. `invoke`, `batch`, `stream`, `map`. This gives all LLMs basic support for invoking, streaming, batching and mapping requests, which by default is implemented as below (a sketch of the default streaming behavior follows this list):
@@ -13,4 +16,27 @@
- _Batch_ support defaults to calling the underlying LLM in parallel for each input. The concurrency can be controlled with the `maxConcurrency` key in `RunnableConfig`.
- _Map_ support defaults to calling `.invoke` across all instances of the array on which it was called.
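
For example, here's a minimal sketch of consuming the default streaming behavior (the model and prompt are illustrative):

```typescript
import { OpenAI } from "langchain/llms/openai";

const model = new OpenAI({});

const stream = await model.stream("Tell me a joke.");

// For integrations without native streaming this yields a single chunk
// containing the final result; native integrations yield token-by-token.
for await (const chunk of stream) {
  console.log(chunk);
}
```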

Each LLM integration can optionally provide native implementations for streaming or batch, which, for providers that support it, can be more efficient.
Each LLM integration can optionally provide native implementations of invoke, streaming, or batching, which can be more efficient for providers that support them. The table shows, for each integration, which features have been implemented with native support.

| Model | Invoke | Stream | Batch |
| :-------------------- | :----: | :----: | :---: |
| AI21 | | | |
| AlephAlpha | | | |
| CloudflareWorkersAI | | | |
| Cohere | | | |
| Fireworks | | | |
| GooglePaLM | | | |
| HuggingFaceInference | | | |
| LlamaCpp | | | |
| Ollama | | | |
| OpenAI                |        |        |       |
| OpenAIChat            |        |        |       |
| PromptLayerOpenAI     |        |        |       |
| PromptLayerOpenAIChat |        |        |       |
| Portkey | | | |
| Replicate | | | |
| SageMakerEndpoint | | | |
| Writer | | | |
| YandexGPT | | | |
196 changes: 196 additions & 0 deletions docs/docs/modules/agents/agent_types/openai_assistant.mdx
@@ -0,0 +1,196 @@
# OpenAI Assistant

:::info
The [OpenAI Assistant API](https://platform.openai.com/docs/assistants/overview) is still in beta.
:::

OpenAI released a new API for a conversational, agent-like system called the Assistant API.

You can interact with OpenAI Assistants using OpenAI tools or custom tools. When using exclusively OpenAI tools, you can just invoke the assistant directly and get final answers. When using custom tools, you can run the assistant and tool execution loop using the built-in `AgentExecutor` or write your own executor.
OpenAI assistants currently have access to two tools hosted by OpenAI: [code interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter), and [knowledge retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval).

We've implemented the Assistant API in LangChain with some helpful abstractions. In this guide we'll go over those abstractions and show how to use them to create powerful assistants.

## Creating an assistant

Creating an assistant is easy. Use the `createAssistant` method and pass in a model ID and, optionally, more parameters to further customize your assistant.

```typescript
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";

const assistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-4-1106-preview",
});
const assistantResponse = await assistant.invoke({
  content: "Hello world!",
});
console.log(assistantResponse);
/**
  [
    {
      id: 'msg_OBH60nkVI40V9zY2PlxMzbEI',
      thread_id: 'thread_wKpj4cu1XaYEVeJlx4yFbWx5',
      role: 'assistant',
      content: [
        {
          type: 'text',
          value: 'Hello there! What can I do for you?'
        }
      ],
      assistant_id: 'asst_RtW03Vs6laTwqSSMCQpVND7i',
      run_id: 'run_4Ve5Y9fyKMcSxHbaNHOFvdC6',
    }
  ]
*/
```

If you have an existing assistant, you can pass it directly into the constructor:

```typescript
const assistant = new OpenAIAssistantRunnable({
  assistantId: "asst_RtW03Vs6laTwqSSMCQpVND7i",
  // asAgent: true
});
```

In this next example we'll show how you can turn your assistant into an agent.

## Assistant as an agent

```typescript
import { z } from "zod";
import { AgentExecutor } from "langchain/agents";
import { StructuredTool } from "langchain/tools";
import { OpenAIAssistantRunnable } from "langchain/experimental/openai_assistant";
```

The first step is to define a list of tools you want to pass to your assistant.
Here we'll only define one for simplicity's sake; however, the Assistant API allows for passing in a list of tools, from which the model can use multiple tools at once.
Read more about the run steps lifecycle [here](https://platform.openai.com/docs/assistants/how-it-works/runs-and-run-steps).

:::note
Only models released >= 1106 are able to use multiple tools at once. See the full list of OpenAI models [here](https://platform.openai.com/docs/models).
:::

```typescript
function getCurrentWeather(location: string, _unit = "fahrenheit") {
  // Hardcoded example data; a real tool would call a weather API here.
  if (location.toLowerCase().includes("tokyo")) {
    return JSON.stringify({ location, temperature: "10", unit: "celsius" });
  } else if (location.toLowerCase().includes("san francisco")) {
    return JSON.stringify({ location, temperature: "72", unit: "fahrenheit" });
  } else {
    return JSON.stringify({ location, temperature: "22", unit: "celsius" });
  }
}

class WeatherTool extends StructuredTool {
  schema = z.object({
    location: z.string().describe("The city and state, e.g. San Francisco, CA"),
    unit: z.enum(["celsius", "fahrenheit"]).optional(),
  });

  name = "get_current_weather";

  description = "Get the current weather in a given location";

  async _call(input: { location: string; unit?: string }) {
    const { location, unit } = input;
    return getCurrentWeather(location, unit);
  }
}

const tools = [new WeatherTool()];
```

In the above code we've defined three things:

- A function for the agent to call if the model requests it.
- A tool class which we'll pass to the `AgentExecutor`.
- The tool list we can pass to our `OpenAIAssistantRunnable` and `AgentExecutor`.

Next, we construct the `OpenAIAssistantRunnable` and pass it to the `AgentExecutor`.

```typescript
const agent = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a weather bot. Use the provided functions to answer questions.",
  name: "Weather Assistant",
  tools,
  asAgent: true,
});
const agentExecutor = AgentExecutor.fromAgentAndTools({
  agent,
  tools,
});
```

Note how we're setting `asAgent` to `true`. This input parameter tells the `OpenAIAssistantRunnable` to return different, agent-acceptable outputs for actions or finished conversations.

Above, we're also doing something a little different from the first example by passing in input parameters for `instructions` and `name`.
These are optional parameters: the instructions are passed as extra context to the model, and the name is used to identify the assistant in the OpenAI dashboard.

Finally, to invoke our executor we call the `.invoke` method in exactly the same way as we did in the first example.

```typescript
const assistantResponse = await agentExecutor.invoke({
  content: "What's the weather in Tokyo and San Francisco?",
});
console.log(assistantResponse);
/**
  {
    output: 'The current weather in San Francisco is 72°F, and in Tokyo, it is 10°C.'
  }
*/
```

Here we asked a question which contains two sub-questions: `What's the weather in Tokyo?` and `What's the weather in San Francisco?`.
In order to answer it, the `OpenAIAssistantRunnable` returned a set of function call arguments for each sub-question, demonstrating its ability to call multiple functions at once.

## Assistant tools

OpenAI currently offers two tools for the Assistant API: a [code interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) and a [knowledge retrieval](https://platform.openai.com/docs/assistants/tools/knowledge-retrieval) tool.
You can give the assistant access to these tools simply by passing them in as part of the `tools` parameter when creating the assistant.

```typescript
const assistant = await OpenAIAssistantRunnable.createAssistant({
  model: "gpt-3.5-turbo-1106",
  instructions:
    "You are a helpful assistant that provides answers to math problems.",
  name: "Math Assistant",
  tools: [{ type: "code_interpreter" }],
});
```

Since we're passing `code_interpreter` as a tool, the assistant will now be able to execute Python code, allowing for more complex tasks that normal LLMs are not capable of doing well, like math.

```typescript
const assistantResponse = await assistant.invoke({
  content: "What's 10 - 4 raised to the 2.7",
});
console.log(assistantResponse);
/**
  [
    {
      id: 'msg_OBH60nkVI40V9zY2PlxMzbEI',
      thread_id: 'thread_wKpj4cu1XaYEVeJlx4yFbWx5',
      role: 'assistant',
      content: [
        {
          type: 'text',
          text: {
            value: 'The result of 10 - 4 raised to the 2.7 is approximately -32.22.',
            annotations: []
          }
        }
      ],
      assistant_id: 'asst_RtW03Vs6laTwqSSMCQpVND7i',
      run_id: 'run_4Ve5Y9fyKMcSxHbaNHOFvdC6',
    }
  ]
*/
```

Here the assistant was able to utilize the `code_interpreter` tool to calculate the answer to our question.
1 change: 1 addition & 0 deletions environment_tests/test-exports-bun/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-cf/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-cjs/src/entrypoints.js
@@ -90,6 +90,7 @@ const util_document = require("langchain/util/document");
const util_math = require("langchain/util/math");
const util_time = require("langchain/util/time");
const experimental_autogpt = require("langchain/experimental/autogpt");
const experimental_openai_assistant = require("langchain/experimental/openai_assistant");
const experimental_babyagi = require("langchain/experimental/babyagi");
const experimental_generative_agents = require("langchain/experimental/generative_agents");
const experimental_plan_and_execute = require("langchain/experimental/plan_and_execute");
1 change: 1 addition & 0 deletions environment_tests/test-exports-esbuild/src/entrypoints.js
@@ -90,6 +90,7 @@ import * as util_document from "langchain/util/document";
import * as util_math from "langchain/util/math";
import * as util_time from "langchain/util/time";
import * as experimental_autogpt from "langchain/experimental/autogpt";
import * as experimental_openai_assistant from "langchain/experimental/openai_assistant";
import * as experimental_babyagi from "langchain/experimental/babyagi";
import * as experimental_generative_agents from "langchain/experimental/generative_agents";
import * as experimental_plan_and_execute from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-esm/src/entrypoints.js
@@ -90,6 +90,7 @@ import * as util_document from "langchain/util/document";
import * as util_math from "langchain/util/math";
import * as util_time from "langchain/util/time";
import * as experimental_autogpt from "langchain/experimental/autogpt";
import * as experimental_openai_assistant from "langchain/experimental/openai_assistant";
import * as experimental_babyagi from "langchain/experimental/babyagi";
import * as experimental_generative_agents from "langchain/experimental/generative_agents";
import * as experimental_plan_and_execute from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-vercel/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
1 change: 1 addition & 0 deletions environment_tests/test-exports-vite/src/entrypoints.js
@@ -90,6 +90,7 @@ export * from "langchain/util/document";
export * from "langchain/util/math";
export * from "langchain/util/time";
export * from "langchain/experimental/autogpt";
export * from "langchain/experimental/openai_assistant";
export * from "langchain/experimental/babyagi";
export * from "langchain/experimental/generative_agents";
export * from "langchain/experimental/plan_and_execute";
Binary file added examples/hotdog.jpg
