docs: update concept docs (#726)
vbarda authored Dec 10, 2024
1 parent fd2f6df commit 67e962a
Showing 2 changed files with 98 additions and 80 deletions.
55 changes: 1 addition & 54 deletions docs/docs/concepts/low_level.md
@@ -406,59 +406,6 @@ const myNode = (state: typeof StateAnnotation.State) => {
};
```

A `Command` has the following properties:

| Property | Description |
| --- | --- |
| `graph` | Graph to send the command to. Supported values:<br>- `None`: the current graph (default)<br>- `Command.PARENT`: closest parent graph |
| `update` | Update to apply to the graph's state. |
| `resume` | Value to resume execution with. To be used together with [`interrupt()`](https://langchain-ai.github.io/langgraphjs/reference/functions/langgraph.interrupt-1.html). |
| `goto` | Can be one of the following:<br>- name of the node to navigate to next (any node that belongs to the specified `graph`)<br>- sequence of node names to navigate to next<br>- [`Send`](https://langchain-ai.github.io/langgraphjs/reference/classes/langgraph.Send.html) object (to execute a node with the input provided)<br>- sequence of `Send` objects<br>If `goto` is not specified and there are no other tasks left in the graph, the graph will halt after executing the current superstep. |

Here's a complete example:

```ts
import { StateGraph, Annotation, Command } from "@langchain/langgraph";

const StateAnnotation = Annotation.Root({
foo: Annotation<string>,
});

const myNode = async (state: typeof StateAnnotation.State) => {
return new Command({
// state update
update: {
foo: "bar",
},
// control flow
goto: "myOtherNode",
});
};

const myOtherNode = async (state: typeof StateAnnotation.State) => {
return {
foo: state.foo + "baz"
};
};

const graph = new StateGraph(StateAnnotation)
.addNode("myNode", myNode, {
// For compiling and validating the graph
ends: ["myOtherNode"],
})
.addNode("myOtherNode", myOtherNode)
.addEdge("__start__", "myNode")
.compile();

await graph.invoke({
foo: "",
});
```

```ts
{ foo: "barbaz" }
```

With `Command` you can also achieve dynamic control flow behavior, equivalent to [conditional edges](#conditional-edges).

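As a library-free sketch of this routing mechanic (all names here are illustrative, not part of the LangGraph API), a node can return a `Command`-like object carrying a state update plus the name of the next node, and a small driver loop follows the hand-offs:

```typescript
// Library-free sketch of Command-style routing: each node returns a state
// update plus the name of the next node; a tiny driver applies the update
// and follows the hand-off. All names are illustrative, not LangGraph API.
type State = { foo: string };
type Cmd = { update?: Partial<State>; goto?: string };

const nodes: Record<string, (s: State) => Cmd> = {
  myNode: () => ({ update: { foo: "bar" }, goto: "myOtherNode" }),
  myOtherNode: (s) => ({ update: { foo: s.foo + "baz" } }), // no goto: halt
};

function run(start: string, state: State): State {
  let current: string | undefined = start;
  while (current) {
    const cmd = nodes[current](state);
    state = { ...state, ...cmd.update };
    current = cmd.goto; // an undefined goto ends the run
  }
  return state;
}

console.log(run("myNode", { foo: "" })); // { foo: "barbaz" }
```

The real `Command` additionally supports `resume`, `graph`, and `Send`-based `goto` values, as described in the table above.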

!!! important

When returning `Command` in your node functions, you must also add an `ends` parameter with the list of node names the node is routing to, e.g. `.addNode("myNode", myNode, { ends: ["nodeA", "nodeB"] })`. This is necessary for graph compilation and validation, and indicates that `myNode` can navigate to `nodeA` and `nodeB`.
When returning `Command` in your node functions, you must also add an `ends` parameter with the list of node names the node is routing to, e.g. `.addNode("myNode", myNode, { ends: ["myOtherNode"] })`. This is necessary for graph compilation and validation, and indicates that `myNode` can navigate to `myOtherNode`.

Check out this [how-to guide](../how-tos/command.ipynb) for an end-to-end example of how to use `Command`.

123 changes: 97 additions & 26 deletions docs/docs/concepts/multi_agent.md
@@ -27,58 +27,129 @@ There are several ways to connect agents in a multi-agent system:

### Network

In this architecture, agents are defined as graph nodes. Each agent can communicate with every other agent (many-to-many connections) and can decide which agent to call next. While very flexible, this architecture doesn't scale well as the number of agents grows:
In this architecture, agents are defined as graph nodes. Each agent can communicate with every other agent (many-to-many connections) and can decide which agent to call next. This architecture is good for problems that do not have a clear hierarchy of agents or a specific sequence in which agents should be called.

- hard to enforce which agent should be called next
- hard to determine how much [information](#shared-message-list) should be passed between the agents
```ts
import {
StateGraph,
Annotation,
MessagesAnnotation,
Command
} from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
model: "gpt-4o-mini",
});

We recommend avoiding this architecture in production and using one of the below architectures instead.
const agent1 = async (state: typeof MessagesAnnotation.State) => {
// you can pass relevant parts of the state to the LLM (e.g., state.messages)
// to determine which agent to call next. a common pattern is to call the model
// with a structured output (e.g. force it to return an output with a "next_agent" field)
const response = await model.withStructuredOutput(...).invoke(...);
return new Command({
update: {
messages: [response.content],
},
goto: response.next_agent,
});
};

const agent2 = async (state: typeof MessagesAnnotation.State) => {
const response = await model.withStructuredOutput(...).invoke(...);
return new Command({
update: {
messages: [response.content],
},
goto: response.next_agent,
});
};

const agent3 = async (state: typeof MessagesAnnotation.State) => {
  // ... call the model with structured output, as above, to produce a `response`
return new Command({
update: {
messages: [response.content],
},
goto: response.next_agent,
});
};

const graph = new StateGraph(MessagesAnnotation)
.addNode("agent1", agent1, {
    ends: ["agent2", "agent3", "__end__"],
})
.addNode("agent2", agent2, {
ends: ["agent1", "agent3", "__end__"],
})
.addNode("agent3", agent3, {
ends: ["agent1", "agent2", "__end__"],
})
.addEdge("__start__", "agent1")
.compile();
```

### Supervisor

In this architecture, we define agents as nodes and add a supervisor node (LLM) that decides which agent nodes should be called next. We use [conditional edges](./low_level.md#conditional-edges) to route execution to the appropriate agent node based on supervisor's decision. This architecture also lends itself well to running multiple agents in parallel or using [map-reduce](../how-tos/map-reduce.ipynb) pattern.
In this architecture, we define agents as nodes and add a supervisor node (LLM) that decides which agent nodes should be called next. We use [`Command`](./low_level.md#command) to route execution to the appropriate agent node based on the supervisor's decision. This architecture also lends itself well to running multiple agents in parallel or using the [map-reduce](../how-tos/map-reduce.ipynb) pattern.

```ts
import {
StateGraph,
Annotation,
MessagesAnnotation,
Command,
} from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
model: "gpt-4o-mini",
});

const StateAnnotation = Annotation.Root({
...MessagesAnnotation.spec,
next: Annotation<"agent1" | "agent2">,
});

const supervisor = async (state: typeof StateAnnotation.State) => {
const supervisor = async (state: typeof MessagesAnnotation.State) => {
// you can pass relevant parts of the state to the LLM (e.g., state.messages)
// to determine which agent to call next. a common pattern is to call the model
// with a structured output (e.g. force it to return an output with a "next_agent" field)
const response = await model.withStructuredOutput(...).invoke(...);
return { next: response.next_agent };
// route to one of the agents or exit based on the supervisor's decision
// if the supervisor returns "__end__", the graph will finish execution
return new Command({
goto: response.next_agent,
});
};

const agent1 = async (state: typeof StateAnnotation.State) => {
const agent1 = async (state: typeof MessagesAnnotation.State) => {
// you can pass relevant parts of the state to the LLM (e.g., state.messages)
// and add any additional logic (different models, custom prompts, structured output, etc.)
const response = await model.invoke(...);
return { messages: [response] };
return new Command({
goto: "supervisor",
update: {
messages: [response],
},
});
};

const agent2 = async (state: typeof StateAnnotation.State) => {
const agent2 = async (state: typeof MessagesAnnotation.State) => {
const response = await model.invoke(...);
return { messages: [response] };
return new Command({
goto: "supervisor",
update: {
messages: [response],
},
});
};

const graph = new StateGraph(StateAnnotation)
.addNode("supervisor", supervisor)
.addNode("agent1", agent1)
.addNode("agent2", agent2)
const graph = new StateGraph(MessagesAnnotation)
.addNode("supervisor", supervisor, {
ends: ["agent1", "agent2", "__end__"],
})
.addNode("agent1", agent1, {
ends: ["supervisor"],
})
.addNode("agent2", agent2, {
ends: ["supervisor"],
})
.addEdge("__start__", "supervisor")
  // route to one of the agents or exit based on the supervisor's decision
.addConditionalEdges("supervisor", async (state) => state.next)
.addEdge("agent1", "supervisor")
.addEdge("agent2", "supervisor")
.compile();
```
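As a library-free sketch of this hub-and-spoke flow (names are illustrative, not LangGraph API), every worker hands control back to the supervisor, which either picks the next worker or ends the run:

```typescript
// Library-free sketch of the supervisor pattern: workers always return to the
// hub, and the hub decides who runs next (or ends). Names are illustrative.
type State = { messages: string[] };

const END = "__end__";

// a stand-in for the LLM-backed, structured-output routing decision
const pickNext = (s: State): string =>
  s.messages.length === 0 ? "agent1" : s.messages.length === 1 ? "agent2" : END;

const workers: Record<string, (s: State) => State> = {
  agent1: (s) => ({ messages: [...s.messages, "agent1: done"] }),
  agent2: (s) => ({ messages: [...s.messages, "agent2: done"] }),
};

function runSupervisor(state: State): State {
  while (true) {
    const next = pickNext(state); // supervisor turn
    if (next === END) return state;
    state = workers[next](state); // worker turn, then back to the hub
  }
}

console.log(runSupervisor({ messages: [] }).messages);
// ["agent1: done", "agent2: done"]
```

Here `pickNext` stands in for the structured-output call the supervisor node makes in the real graph.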

@@ -90,7 +161,7 @@ In this architecture we add individual agents as graph nodes and define the order

- **Explicit control flow (normal edges)**: LangGraph allows you to explicitly define the control flow of your application (i.e. the sequence of how agents communicate) via [normal graph edges](./low_level.md#normal-edges). This is the most deterministic variant of this architecture — we always know which agent will be called next ahead of time.

- **Dynamic control flow (conditional edges)**: in LangGraph you can allow LLMs to decide parts of your application control flow. This can be achieved by using [conditional edges](./low_level.md#conditional-edges).
- **Dynamic control flow (conditional edges)**: in LangGraph you can allow LLMs to decide parts of your application control flow. This can be achieved by using [`Command`](./low_level.md#command).
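The two variants differ only in who chooses the next step. A library-free sketch of the explicit variant is just a fixed pipeline (names are illustrative, not LangGraph API):

```typescript
// Library-free sketch of explicit control flow: the agent order is fixed in
// code, like normal graph edges. Names are illustrative, not LangGraph API.
type State = { messages: string[] };

const researcher = (s: State): State => ({ messages: [...s.messages, "research notes"] });
const writer = (s: State): State => ({ messages: [...s.messages, "draft"] });

// the edge list is just a fixed sequence — we always know who runs next
const pipeline = [researcher, writer];

const run = (state: State): State => pipeline.reduce((s, step) => step(s), state);

console.log(run({ messages: [] }).messages); // ["research notes", "draft"]
```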

