docs: migration guide nits (#25600)
efriis authored Aug 22, 2024
1 parent 3981d73 commit e958f76
Showing 14 changed files with 175 additions and 177 deletions.
12 changes: 2 additions & 10 deletions docs/docs/versions/migrating_chains/constitutional_chain.ipynb
@@ -1,20 +1,12 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
"metadata": {},
"source": [
"---\n",
"title: Migrating from ConstitutionalChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"# Migrating from ConstitutionalChain\n",
"\n",
"[ConstitutionalChain](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.base.ConstitutionalChain.html) allowed for a LLM to critique and revise generations based on [principles](https://api.python.langchain.com/en/latest/chains/langchain.chains.constitutional_ai.models.ConstitutionalPrinciple.html), structured as combinations of critique and revision requests. For example, a principle might include a request to identify harmful content, and a request to rewrite the content.\n",
"\n",
"`Constitutional AI principles` are based on the [Constitutional AI: Harmlessness from AI Feedback](https://arxiv.org/pdf/2212.08073) paper.\n",
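For orientation, a minimal sketch of the legacy `ConstitutionalChain` usage that this guide migrates away from, assuming the deprecated classes are still importable from `langchain` and an OpenAI key is configured; the principle text and prompts are illustrative, not taken from the notebook:

```python
# Legacy ConstitutionalChain (deprecated): an inner LLMChain produces an answer,
# then the chain critiques and revises it against each principle.
from langchain.chains import ConstitutionalChain, LLMChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
qa_chain = LLMChain(llm=llm, prompt=PromptTemplate.from_template("{question}"))

constitutional_chain = ConstitutionalChain.from_llm(
    llm=llm,
    chain=qa_chain,
    constitutional_principles=[
        ConstitutionalPrinciple(
            name="harmful",
            critique_request="Identify anything harmful in the answer.",
            revision_request="Rewrite the answer to remove harmful content.",
        )
    ],
)

constitutional_chain.invoke({"question": "How can I prank my coworker?"})
```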
14 changes: 3 additions & 11 deletions docs/docs/versions/migrating_chains/conversation_chain.ipynb
@@ -1,21 +1,13 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "030d95bc-2f9d-492b-8245-b791b866936b",
"metadata": {},
"source": [
"---\n",
"title: Migrating from ConversationalChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "d20aeaad-b3ca-4a7d-b02d-3267503965af",
"metadata": {},
"source": [
"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporates a memory of previous messages to sustain a stateful conversation.\n",
"# Migrating from ConversationalChain\n",
"\n",
"[`ConversationChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html) incorporated a memory of previous messages to sustain a stateful conversation.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
"\n",
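As a companion to the cells above, a sketch of the message-history pattern that commonly replaces `ConversationChain`; the in-memory store, session handling, and prompt are illustrative assumptions rather than the notebook's exact code:

```python
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder(variable_name="history"),
        ("human", "{input}"),
    ]
)
chain = prompt | ChatOpenAI()

store = {}  # maps session_id -> chat history


def get_history(session_id: str) -> InMemoryChatMessageHistory:
    return store.setdefault(session_id, InMemoryChatMessageHistory())


conversational_chain = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

conversational_chain.invoke(
    {"input": "Hi, I'm Bob."},
    config={"configurable": {"session_id": "session-1"}},
)
```

Unlike the legacy chain, the history store here lives outside the runnable, so the same chain can serve many independent sessions.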
docs/docs/versions/migrating_chains/conversation_retrieval_chain.ipynb
@@ -1,30 +1,29 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9e279999-6bf0-4a48-9e06-539b916dc705",
"metadata": {},
"source": [
"---\n",
"title: Migrating from ConversationalRetrievalChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "292a3c83-44a9-4426-bbec-f1a778d00d93",
"metadata": {},
"source": [
"# Migrating from ConversationalRetrievalChain\n",
"\n",
"The [`ConversationalRetrievalChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html) was an all-in one way that combined retrieval-augmented generation with chat history, allowing you to \"chat with\" your documents.\n",
"\n",
"Advantages of switching to the LCEL implementation are similar to the `RetrievalQA` section above:\n",
"Advantages of switching to the LCEL implementation are similar to the [`RetrievalQA` migration guide](./retrieval_qa.ipynb):\n",
"\n",
"- Clearer internals. The `ConversationalRetrievalChain` chain hides an entire question rephrasing step which dereferences the initial query against the chat history.\n",
" - This means the class contains two sets of configurable prompts, LLMs, etc.\n",
"- More easily return source documents.\n",
"- Support for runnable methods like streaming and async operations.\n",
"\n",
"Here are side-by-side implementations with custom prompts. We'll reuse the loaded documents and vector store from the previous section:"
"Here are equivalent implementations with custom prompts.\n",
"We'll use the following ingestion code to load a [blog post by Lilian Weng](https://lilianweng.github.io/posts/2023-06-23-agent/) on autonomous agents into a local vector store:\n",
"\n",
"## Shared setup\n",
"\n",
"For both versions, we'll need to load the data with the `WebBaseLoader` document loader, split it with `RecursiveCharacterTextSplitter`, and add it to an in-memory `FAISS` vector store.\n",
"\n",
"We will also instantiate a chat model to use."
]
},
{
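A plausible version of the shared setup described in the cell above, assuming `langchain-community`, `langchain-openai`, `langchain-text-splitters`, `faiss-cpu`, and `beautifulsoup4` are installed; the chunking parameters and chat model are assumptions, not necessarily the notebook's exact values:

```python
# Load the blog post, split it, and index it in an in-memory FAISS vector store.
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter

loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
docs = loader.load()

text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
splits = text_splitter.split_documents(docs)

vectorstore = FAISS.from_documents(splits, embedding=OpenAIEmbeddings())
retriever = vectorstore.as_retriever()

# Chat model used by both the legacy and LCEL implementations.
llm = ChatOpenAI()
```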
89 changes: 89 additions & 0 deletions docs/docs/versions/migrating_chains/index.ipynb
@@ -0,0 +1,89 @@
{
"cells": [
{
"cell_type": "raw",
"metadata": {
"vscode": {
"languageId": "raw"
}
},
"source": [
"---\n",
"sidebar_position: 1\n",
"---"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# How to migrate from v0.0 chains\n",
"\n",
"LangChain has evolved since its initial release, and many of the original \"Chain\" classes \n",
"have been deprecated in favor of the more flexible and powerful frameworks of LCEL and LangGraph. \n",
"\n",
"This guide will help you migrate your existing v0.0 chains to the new abstractions.\n",
"\n",
":::info How deprecated implementations work\n",
"Even though many of these implementations are deprecated, they are **still supported** in the codebase. \n",
"However, they are not recommended for new development, and we recommend re-implementing them using the following guides!\n",
"\n",
"To see the planned removal version for each deprecated implementation, check their API reference.\n",
":::\n",
"\n",
":::info Prerequisites\n",
"\n",
"These guides assume some familiarity with the following concepts:\n",
"- [LangChain Expression Language](/docs/concepts#langchain-expression-language-lcel)\n",
"- [LangGraph](https://langchain-ai.github.io/langgraph/)\n",
":::\n",
"\n",
"LangChain maintains a number of legacy abstractions. Many of these can be reimplemented via short combinations of LCEL and LangGraph primitives.\n",
"\n",
"### LCEL\n",
"[LCEL](/docs/concepts/#langchain-expression-language-lcel) is designed to streamline the process of building useful apps with LLMs and combining related components. It does this by providing:\n",
"\n",
"1. **A unified interface**: Every LCEL object implements the `Runnable` interface, which defines a common set of invocation methods (`invoke`, `batch`, `stream`, `ainvoke`, ...). This makes it possible to also automatically and consistently support useful operations like streaming of intermediate steps and batching, since every chain composed of LCEL objects is itself an LCEL object.\n",
"2. **Composition primitives**: LCEL provides a number of primitives that make it easy to compose chains, parallelize components, add fallbacks, dynamically configure chain internals, and more.\n",
"\n",
"### LangGraph\n",
"[LangGraph](https://langchain-ai.github.io/langgraph/), built on top of LCEL, allows for performant orchestrations of application components while maintaining concise and readable code. It includes built-in persistence, support for cycles, and prioritizes controllability.\n",
"If LCEL grows unwieldy for larger or more complex chains, they may benefit from a LangGraph implementation.\n",
"\n",
"### Advantages\n",
"Using these frameworks for existing v0.0 chains confers some advantages:\n",
"\n",
"- The resulting chains typically implement the full `Runnable` interface, including streaming and asynchronous support where appropriate;\n",
"- The chains may be more easily extended or modified;\n",
"- The parameters of the chain are typically surfaced for easier customization (e.g., prompts) over previous versions, which tended to be subclasses and had opaque parameters and internals.\n",
"- If using LangGraph, the chain supports built-in persistence, allowing for conversational experiences via a \"memory\" of the chat history.\n",
"- If using LangGraph, the steps of the chain can be streamed, allowing for greater control and customizability.\n",
"\n",
"\n",
"The below pages assist with migration from various specific chains to LCEL and LangGraph:\n",
"\n",
"- [LLMChain](./llm_chain.ipynb)\n",
"- [ConversationChain](./conversation_chain.ipynb)\n",
"- [RetrievalQA](./retrieval_qa.ipynb)\n",
"- [ConversationalRetrievalChain](./conversation_retrieval_chain.ipynb)\n",
"- [StuffDocumentsChain](./stuff_docs_chain.ipynb)\n",
"- [MapReduceDocumentsChain](./map_reduce_chain.ipynb)\n",
"- [MapRerankDocumentsChain](./map_rerank_docs_chain.ipynb)\n",
"- [RefineDocumentsChain](./refine_docs_chain.ipynb)\n",
"- [LLMRouterChain](./llm_router_chain.ipynb)\n",
"- [MultiPromptChain](./multi_prompt_chain.ipynb)\n",
"- [LLMMathChain](./llm_math_chain.ipynb)\n",
"- [ConstitutionalChain](./constitutional_chain.ipynb)\n",
"\n",
"Check out the [LCEL conceptual docs](/docs/concepts/#langchain-expression-language-lcel) and [LangGraph docs](https://langchain-ai.github.io/langgraph/) for more background information."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
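To ground the LCEL description in the new index page above, a minimal sketch of composing Runnables, assuming `langchain-openai` is installed and `OPENAI_API_KEY` is set; the prompt is illustrative:

```python
# Each component is a Runnable, and so is the composed chain, so invoke,
# batch, and stream are all available without extra code.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([("user", "Tell me a {adjective} joke")])
chain = prompt | ChatOpenAI() | StrOutputParser()

chain.invoke({"adjective": "funny"})
chain.batch([{"adjective": "funny"}, {"adjective": "dry"}])
for chunk in chain.stream({"adjective": "funny"}):
    print(chunk, end="", flush=True)
```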
51 changes: 0 additions & 51 deletions docs/docs/versions/migrating_chains/index.mdx

This file was deleted.

59 changes: 41 additions & 18 deletions docs/docs/versions/migrating_chains/llm_chain.ipynb
@@ -1,20 +1,12 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
"metadata": {},
"source": [
"---\n",
"title: Migrating from LLMChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"# Migrating from LLMChain\n",
"\n",
"[`LLMChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm.LLMChain.html) combined a prompt template, LLM, and output parser into a class.\n",
"\n",
"Some advantages of switching to the LCEL implementation are:\n",
@@ -36,15 +28,16 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"id": "717c8673",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from getpass import getpass\n",
"\n",
"os.environ[\"OPENAI_API_KEY\"] = getpass()"
"if \"OPENAI_API_KEY\" not in os.environ:\n",
" os.environ[\"OPENAI_API_KEY\"] = getpass()"
]
},
{
@@ -59,7 +52,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 5,
"id": "f91c9809-8ee7-4e38-881d-0ace4f6ea883",
"metadata": {},
"outputs": [
@@ -70,7 +63,7 @@
" 'text': \"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\"}"
]
},
"execution_count": 2,
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
@@ -84,9 +77,39 @@
" [(\"user\", \"Tell me a {adjective} joke\")],\n",
")\n",
"\n",
"chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
"legacy_chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)\n",
"\n",
"chain({\"adjective\": \"funny\"})"
"legacy_result = legacy_chain({\"adjective\": \"funny\"})\n",
"legacy_result"
]
},
{
"cell_type": "markdown",
"id": "9f89e97b",
"metadata": {},
"source": [
"Note that `LLMChain` by default returned a `dict` containing both the input and the output from `StrOutputParser`, so to extract the output, you need to access the `\"text\"` key."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "c7fa1618",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Why couldn't the bicycle stand up by itself?\\n\\nBecause it was two tired!\""
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"legacy_result[\"text\"]"
]
},
{
@@ -137,7 +160,7 @@
"id": "3c0b0513-77b8-4371-a20e-3e487cec7e7f",
"metadata": {},
"source": [
"Note that `LLMChain` by default returns a `dict` containing both the input and the output. If this behavior is desired, we can replicate it using another LCEL primitive, [`RunnablePassthrough`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html):"
"If you'd like to mimic the `dict` packaging of input and output in `LLMChain`, you can use a [`RunnablePassthrough.assign`](https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePassthrough.html) like:"
]
},
{
@@ -197,7 +220,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.4"
"version": "3.11.4"
}
},
"nbformat": 4,
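A sketch of the LCEL replacement the guide describes, including the `RunnablePassthrough.assign` pattern referenced above for mimicking `LLMChain`'s dict output; the exact cells behind the truncated hunks may differ:

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([("user", "Tell me a {adjective} joke")])

# LCEL equivalent of the legacy LLMChain: returns the string output directly.
chain = prompt | ChatOpenAI() | StrOutputParser()
chain.invoke({"adjective": "funny"})

# To mimic LLMChain's dict of inputs plus a "text" key, assign the chain's
# output onto the passed-through input.
outer_chain = RunnablePassthrough.assign(text=chain)
outer_chain.invoke({"adjective": "funny"})
# -> {'adjective': 'funny', 'text': "..."}
```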
12 changes: 2 additions & 10 deletions docs/docs/versions/migrating_chains/llm_math_chain.ipynb
@@ -1,20 +1,12 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "b57124cc-60a0-4c18-b7ce-3e483d1024a2",
"metadata": {},
"source": [
"---\n",
"title: Migrating from LLMMathChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "ce8457ed-c0b1-4a74-abbd-9d3d2211270f",
"metadata": {},
"source": [
"# Migrating from LLMMathChain\n",
"\n",
"[`LLMMathChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.llm_math.base.LLMMathChain.html) enabled the evaluation of mathematical expressions generated by a LLM. Instructions for generating the expressions were formatted into the prompt, and the expressions were parsed out of the string response before evaluation using the [numexpr](https://numexpr.readthedocs.io/en/latest/user_guide.html) library.\n",
"\n",
"This is more naturally achieved via [tool calling](/docs/concepts/#functiontool-calling). We can equip a chat model with a simple calculator tool leveraging `numexpr` and construct a simple chain around it using [LangGraph](https://langchain-ai.github.io/langgraph/). Some advantages of this approach include:\n",
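A sketch of the tool-calling approach the guide describes, assuming `numexpr`, `langchain-openai`, and `langgraph` are installed; the prebuilt `create_react_agent` and the model name are stand-in assumptions rather than the notebook's exact graph:

```python
import math

import numexpr
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent


@tool
def calculator(expression: str) -> str:
    """Evaluate a single-line math expression, e.g. "37593 * 67"."""
    result = numexpr.evaluate(
        expression.strip(),
        global_dict={},  # restrict access to globals
        local_dict={"pi": math.pi, "e": math.e},
    )
    return str(result)


# The agent calls the calculator when the model emits a tool call, then feeds
# the numeric result back to the model for the final answer.
agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), [calculator])
agent.invoke({"messages": [("user", "What is 551368 divided by 82?")]})
```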
12 changes: 2 additions & 10 deletions docs/docs/versions/migrating_chains/llm_router_chain.ipynb
@@ -1,20 +1,12 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "575befea-4d98-4941-8e55-1581b169a674",
"metadata": {},
"source": [
"---\n",
"title: Migrating from LLMRouterChain\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "14625d35-efca-41cf-b203-be9f4c375700",
"metadata": {},
"source": [
"# Migrating from LLMRouterChain\n",
"\n",
"The [`LLMRouterChain`](https://api.python.langchain.com/en/latest/chains/langchain.chains.router.llm_router.LLMRouterChain.html) routed an input query to one of multiple destinations-- that is, given an input query, it used a LLM to select from a list of destination chains, and passed its inputs to the selected chain.\n",
"\n",
"`LLMRouterChain` does not support common [chat model](/docs/concepts/#chat-models) features, such as message roles and [tool calling](/docs/concepts/#functiontool-calling). Under the hood, `LLMRouterChain` routes a query by instructing the LLM to generate JSON-formatted text, and parsing out the intended destination.\n",
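A sketch of one modern routing pattern along the lines this guide suggests: a chat model chooses a destination via structured output, and the caller dispatches on the result. The destination names and model are illustrative assumptions:

```python
from typing import Literal

from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI


class RouteQuery(BaseModel):
    """Pick the destination best suited to answer the user's query."""

    destination: Literal["animal", "vegetable"] = Field(
        description="Which expert should handle the query."
    )


prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Route the user's question to the most relevant destination."),
        ("human", "{query}"),
    ]
)
router = prompt | ChatOpenAI(model="gpt-4o-mini").with_structured_output(RouteQuery)

choice = router.invoke({"query": "What color are carrots?"})
print(choice.destination)  # expected: "vegetable"
# Downstream, look the destination up in a dict of runnables and invoke it.
```

Because the routing decision is structured output rather than free-form JSON text, there is no string parsing step to break, and message roles and tool calling remain available to the destination chains.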