diff --git a/.github/workflows/check_diffs.yml b/.github/workflows/check_diffs.yml
index 61c921c03b9e4..8e3bdadff8861 100644
--- a/.github/workflows/check_diffs.yml
+++ b/.github/workflows/check_diffs.yml
@@ -5,6 +5,7 @@ on:
push:
branches: [master]
pull_request:
+ merge_group:
# If another push to the same PR or branch happens while this workflow is still running,
# cancel the earlier run in favor of the next run.
diff --git a/README.md b/README.md
index 23528cacfb5a6..dd8643d4b0a40 100644
--- a/README.md
+++ b/README.md
@@ -39,14 +39,16 @@ conda install langchain -c conda-forge
For these applications, LangChain simplifies the entire application lifecycle:
-- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel), [components](https://python.langchain.com/docs/concepts/), and [third-party integrations](https://python.langchain.com/docs/integrations/providers/).
+- **Open-source libraries**: Build your applications using LangChain's open-source
+[components](https://python.langchain.com/docs/concepts/) and
+[third-party integrations](https://python.langchain.com/docs/integrations/providers/).
Use [LangGraph](https://langchain-ai.github.io/langgraph/) to build stateful agents with first-class streaming and human-in-the-loop support.
- **Productionization**: Inspect, monitor, and evaluate your apps with [LangSmith](https://docs.smith.langchain.com/) so that you can constantly optimize and deploy with confidence.
- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Platform](https://langchain-ai.github.io/langgraph/cloud/).
### Open-source libraries
-- **`langchain-core`**: Base abstractions and LangChain Expression Language.
+- **`langchain-core`**: Base abstractions.
- **Integration packages** (e.g. **`langchain-openai`**, **`langchain-anthropic`**, etc.): Important integrations have been split into lightweight packages that are co-maintained by the LangChain team and the integration developers.
- **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- **`langchain-community`**: Third-party integrations that are community maintained.
@@ -86,19 +88,12 @@ And much more! Head to the [Tutorials](https://python.langchain.com/docs/tutoria
The main value props of the LangChain libraries are:
-1. **Components**: composable building blocks, tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
-2. **Off-the-shelf chains**: built-in assemblages of components for accomplishing higher-level tasks
-
-Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
-
-## LangChain Expression Language (LCEL)
-
-LCEL is a key part of LangChain, allowing you to build and organize chains of processes in a straightforward, declarative manner. It was designed to support taking prototypes directly into production without needing to alter any code. This means you can use LCEL to set up everything from basic "prompt + LLM" setups to intricate, multi-step workflows.
-
-- **[Overview](https://python.langchain.com/docs/concepts/#langchain-expression-language-lcel)**: LCEL and its benefits
-- **[Interface](https://python.langchain.com/docs/concepts/#runnable-interface)**: The standard Runnable interface for LCEL objects
-- **[Primitives](https://python.langchain.com/docs/how_to/#langchain-expression-language-lcel)**: More on the primitives LCEL includes
-- **[Cheatsheet](https://python.langchain.com/docs/how_to/lcel_cheatsheet/)**: Quick overview of the most common usage patterns
+1. **Components**: composable building blocks, tools, and integrations for working with language models. Components are modular and easy to use, whether or not you are using the rest of the LangChain framework.
+2. **Easy orchestration with LangGraph**: [LangGraph](https://langchain-ai.github.io/langgraph/),
+built on top of `langchain-core`, has built-in support for [messages](https://python.langchain.com/docs/concepts/messages/), [tools](https://python.langchain.com/docs/concepts/tools/),
+and other LangChain abstractions. This makes it easy to combine components into
+production-ready applications with persistence, streaming, and other key features.
+Check out the LangChain [tutorials page](https://python.langchain.com/docs/tutorials/#orchestration) for examples, or see the short sketch below.
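+
+A minimal sketch of that orchestration (a hedged example; the model name and the trivial single-node graph are illustrative assumptions):
+
+```python
+from langchain_openai import ChatOpenAI
+from langgraph.graph import START, MessagesState, StateGraph
+
+llm = ChatOpenAI(model="gpt-4o-mini")
+
+
+def call_model(state: MessagesState):
+    # Invoke the chat model on the running message history.
+    return {"messages": [llm.invoke(state["messages"])]}
+
+
+builder = StateGraph(MessagesState)
+builder.add_node("model", call_model)
+builder.add_edge(START, "model")
+graph = builder.compile()
+
+graph.invoke({"messages": [("user", "Hello!")]})
+```
+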
## Components
@@ -106,15 +101,19 @@ Components fall into the following **modules**:
**📃 Model I/O**
-This includes [prompt management](https://python.langchain.com/docs/concepts/#prompt-templates), [prompt optimization](https://python.langchain.com/docs/concepts/#example-selectors), a generic interface for [chat models](https://python.langchain.com/docs/concepts/#chat-models) and [LLMs](https://python.langchain.com/docs/concepts/#llms), and common utilities for working with [model outputs](https://python.langchain.com/docs/concepts/#output-parsers).
+This includes [prompt management](https://python.langchain.com/docs/concepts/prompt_templates/)
+and a generic interface for [chat models](https://python.langchain.com/docs/concepts/chat_models/), with consistent [tool-calling](https://python.langchain.com/docs/concepts/tool_calling/) and [structured output](https://python.langchain.com/docs/concepts/structured_outputs/) across model providers.
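+
+For example, a hedged sketch of structured output (the model name and schema are illustrative assumptions):
+
+```python
+from pydantic import BaseModel, Field
+from langchain_openai import ChatOpenAI
+
+
+class Movie(BaseModel):
+    """A movie mentioned in the text."""
+
+    title: str = Field(description="The movie's title")
+
+
+llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
+structured_llm = llm.with_structured_output(Movie)
+structured_llm.invoke("Extract the movie: I rewatched Casino last night.")  # returns a Movie instance
+```
+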
**📚 Retrieval**
-Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/#document-loaders) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/#text-splitters), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/#retrievers) it for use in the generation step.
+Retrieval Augmented Generation involves [loading data](https://python.langchain.com/docs/concepts/document_loaders/) from a variety of sources, [preparing it](https://python.langchain.com/docs/concepts/text_splitters/), then [searching over (a.k.a. retrieving from)](https://python.langchain.com/docs/concepts/retrievers/) it for use in the generation step.
**🤖 Agents**
-Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://python.langchain.com/docs/concepts/#agents), along with [LangGraph](https://github.com/langchain-ai/langgraph) for building custom agents.
+Agents allow an LLM autonomy over how a task is accomplished. An agent decides which action to take, takes it, observes the result, and repeats until the task is complete. [LangGraph](https://langchain-ai.github.io/langgraph/) makes it easy to use
+LangChain components to build [custom](https://langchain-ai.github.io/langgraph/tutorials/) agents
+or to start from the [built-in](https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/)
+agent, as sketched below.
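+
+A hedged sketch of the built-in agent (the model name and toy tool are illustrative assumptions):
+
+```python
+from langchain_core.tools import tool
+from langchain_openai import ChatOpenAI
+from langgraph.prebuilt import create_react_agent
+
+
+@tool
+def get_weather(city: str) -> str:
+    """Return the current weather for a city."""
+    return f"It is sunny in {city}."  # toy stand-in for a real weather API
+
+
+agent = create_react_agent(ChatOpenAI(model="gpt-4o-mini"), tools=[get_weather])
+agent.invoke({"messages": [("user", "What's the weather in Paris?")]})
+```
+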
## 📖 Documentation
diff --git a/docs/docs/contributing/how_to/integrations/index.mdx b/docs/docs/contributing/how_to/integrations/index.mdx
index 63b09faa70edb..3623f621b7f3d 100644
--- a/docs/docs/contributing/how_to/integrations/index.mdx
+++ b/docs/docs/contributing/how_to/integrations/index.mdx
@@ -12,7 +12,7 @@ LangChain provides standard interfaces for several different components (languag
## Why contribute an integration to LangChain?
- **Discoverability:** LangChain is the most used framework for building LLM applications, with over 20 million monthly downloads. LangChain integrations are discoverable by a large community of GenAI builders.
-- **Interoptability:** LangChain components expose a standard interface, allowing developers to easily swap them for each other. If you implement a LangChain integration, any developer using a different component will easily be able to swap yours in.
+- **Interoperability:** LangChain components expose a standard interface, allowing developers to easily swap them for each other. If you implement a LangChain integration, any developer using a different component will easily be able to swap yours in.
- **Best Practices:** Through their standard interface, LangChain components encourage and facilitate best practices (streaming, async, etc)
diff --git a/docs/docs/how_to/graph_mapping.ipynb b/docs/docs/how_to/graph_mapping.ipynb
deleted file mode 100644
index 146f479e27d32..0000000000000
--- a/docs/docs/how_to/graph_mapping.ipynb
+++ /dev/null
@@ -1,459 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "raw",
- "id": "5e61b0f2-15b9-4241-9ab5-ff0f3f732232",
- "metadata": {},
- "source": [
- "---\n",
- "sidebar_position: 1\n",
- "---"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "846ef4f4-ee38-4a42-a7d3-1a23826e4830",
- "metadata": {},
- "source": [
- "# How to map values to a graph database\n",
- "\n",
- "In this guide we'll go over strategies to improve graph database query generation by mapping values from user inputs to database.\n",
- "When using the built-in graph chains, the LLM is aware of the graph schema, but has no information about the values of properties stored in the database.\n",
- "Therefore, we can introduce a new step in graph database QA system to accurately map values.\n",
- "\n",
- "## Setup\n",
- "\n",
- "First, get required packages and set environment variables:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "18294435-182d-48da-bcab-5b8945b6d9cf",
- "metadata": {},
- "outputs": [],
- "source": [
- "%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "d86dd771-4001-4a34-8680-22e9b50e1e88",
- "metadata": {},
- "source": [
- "We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "9346f8e9-78bf-4667-b3d3-72807a73b718",
- "metadata": {},
- "outputs": [
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- " ········\n"
- ]
- }
- ],
- "source": [
- "import getpass\n",
- "import os\n",
- "\n",
- "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
- "\n",
- "# Uncomment the below to use LangSmith. Not required.\n",
- "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
- "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
- ]
- },
- {
- "cell_type": "markdown",
- "id": "271c8a23-e51c-4ead-a76e-cf21107db47e",
- "metadata": {},
- "source": [
- "Next, we need to define Neo4j credentials.\n",
- "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "id": "a2a3bb65-05c7-4daf-bac2-b25ae7fe2751",
- "metadata": {},
- "outputs": [],
- "source": [
- "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
- "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
- "os.environ[\"NEO4J_PASSWORD\"] = \"password\""
- ]
- },
- {
- "cell_type": "markdown",
- "id": "50fa4510-29b7-49b6-8496-5e86f694e81f",
- "metadata": {},
- "source": [
- "The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "4ee9ef7a-eef9-4289-b9fd-8fbc31041688",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[]"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "from langchain_neo4j import Neo4jGraph\n",
- "\n",
- "graph = Neo4jGraph()\n",
- "\n",
- "# Import movie information\n",
- "\n",
- "movies_query = \"\"\"\n",
- "LOAD CSV WITH HEADERS FROM \n",
- "'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n",
- "AS row\n",
- "MERGE (m:Movie {id:row.movieId})\n",
- "SET m.released = date(row.released),\n",
- " m.title = row.title,\n",
- " m.imdbRating = toFloat(row.imdbRating)\n",
- "FOREACH (director in split(row.director, '|') | \n",
- " MERGE (p:Person {name:trim(director)})\n",
- " MERGE (p)-[:DIRECTED]->(m))\n",
- "FOREACH (actor in split(row.actors, '|') | \n",
- " MERGE (p:Person {name:trim(actor)})\n",
- " MERGE (p)-[:ACTED_IN]->(m))\n",
- "FOREACH (genre in split(row.genres, '|') | \n",
- " MERGE (g:Genre {name:trim(genre)})\n",
- " MERGE (m)-[:IN_GENRE]->(g))\n",
- "\"\"\"\n",
- "\n",
- "graph.query(movies_query)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "0cb0ea30-ca55-4f35-aad6-beb57453de66",
- "metadata": {},
- "source": [
- "## Detecting entities in the user input\n",
- "We have to extract the types of entities/values we want to map to a graph database. In this example, we are dealing with a movie graph, so we can map movies and people to the database."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "e1a19424-6046-40c2-81d1-f3b88193a293",
- "metadata": {},
- "outputs": [],
- "source": [
- "from typing import List, Optional\n",
- "\n",
- "from langchain_core.prompts import ChatPromptTemplate\n",
- "from langchain_openai import ChatOpenAI\n",
- "from pydantic import BaseModel, Field\n",
- "\n",
- "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
- "\n",
- "\n",
- "class Entities(BaseModel):\n",
- " \"\"\"Identifying information about entities.\"\"\"\n",
- "\n",
- " names: List[str] = Field(\n",
- " ...,\n",
- " description=\"All the person or movies appearing in the text\",\n",
- " )\n",
- "\n",
- "\n",
- "prompt = ChatPromptTemplate.from_messages(\n",
- " [\n",
- " (\n",
- " \"system\",\n",
- " \"You are extracting person and movies from the text.\",\n",
- " ),\n",
- " (\n",
- " \"human\",\n",
- " \"Use the given format to extract information from the following \"\n",
- " \"input: {question}\",\n",
- " ),\n",
- " ]\n",
- ")\n",
- "\n",
- "\n",
- "entity_chain = prompt | llm.with_structured_output(Entities)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "9c14084c-37a7-4a9c-a026-74e12961c781",
- "metadata": {},
- "source": [
- "We can test the entity extraction chain."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "bbfe0d8f-982e-46e6-88fb-8a4f0d850b07",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "Entities(names=['Casino'])"
- ]
- },
- "execution_count": 6,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "entities = entity_chain.invoke({\"question\": \"Who played in Casino movie?\"})\n",
- "entities"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "a8afbf13-05d0-4383-8050-f88b8c2f6fab",
- "metadata": {},
- "source": [
- "We will utilize a simple `CONTAINS` clause to match entities to database. In practice, you might want to use a fuzzy search or a fulltext index to allow for minor misspellings."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "6f92929f-74fb-4db2-b7e1-eb1e9d386a67",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'Casino maps to Casino Movie in database\\n'"
- ]
- },
- "execution_count": 7,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "match_query = \"\"\"MATCH (p:Person|Movie)\n",
- "WHERE p.name CONTAINS $value OR p.title CONTAINS $value\n",
- "RETURN coalesce(p.name, p.title) AS result, labels(p)[0] AS type\n",
- "LIMIT 1\n",
- "\"\"\"\n",
- "\n",
- "\n",
- "def map_to_database(entities: Entities) -> Optional[str]:\n",
- " result = \"\"\n",
- " for entity in entities.names:\n",
- " response = graph.query(match_query, {\"value\": entity})\n",
- " try:\n",
- " result += f\"{entity} maps to {response[0]['result']} {response[0]['type']} in database\\n\"\n",
- " except IndexError:\n",
- " pass\n",
- " return result\n",
- "\n",
- "\n",
- "map_to_database(entities)"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "f66c6756-6efb-4b1e-9b5d-87ed914a5212",
- "metadata": {},
- "source": [
- "## Custom Cypher generating chain\n",
- "\n",
- "We need to define a custom Cypher prompt that takes the entity mapping information along with the schema and the user question to construct a Cypher statement.\n",
- "We will be using the LangChain expression language to accomplish that."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "8ef3e21d-f1c2-45e2-9511-4920d1cf6e7e",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_core.output_parsers import StrOutputParser\n",
- "from langchain_core.runnables import RunnablePassthrough\n",
- "\n",
- "# Generate Cypher statement based on natural language input\n",
- "cypher_template = \"\"\"Based on the Neo4j graph schema below, write a Cypher query that would answer the user's question:\n",
- "{schema}\n",
- "Entities in the question map to the following database values:\n",
- "{entities_list}\n",
- "Question: {question}\n",
- "Cypher query:\"\"\"\n",
- "\n",
- "cypher_prompt = ChatPromptTemplate.from_messages(\n",
- " [\n",
- " (\n",
- " \"system\",\n",
- " \"Given an input question, convert it to a Cypher query. No pre-amble.\",\n",
- " ),\n",
- " (\"human\", cypher_template),\n",
- " ]\n",
- ")\n",
- "\n",
- "cypher_response = (\n",
- " RunnablePassthrough.assign(names=entity_chain)\n",
- " | RunnablePassthrough.assign(\n",
- " entities_list=lambda x: map_to_database(x[\"names\"]),\n",
- " schema=lambda _: graph.get_schema,\n",
- " )\n",
- " | cypher_prompt\n",
- " | llm.bind(stop=[\"\\nCypherResult:\"])\n",
- " | StrOutputParser()\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "id": "1f0011e3-9660-4975-af2a-486b1bc3b954",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'MATCH (:Movie {title: \"Casino\"})<-[:ACTED_IN]-(actor)\\nRETURN actor.name'"
- ]
- },
- "execution_count": 9,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "cypher = cypher_response.invoke({\"question\": \"Who played in Casino movie?\"})\n",
- "cypher"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "38095678-611f-4847-a4de-e51ef7ef727c",
- "metadata": {},
- "source": [
- "## Generating answers based on database results\n",
- "\n",
- "Now that we have a chain that generates the Cypher statement, we need to execute the Cypher statement against the database and send the database results back to an LLM to generate the final answer.\n",
- "Again, we will be using LCEL."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "id": "d1fa97c0-1c9c-41d3-9ee1-5f1905d17434",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_neo4j.chains.graph_qa.cypher_utils import (\n",
- " CypherQueryCorrector,\n",
- " Schema,\n",
- ")\n",
- "\n",
- "graph.refresh_schema()\n",
- "# Cypher validation tool for relationship directions\n",
- "corrector_schema = [\n",
- " Schema(el[\"start\"], el[\"type\"], el[\"end\"])\n",
- " for el in graph.structured_schema.get(\"relationships\")\n",
- "]\n",
- "cypher_validation = CypherQueryCorrector(corrector_schema)\n",
- "\n",
- "# Generate natural language response based on database results\n",
- "response_template = \"\"\"Based on the the question, Cypher query, and Cypher response, write a natural language response:\n",
- "Question: {question}\n",
- "Cypher query: {query}\n",
- "Cypher Response: {response}\"\"\"\n",
- "\n",
- "response_prompt = ChatPromptTemplate.from_messages(\n",
- " [\n",
- " (\n",
- " \"system\",\n",
- " \"Given an input question and Cypher response, convert it to a natural\"\n",
- " \" language answer. No pre-amble.\",\n",
- " ),\n",
- " (\"human\", response_template),\n",
- " ]\n",
- ")\n",
- "\n",
- "chain = (\n",
- " RunnablePassthrough.assign(query=cypher_response)\n",
- " | RunnablePassthrough.assign(\n",
- " response=lambda x: graph.query(cypher_validation(x[\"query\"])),\n",
- " )\n",
- " | response_prompt\n",
- " | llm\n",
- " | StrOutputParser()\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "id": "918146e5-7918-46d2-a774-53f9547d8fcb",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "'Robert De Niro, James Woods, Joe Pesci, and Sharon Stone played in the movie \"Casino\".'"
- ]
- },
- "execution_count": 11,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "chain.invoke({\"question\": \"Who played in Casino movie?\"})"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "c7ba75cd-8399-4e54-a6f8-8a411f159f56",
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.9.18"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}
diff --git a/docs/docs/how_to/graph_prompting.ipynb b/docs/docs/how_to/graph_prompting.ipynb
deleted file mode 100644
index db4922fb3a2da..0000000000000
--- a/docs/docs/how_to/graph_prompting.ipynb
+++ /dev/null
@@ -1,548 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "raw",
- "metadata": {},
- "source": [
- "---\n",
- "sidebar_position: 2\n",
- "---"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# How to best prompt for Graph-RAG\n",
- "\n",
- "In this guide we'll go over prompting strategies to improve graph database query generation. We'll largely focus on methods for getting relevant database-specific information in your prompt.\n",
- "\n",
- "## Setup\n",
- "\n",
- "First, get required packages and set environment variables:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Note: you may need to restart the kernel to use updated packages.\n"
- ]
- }
- ],
- "source": [
- "%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We default to OpenAI models in this guide, but you can swap them out for the model provider of your choice."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {},
- "outputs": [
- {
- "name": "stdin",
- "output_type": "stream",
- "text": [
- " ········\n"
- ]
- }
- ],
- "source": [
- "import getpass\n",
- "import os\n",
- "\n",
- "os.environ[\"OPENAI_API_KEY\"] = getpass.getpass()\n",
- "\n",
- "# Uncomment the below to use LangSmith. Not required.\n",
- "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()\n",
- "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Next, we need to define Neo4j credentials.\n",
- "Follow [these installation steps](https://neo4j.com/docs/operations-manual/current/installation/) to set up a Neo4j database."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {},
- "outputs": [],
- "source": [
- "os.environ[\"NEO4J_URI\"] = \"bolt://localhost:7687\"\n",
- "os.environ[\"NEO4J_USERNAME\"] = \"neo4j\"\n",
- "os.environ[\"NEO4J_PASSWORD\"] = \"password\""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The below example will create a connection with a Neo4j database and will populate it with example data about movies and their actors."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[]"
- ]
- },
- "execution_count": 4,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "from langchain_neo4j import Neo4jGraph\n",
- "\n",
- "graph = Neo4jGraph()\n",
- "\n",
- "# Import movie information\n",
- "\n",
- "movies_query = \"\"\"\n",
- "LOAD CSV WITH HEADERS FROM \n",
- "'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies_small.csv'\n",
- "AS row\n",
- "MERGE (m:Movie {id:row.movieId})\n",
- "SET m.released = date(row.released),\n",
- " m.title = row.title,\n",
- " m.imdbRating = toFloat(row.imdbRating)\n",
- "FOREACH (director in split(row.director, '|') | \n",
- " MERGE (p:Person {name:trim(director)})\n",
- " MERGE (p)-[:DIRECTED]->(m))\n",
- "FOREACH (actor in split(row.actors, '|') | \n",
- " MERGE (p:Person {name:trim(actor)})\n",
- " MERGE (p)-[:ACTED_IN]->(m))\n",
- "FOREACH (genre in split(row.genres, '|') | \n",
- " MERGE (g:Genre {name:trim(genre)})\n",
- " MERGE (m)-[:IN_GENRE]->(g))\n",
- "\"\"\"\n",
- "\n",
- "graph.query(movies_query)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# Filtering graph schema\n",
- "\n",
- "At times, you may need to focus on a specific subset of the graph schema while generating Cypher statements.\n",
- "Let's say we are dealing with the following graph schema:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Node properties are the following:\n",
- "Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING},Genre {name: STRING}\n",
- "Relationship properties are the following:\n",
- "\n",
- "The relationships are the following:\n",
- "(:Movie)-[:IN_GENRE]->(:Genre),(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
- ]
- }
- ],
- "source": [
- "graph.refresh_schema()\n",
- "print(graph.schema)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let's say we want to exclude the _Genre_ node from the schema representation we pass to an LLM.\n",
- "We can achieve that using the `exclude` parameter of the GraphCypherQAChain chain."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_neo4j import GraphCypherQAChain\n",
- "from langchain_openai import ChatOpenAI\n",
- "\n",
- "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
- "chain = GraphCypherQAChain.from_llm(\n",
- " graph=graph,\n",
- " llm=llm,\n",
- " exclude_types=[\"Genre\"],\n",
- " verbose=True,\n",
- " allow_dangerous_requests=True,\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "Node properties are the following:\n",
- "Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING}\n",
- "Relationship properties are the following:\n",
- "\n",
- "The relationships are the following:\n",
- "(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
- ]
- }
- ],
- "source": [
- "print(chain.graph_schema)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Few-shot examples\n",
- "\n",
- "Including examples of natural language questions being converted to valid Cypher queries against our database in the prompt will often improve model performance, especially for complex queries.\n",
- "\n",
- "Let's say we have the following examples:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "metadata": {},
- "outputs": [],
- "source": [
- "examples = [\n",
- " {\n",
- " \"question\": \"How many artists are there?\",\n",
- " \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\",\n",
- " },\n",
- " {\n",
- " \"question\": \"Which actors played in the movie Casino?\",\n",
- " \"query\": \"MATCH (m:Movie {{title: 'Casino'}})<-[:ACTED_IN]-(a) RETURN a.name\",\n",
- " },\n",
- " {\n",
- " \"question\": \"How many movies has Tom Hanks acted in?\",\n",
- " \"query\": \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
- " },\n",
- " {\n",
- " \"question\": \"List all the genres of the movie Schindler's List\",\n",
- " \"query\": \"MATCH (m:Movie {{title: 'Schindler\\\\'s List'}})-[:IN_GENRE]->(g:Genre) RETURN g.name\",\n",
- " },\n",
- " {\n",
- " \"question\": \"Which actors have worked in movies from both the comedy and action genres?\",\n",
- " \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
- " },\n",
- " {\n",
- " \"question\": \"Which directors have made movies with at least three different actors named 'John'?\",\n",
- " \"query\": \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
- " },\n",
- " {\n",
- " \"question\": \"Identify movies where directors also played a role in the film.\",\n",
- " \"query\": \"MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name\",\n",
- " },\n",
- " {\n",
- " \"question\": \"Find the actor with the highest number of movies in the database.\",\n",
- " \"query\": \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\",\n",
- " },\n",
- "]"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We can create a few-shot prompt with them like so:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_core.prompts import FewShotPromptTemplate, PromptTemplate\n",
- "\n",
- "example_prompt = PromptTemplate.from_template(\n",
- " \"User input: {question}\\nCypher query: {query}\"\n",
- ")\n",
- "prompt = FewShotPromptTemplate(\n",
- " examples=examples[:5],\n",
- " example_prompt=example_prompt,\n",
- " prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n",
- " suffix=\"User input: {question}\\nCypher query: \",\n",
- " input_variables=[\"question\", \"schema\"],\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
- "\n",
- "Here is the schema information\n",
- "foo.\n",
- "\n",
- "Below are a number of examples of questions and their corresponding Cypher queries.\n",
- "\n",
- "User input: How many artists are there?\n",
- "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n",
- "\n",
- "User input: Which actors played in the movie Casino?\n",
- "Cypher query: MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name\n",
- "\n",
- "User input: How many movies has Tom Hanks acted in?\n",
- "Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n",
- "\n",
- "User input: List all the genres of the movie Schindler's List\n",
- "Cypher query: MATCH (m:Movie {title: 'Schindler\\'s List'})-[:IN_GENRE]->(g:Genre) RETURN g.name\n",
- "\n",
- "User input: Which actors have worked in movies from both the comedy and action genres?\n",
- "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n",
- "\n",
- "User input: How many artists are there?\n",
- "Cypher query: \n"
- ]
- }
- ],
- "source": [
- "print(prompt.format(question=\"How many artists are there?\", schema=\"foo\"))"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Dynamic few-shot examples\n",
- "\n",
- "If we have enough examples, we may want to only include the most relevant ones in the prompt, either because they don't fit in the model's context window or because the long tail of examples distracts the model. And specifically, given any input we want to include the examples most relevant to that input.\n",
- "\n",
- "We can do just this using an ExampleSelector. In this case we'll use a [SemanticSimilarityExampleSelector](https://python.langchain.com/api_reference/core/example_selectors/langchain_core.example_selectors.semantic_similarity.SemanticSimilarityExampleSelector.html), which will store the examples in the vector database of our choosing. At runtime it will perform a similarity search between the input and our examples, and return the most semantically similar ones: "
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 11,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n",
- "from langchain_neo4j import Neo4jVector\n",
- "from langchain_openai import OpenAIEmbeddings\n",
- "\n",
- "example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
- " examples,\n",
- " OpenAIEmbeddings(),\n",
- " Neo4jVector,\n",
- " k=5,\n",
- " input_keys=[\"question\"],\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[{'query': 'MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)',\n",
- " 'question': 'How many artists are there?'},\n",
- " {'query': \"MATCH (a:Person {{name: 'Tom Hanks'}})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
- " 'question': 'How many movies has Tom Hanks acted in?'},\n",
- " {'query': \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
- " 'question': 'Which actors have worked in movies from both the comedy and action genres?'},\n",
- " {'query': \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
- " 'question': \"Which directors have made movies with at least three different actors named 'John'?\"},\n",
- " {'query': 'MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1',\n",
- " 'question': 'Find the actor with the highest number of movies in the database.'}]"
- ]
- },
- "execution_count": 12,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "example_selector.select_examples({\"question\": \"how many artists are there?\"})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To use it, we can pass the ExampleSelector directly in to our FewShotPromptTemplate:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {},
- "outputs": [],
- "source": [
- "prompt = FewShotPromptTemplate(\n",
- " example_selector=example_selector,\n",
- " example_prompt=example_prompt,\n",
- " prefix=\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\\n\\nHere is the schema information\\n{schema}.\\n\\nBelow are a number of examples of questions and their corresponding Cypher queries.\",\n",
- " suffix=\"User input: {question}\\nCypher query: \",\n",
- " input_variables=[\"question\", \"schema\"],\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
- "\n",
- "Here is the schema information\n",
- "foo.\n",
- "\n",
- "Below are a number of examples of questions and their corresponding Cypher queries.\n",
- "\n",
- "User input: How many artists are there?\n",
- "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\n",
- "\n",
- "User input: How many movies has Tom Hanks acted in?\n",
- "Cypher query: MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\n",
- "\n",
- "User input: Which actors have worked in movies from both the comedy and action genres?\n",
- "Cypher query: MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\n",
- "\n",
- "User input: Which directors have made movies with at least three different actors named 'John'?\n",
- "Cypher query: MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\n",
- "\n",
- "User input: Find the actor with the highest number of movies in the database.\n",
- "Cypher query: MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\n",
- "\n",
- "User input: how many artists are there?\n",
- "Cypher query: \n"
- ]
- }
- ],
- "source": [
- "print(prompt.format(question=\"how many artists are there?\", schema=\"foo\"))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 15,
- "metadata": {},
- "outputs": [],
- "source": [
- "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
- "chain = GraphCypherQAChain.from_llm(\n",
- " graph=graph,\n",
- " llm=llm,\n",
- " cypher_prompt=prompt,\n",
- " verbose=True,\n",
- " allow_dangerous_requests=True,\n",
- ")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 16,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "\n",
- "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
- "Generated Cypher:\n",
- "\u001b[32;1m\u001b[1;3mMATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\u001b[0m\n",
- "Full Context:\n",
- "\u001b[32;1m\u001b[1;3m[{'count(DISTINCT a)': 967}]\u001b[0m\n",
- "\n",
- "\u001b[1m> Finished chain.\u001b[0m\n"
- ]
- },
- {
- "data": {
- "text/plain": [
- "{'query': 'How many actors are in the graph?',\n",
- " 'result': 'There are 967 actors in the graph.'}"
- ]
- },
- "execution_count": 16,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "chain.invoke(\"How many actors are in the graph?\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.1"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 4
-}
diff --git a/docs/docs/how_to/index.mdx b/docs/docs/how_to/index.mdx
index b432569bf66bc..1ce6cc2737a57 100644
--- a/docs/docs/how_to/index.mdx
+++ b/docs/docs/how_to/index.mdx
@@ -316,9 +316,7 @@ For a high-level tutorial, check out [this guide](/docs/tutorials/sql_qa/).
You can use an LLM to do question answering over graph databases.
For a high-level tutorial, check out [this guide](/docs/tutorials/graph/).
-- [How to: map values to a database](/docs/how_to/graph_mapping)
- [How to: add a semantic layer over the database](/docs/how_to/graph_semantic)
-- [How to: improve results with prompting](/docs/how_to/graph_prompting)
- [How to: construct knowledge graphs](/docs/how_to/graph_constructing)
### Summarization
diff --git a/docs/docs/how_to/output_parser_custom.ipynb b/docs/docs/how_to/output_parser_custom.ipynb
index d77e1ff9c6ae7..1949e3dd067b7 100644
--- a/docs/docs/how_to/output_parser_custom.ipynb
+++ b/docs/docs/how_to/output_parser_custom.ipynb
@@ -12,7 +12,7 @@
"There are two ways to implement a custom parser:\n",
"\n",
"1. Using `RunnableLambda` or `RunnableGenerator` in [LCEL](/docs/concepts/lcel/) -- we strongly recommend this for most use cases\n",
- "2. By inherting from one of the base classes for out parsing -- this is the hard way of doing things\n",
+    "2. By inheriting from one of the base classes for output parsing -- this is the hard way of doing things\n",
"\n",
"The difference between the two approaches are mostly superficial and are mainly in terms of which callbacks are triggered (e.g., `on_chain_start` vs. `on_parser_start`), and how a runnable lambda vs. a parser might be visualized in a tracing platform like LangSmith."
]
@@ -200,7 +200,7 @@
"id": "24067447-8a5a-4d6b-86a3-4b9cc4b4369b",
"metadata": {},
"source": [
- "## Inherting from Parsing Base Classes"
+ "## Inheriting from Parsing Base Classes"
]
},
{
@@ -208,7 +208,7 @@
"id": "9713f547-b2e4-48eb-807f-a0f6f6d0e7e0",
"metadata": {},
"source": [
- "Another approach to implement a parser is by inherting from `BaseOutputParser`, `BaseGenerationOutputParser` or another one of the base parsers depending on what you need to do.\n",
+    "Another way to implement a parser is to inherit from `BaseOutputParser`, `BaseGenerationOutputParser`, or another of the base parsers, depending on what you need to do.\n",
"\n",
"In general, we **do not** recommend this approach for most use cases as it results in more code to write without significant benefits.\n",
"\n",
diff --git a/docs/docs/how_to/sql_large_db.ipynb b/docs/docs/how_to/sql_large_db.ipynb
index 53f4bf6224d8f..154bda4dc9b24 100644
--- a/docs/docs/how_to/sql_large_db.ipynb
+++ b/docs/docs/how_to/sql_large_db.ipynb
@@ -55,7 +55,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
- "Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:"
+ "Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven [SQLDatabase](https://python.langchain.com/api_reference/community/utilities/langchain_community.utilities.sql_database.SQLDatabase.html) class:"
]
},
{
diff --git a/docs/docs/how_to/sql_prompting.ipynb b/docs/docs/how_to/sql_prompting.ipynb
index 831a7bca13a51..5908ccd14dd57 100644
--- a/docs/docs/how_to/sql_prompting.ipynb
+++ b/docs/docs/how_to/sql_prompting.ipynb
@@ -51,7 +51,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
- "Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
+ "Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{
diff --git a/docs/docs/how_to/sql_query_checking.ipynb b/docs/docs/how_to/sql_query_checking.ipynb
index e15609d7ba4df..ab1a875fdf61e 100644
--- a/docs/docs/how_to/sql_query_checking.ipynb
+++ b/docs/docs/how_to/sql_query_checking.ipynb
@@ -54,7 +54,7 @@
"* Run `.read Chinook_Sqlite.sql`\n",
"* Test `SELECT * FROM Artist LIMIT 10;`\n",
"\n",
- "Now, `Chinhook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
+ "Now, `Chinook.db` is in our directory and we can interface with it using the SQLAlchemy-driven `SQLDatabase` class:"
]
},
{
diff --git a/docs/docs/integrations/providers/linkup.mdx b/docs/docs/integrations/providers/linkup.mdx
new file mode 100644
index 0000000000000..ee7f595321746
--- /dev/null
+++ b/docs/docs/integrations/providers/linkup.mdx
@@ -0,0 +1,39 @@
+# Linkup
+
+> [Linkup](https://www.linkup.so/) provides an API to connect LLMs to the web and to Linkup Premium Partner sources.
+
+## Installation and Setup
+
+To use the Linkup provider, you first need a valid API key, which you can get by signing up [here](https://app.linkup.so/sign-up).
+You will also need the `langchain-linkup` package, which you can install using pip:
+
+```bash
+pip install langchain-linkup
+```
+
+## Retriever
+
+See a [usage example](/docs/integrations/retrievers/linkup_search).
+
+```python
+from langchain_linkup import LinkupSearchRetriever
+
+retriever = LinkupSearchRetriever(
+ depth="deep", # "standard" or "deep"
+ linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable
+)
+```
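+
+The retriever can then be queried like any other LangChain retriever, for example:
+
+```python
+documents = retriever.invoke("Who won the latest US presidential elections?")
+```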
+
+## Tools
+
+See a [usage example](/docs/integrations/tools/linkup_search).
+
+```python
+from langchain_linkup import LinkupSearchTool
+
+tool = LinkupSearchTool(
+ depth="deep", # "standard" or "deep"
+ output_type="searchResults", # "searchResults", "sourcedAnswer" or "structured"
+ linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable
+)
+```
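+
+The tool can then be invoked like any other LangChain tool; a brief sketch, assuming a plain query string as input (as for other single-input tools):
+
+```python
+results = tool.invoke("Who won the latest US presidential elections?")
+```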
diff --git a/docs/docs/integrations/providers/microsoft.mdx b/docs/docs/integrations/providers/microsoft.mdx
index a63c2fe898ffc..518d4869d47f2 100644
--- a/docs/docs/integrations/providers/microsoft.mdx
+++ b/docs/docs/integrations/providers/microsoft.mdx
@@ -343,6 +343,31 @@ See a [usage example](/docs/integrations/memory/postgres_chat_message_history/).
Since Azure Database for PostgreSQL is open-source Postgres, you can use the [LangChain's Postgres support](/docs/integrations/vectorstores/pgvector/) to connect to Azure Database for PostgreSQL.
+### Azure SQL Database
+
+>[Azure SQL Database](https://learn.microsoft.com/azure/azure-sql/database/sql-database-paas-overview?view=azuresql) is a robust service that combines scalability, security, and high availability, providing all the benefits of a modern database solution. It also provides a dedicated vector data type and built-in functions that simplify the storage and querying of vector embeddings directly within a relational database. This eliminates the need for a separate vector database and related integrations, increasing the security of your solutions while reducing the overall complexity.
+
+By leveraging your current SQL Server databases for vector search, you can enhance data capabilities while minimizing expenses and avoiding the challenges of transitioning to new systems.
+
+##### Installation and Setup
+
+See the [detailed configuration instructions](/docs/integrations/vectorstores/sqlserver).
+
+We need to install the `langchain-sqlserver` Python package:
+
+```bash
+pip install langchain-sqlserver==0.1.1
+```
+
+##### Deploy Azure SQL DB on Microsoft Azure
+
+[Sign Up](https://learn.microsoft.com/azure/azure-sql/database/free-offer?view=azuresql) for free to get started today.
+
+See a [usage example](/docs/integrations/vectorstores/sqlserver).
+
+```python
+from langchain_sqlserver import SQLServer_VectorStore
+```
### Azure AI Search
diff --git a/docs/docs/integrations/retrievers/linkup_search.ipynb b/docs/docs/integrations/retrievers/linkup_search.ipynb
new file mode 100644
index 0000000000000..4ca53214917ce
--- /dev/null
+++ b/docs/docs/integrations/retrievers/linkup_search.ipynb
@@ -0,0 +1,270 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "afaf8039",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: LinkupSearchRetriever\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "e49f1e0d",
+ "metadata": {},
+ "source": [
+ "# LinkupSearchRetriever\n",
+ "\n",
+    "> [Linkup](https://www.linkup.so/) provides an API to connect LLMs to the web and to Linkup Premium Partner sources.\n",
+ "\n",
+    "This will help you get started with the LinkupSearchRetriever [retriever](/docs/concepts/retrievers/). For detailed documentation of all LinkupSearchRetriever features and configurations, head to the [API reference](https://python.langchain.com/api_reference/linkup/retrievers/linkup_langchain.search_retriever.LinkupSearchRetriever.html).\n",
+ "\n",
+ "### Integration details\n",
+ "\n",
+ "| Retriever | Source | Package |\n",
+ "| :--- | :--- | :---: |\n",
+ "[LinkupSearchRetriever](https://python.langchain.com/api_reference/linkup/retrievers/linkup_langchain.search_retriever.LinkupSearchRetriever.html) | Web and partner sources | langchain-linkup |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+    "To use the Linkup provider, you need a valid API key, which you can get by signing up [here](https://app.linkup.so/sign-up). You can then set it as the `LINKUP_API_KEY` environment variable. For the chain example below, you also need to set an OpenAI API key as the `OPENAI_API_KEY` environment variable, which you can also do here:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "0c6cab32-8f55-473d-b5bc-72673ea4da61",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# import os\n",
+ "# os.environ[\"LINKUP_API_KEY\"] = \"\" # Fill with your API key\n",
+ "# os.environ[\"OPENAI_API_KEY\"] = \"\" # Fill with your API key"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "72ee0c4b-9764-423a-9dbf-95129e185210",
+ "metadata": {},
+ "source": [
+    "If you want to get automated tracing from individual queries, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the cell below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a15d341e-3e26-4ca3-830b-5aab30ed66de",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0730d6a1-c893-4840-9817-5e5251676d5d",
+ "metadata": {},
+ "source": [
+ "### Installation\n",
+ "\n",
+ "This retriever lives in the `langchain-linkup` package:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "652d6238-1f87-422a-b135-f5abbb8652fc",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -qU langchain-linkup"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a38cde65-254d-4219-a441-068766c0d4b5",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Now we can instantiate our retriever:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "70cc8e65-2a02-408a-bbc6-8ef649057d82",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_linkup import LinkupSearchRetriever\n",
+ "\n",
+ "retriever = LinkupSearchRetriever(\n",
+ " depth=\"deep\", # \"standard\" or \"deep\"\n",
+ " linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5c5f2839-4020-424e-9fc9-07777eede442",
+ "metadata": {},
+ "source": [
+ "## Usage"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "51a60dbe-9f2e-4e04-bb62-23968f17164a",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[Document(metadata={'name': 'US presidential election results 2024: Harris vs. Trump | Live maps ...', 'url': 'https://www.reuters.com/graphics/USA-ELECTION/RESULTS/zjpqnemxwvx/'}, page_content='Updated results from the 2024 election for the US president. Reuters live coverage of the 2024 US President, Senate, House and state governors races.'),\n",
+ " Document(metadata={'name': 'Election 2024: Presidential results - CNN', 'url': 'https://www.cnn.com/election/2024/results/president'}, page_content='View maps and real-time results for the 2024 US presidential election matchup between former President Donald Trump and Vice President Kamala Harris. For more ...'),\n",
+ " Document(metadata={'name': 'Presidential Election 2024 Live Results: Donald Trump wins - NBC News', 'url': 'https://www.nbcnews.com/politics/2024-elections/president-results'}, page_content='View live election results from the 2024 presidential race as Kamala Harris and Donald Trump face off. See the map of votes by state as results are tallied.'),\n",
+ " Document(metadata={'name': '2024 President Election - Live Results | RealClearPolitics', 'url': 'https://www.realclearpolitics.com/elections/live_results/2024/president/'}, page_content='Latest Election 2024 Results • President • United States • Tuesday November 3rd • Presidential Election Details'),\n",
+ " Document(metadata={'name': 'Live: Presidential Election Results 2024 : NPR', 'url': 'https://apps.npr.org/2024-election-results/'}, page_content='Presidential race ratings are based on NPR analysis. Maps do not shade in until 50% of the estimated vote is in for a given state, to mitigate flutuations in early returns . 2024 General Election Results'),\n",
+ " Document(metadata={'name': '2024 US Presidential Election Results: Live Map - Bloomberg.com', 'url': 'https://www.bloomberg.com/graphics/2024-us-election-results/'}, page_content='US Presidential Election Results November 5, 2024. Bloomberg News is reporting live election results in the presidential race between Democratic Vice President Kamala Harris and her Republican ...'),\n",
+ " Document(metadata={'name': 'Presidential Election Results 2024: Electoral Votes & Map by State ...', 'url': 'https://www.politico.com/2024-election/results/president/'}, page_content='Live 2024 Presidential election results, maps and electoral votes by state. POLITICO’s real-time coverage of 2024 races for President, Senate, House and Governor.'),\n",
+ " Document(metadata={'name': 'US Presidential Election Results 2024 - BBC News', 'url': 'https://www.bbc.com/news/election/2024/us/results'}, page_content='Kamala Harris of the Democrat party has 74,498,303 votes (48.3%) Donald Trump of the Republican party has 76,989,499 votes (49.9%) This map of the US states was filled in as presidential results ...'),\n",
+ " Document(metadata={'name': 'Election Results 2024: Live Map - Races by State - POLITICO', 'url': 'https://www.politico.com/2024-election/results/'}, page_content='Live 2024 election results and maps by state. POLITICO’s real-time coverage of 2024 races for President, Senate, House and Governor.'),\n",
+ " Document(metadata={'name': '2024 U.S. Presidential Election: Live Results and Maps - USA TODAY', 'url': 'https://www.usatoday.com/elections/results/2024-11-05/president'}, page_content='See who is winning in the Nov. 5, 2024 U.S. Presidential election nationwide with real-time results and state-by-state maps.'),\n",
+ " Document(metadata={'name': 'Presidential Election 2024 Live Results: Donald Trump winsNBC News LogoSearchSearchNBC News LogoMSNBC LogoToday Logo', 'url': 'https://www.nbcnews.com/politics/2024-elections/president-results'}, page_content=\"Profile\\n\\nSections\\n\\nLocal\\n\\ntv\\n\\nFeatured\\n\\nMore From NBC\\n\\nFollow NBC News\\n\\nnews Alerts\\n\\nThere are no new alerts at this time\\n\\n2024 President Results: Trump wins\\n==================================\\n\\nDonald Trump has secured more than the 270 Electoral College votes needed to secure the presidency, NBC News projects.\\n\\nRaces to watch\\n--------------\\n\\nAll Presidential races\\n----------------------\\n\\nElection Night Coverage\\n-----------------------\\n\\n### China competition should be top priority for Trump, Sullivan says, as Biden and Xi prepare for final meeting\\n\\n### Jim Himes says 'truth and analysis are not what drive’ Gabbard and Gaetz\\n\\n### Trump praises RFK Jr. in Mar-a-Lago remarks\\n\\n### Trump announces North Dakota Gov. Doug Burgum as his pick for interior secretary\\n\\n### House Ethics Committee cancels meeting at which Gaetz probe was on the agenda\\n\\n### Trump picks former Rep. Doug Collins for veterans affairs secretary\\n\\n### Trump to nominate his criminal defense lawyer for deputy attorney general\\n\\n### From ‘brilliant’ to ‘dangerous’: Mixed reactions roll in after Trump picks RFK Jr. for top health post\\n\\n### Donald Trump Jr. says he played key role in RFK Jr., Tulsi Gabbard picks\\n\\n### Jared Polis offers surprising words of support for RFK Jr. pick for HHS secretary\\n\\nNational early voting\\n---------------------\\n\\n### 88,233,886 mail-in and early in-person votes cast nationally\\n\\n### 65,676,748 mail-in and early in-person votes requested nationally\\n\\nPast Presidential Elections\\n---------------------------\\n\\n### Vote Margin by State in the 2020 Presidential Election\\n\\nCircle size represents the number electoral votes in that state.\\n\\nThe expected vote is the total number of votes that are expected in a given race once all votes are counted. This number is an estimate and is based on several different factors, including information on the number of votes cast early as well as information provided to our vote reporters on Election Day from county election officials. The figure can change as NBC News gathers new information.\\n\\n**Source**: [National Election Pool (NEP)](https://www.nbcnews.com/politics/2024-elections/how-election-data-is-collected )\\n\\n2024 election results\\n---------------------\\n\\nElection Night Coverage\\n-----------------------\\n\\n### China competition should be top priority for Trump, Sullivan says, as Biden and Xi prepare for final meeting\\n\\n### Jim Himes says 'truth and analysis are not what drive’ Gabbard and Gaetz\\n\\n### Trump praises RFK Jr. in Mar-a-Lago remarks\\n\\n©\\xa02024 NBCUniversal Media, LLC\")]"
+ ]
+ },
+ "execution_count": 6,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "query = \"Who won the latest US presidential elections?\"\n",
+ "\n",
+ "retriever.invoke(query)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "dfe8aad4-8626-4330-98a9-7ea1ca5d2e0e",
+ "metadata": {},
+ "source": [
+ "## Use within a chain\n",
+ "\n",
+ "Like other retrievers, LinkupSearchRetriever can be incorporated into LLM applications via [chains](/docs/how_to/sequence/).\n",
+ "\n",
+ "We will need a LLM or chat model:\n",
+ "\n",
+ "```{=mdx}\n",
+ "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
+ "\n",
+ "\n",
+ "```"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "25b647a3-f8f2-4541-a289-7a241e43f9df",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# | output: false\n",
+ "# | echo: false\n",
+ "\n",
+ "from langchain_openai import ChatOpenAI\n",
+ "\n",
+ "llm = ChatOpenAI(model=\"gpt-3.5-turbo-0125\", temperature=0)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "23e11cc9-abd6-4855-a7eb-799f45ca01ae",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.output_parsers import StrOutputParser\n",
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "from langchain_core.runnables import RunnablePassthrough\n",
+ "\n",
+ "prompt = ChatPromptTemplate.from_template(\n",
+ " \"\"\"Answer the question based only on the context provided.\n",
+ "\n",
+ "Context: {context}\n",
+ "\n",
+ "Question: {question}\"\"\"\n",
+ ")\n",
+ "\n",
+ "\n",
+ "def format_docs(docs):\n",
+ " return \"\\n\\n\".join(doc.page_content for doc in docs)\n",
+ "\n",
+ "\n",
+ "chain = (\n",
+ " {\"context\": retriever | format_docs, \"question\": RunnablePassthrough()}\n",
+ " | prompt\n",
+ " | llm\n",
+ " | StrOutputParser()\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "d47c37dd-5c11-416c-a3b6-bec413cd70e8",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'The 3 latest US presidential elections were won by Joe Biden in 2020, Donald Trump in 2016, and Barack Obama in 2012.'"
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "chain.invoke(\"Who won the 3 latest US presidential elections?\")"
+ ]
+ },
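+ {
+ "cell_type": "markdown",
+ "id": "7c2f9a1e-5b3d-4e8a-9f0c-1a2b3c4d5e6f",
+ "metadata": {},
+ "source": [
+ "Because the chain above is a standard Runnable, you can also stream the answer instead of waiting for the full string. A minimal sketch, reusing the same `chain`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8d3e0b2f-6c4e-4f9b-a01d-2b3c4d5e6f70",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# StrOutputParser yields string chunks, so we can print them as they arrive.\n",
+ "for chunk in chain.stream(\"Who won the 3 latest US presidential elections?\"):\n",
+ "    print(chunk, end=\"\", flush=True)"
+ ]
+ },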
+ {
+ "cell_type": "markdown",
+ "id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all LinkupSearchRetriever features and configurations head to the [API reference](https://python.langchain.com/api_reference/linkup/retrievers/linkup_langchain.search_retriever.LinkupSearchRetriever.html)."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.7"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/docs/integrations/retrievers/weaviate-hybrid.ipynb b/docs/docs/integrations/retrievers/weaviate-hybrid.ipynb
deleted file mode 100644
index 9592435b918b1..0000000000000
--- a/docs/docs/integrations/retrievers/weaviate-hybrid.ipynb
+++ /dev/null
@@ -1,297 +0,0 @@
-{
- "cells": [
- {
- "cell_type": "markdown",
- "id": "ce0f17b9",
- "metadata": {},
- "source": [
- "# Weaviate Hybrid Search\n",
- "\n",
- ">[Weaviate](https://weaviate.io/developers/weaviate) is an open-source vector database.\n",
- "\n",
- ">[Hybrid search](https://weaviate.io/blog/hybrid-search-explained) is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It uses the best features of both keyword-based search algorithms with vector search techniques.\n",
- "\n",
- ">The `Hybrid search in Weaviate` uses sparse and dense vectors to represent the meaning and context of search queries and documents.\n",
- "\n",
- "This notebook shows how to use `Weaviate hybrid search` as a LangChain retriever."
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "c307b082",
- "metadata": {},
- "source": [
- "Set up the retriever:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "id": "bba863a2-977c-4add-b5f4-bfc33a80eae5",
- "metadata": {
- "tags": []
- },
- "outputs": [],
- "source": [
- "%pip install --upgrade --quiet weaviate-client"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "id": "c10dd962",
- "metadata": {},
- "outputs": [],
- "source": [
- "import os\n",
- "\n",
- "import weaviate\n",
- "\n",
- "WEAVIATE_URL = os.getenv(\"WEAVIATE_URL\")\n",
- "auth_client_secret = (weaviate.AuthApiKey(api_key=os.getenv(\"WEAVIATE_API_KEY\")),)\n",
- "client = weaviate.Client(\n",
- " url=WEAVIATE_URL,\n",
- " additional_headers={\n",
- " \"X-Openai-Api-Key\": os.getenv(\"OPENAI_API_KEY\"),\n",
- " },\n",
- ")\n",
- "\n",
- "# client.schema.delete_all()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "id": "f47a2bfe",
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_community.retrievers import (\n",
- " WeaviateHybridSearchRetriever,\n",
- ")\n",
- "from langchain_core.documents import Document"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 4,
- "id": "f2eff08e",
- "metadata": {},
- "outputs": [],
- "source": [
- "retriever = WeaviateHybridSearchRetriever(\n",
- " client=client,\n",
- " index_name=\"LangChain\",\n",
- " text_key=\"text\",\n",
- " attributes=[],\n",
- " create_schema_if_missing=True,\n",
- ")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "b68debff",
- "metadata": {},
- "source": [
- "Add some data:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "id": "cd8a7b17",
- "metadata": {},
- "outputs": [],
- "source": [
- "docs = [\n",
- " Document(\n",
- " metadata={\n",
- " \"title\": \"Embracing The Future: AI Unveiled\",\n",
- " \"author\": \"Dr. Rebecca Simmons\",\n",
- " },\n",
- " page_content=\"A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.\",\n",
- " ),\n",
- " Document(\n",
- " metadata={\n",
- " \"title\": \"Symbiosis: Harmonizing Humans and AI\",\n",
- " \"author\": \"Prof. Jonathan K. Sterling\",\n",
- " },\n",
- " page_content=\"Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.\",\n",
- " ),\n",
- " Document(\n",
- " metadata={\"title\": \"AI: The Ethical Quandary\", \"author\": \"Dr. Rebecca Simmons\"},\n",
- " page_content=\"In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.\",\n",
- " ),\n",
- " Document(\n",
- " metadata={\n",
- " \"title\": \"Conscious Constructs: The Search for AI Sentience\",\n",
- " \"author\": \"Dr. Samuel Cortez\",\n",
- " },\n",
- " page_content=\"Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.\",\n",
- " ),\n",
- " Document(\n",
- " metadata={\n",
- " \"title\": \"Invisible Routines: Hidden AI in Everyday Life\",\n",
- " \"author\": \"Prof. Jonathan K. Sterling\",\n",
- " },\n",
- " page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\",\n",
- " ),\n",
- "]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 6,
- "id": "3c5970db",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "['3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be',\n",
- " 'eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907',\n",
- " '7ebbdae7-1061-445f-a046-1989f2343d8f',\n",
- " 'c2ab315b-3cab-467f-b23a-b26ed186318d',\n",
- " 'b83765f2-e5d2-471f-8c02-c3350ade4c4f']"
- ]
- },
- "execution_count": 6,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "retriever.add_documents(docs)"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "6e030694",
- "metadata": {},
- "source": [
- "Do a hybrid search:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 7,
- "id": "bf7dbb98",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),\n",
- " Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),\n",
- " Document(page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\", metadata={}),\n",
- " Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]"
- ]
- },
- "execution_count": 7,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "retriever.invoke(\"the ethical implications of AI\")"
- ]
- },
- {
- "attachments": {},
- "cell_type": "markdown",
- "id": "d0c5bb4d",
- "metadata": {},
- "source": [
- "Do a hybrid search with where filter:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 8,
- "id": "b2bc87c1",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),\n",
- " Document(page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\", metadata={})]"
- ]
- },
- "execution_count": 8,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "retriever.invoke(\n",
- " \"AI integration in society\",\n",
- " where_filter={\n",
- " \"path\": [\"author\"],\n",
- " \"operator\": \"Equal\",\n",
- " \"valueString\": \"Prof. Jonathan K. Sterling\",\n",
- " },\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "id": "5ae2899e",
- "metadata": {},
- "source": [
- "Do a hybrid search with scores:"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 9,
- "id": "4fffd0af",
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={'_additional': {'explainScore': '(bm25)\\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score\\n(hybrid) Document eeb9fd9b-a3ac-4d60-a55b-a63a25d3b907 contributed 0.00819672131147541 to the score', 'score': '0.016393442'}}),\n",
- " Document(page_content=\"In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.\", metadata={'_additional': {'explainScore': '(bm25)\\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.0078125 to the score\\n(hybrid) Document b83765f2-e5d2-471f-8c02-c3350ade4c4f contributed 0.008064516129032258 to the score', 'score': '0.015877016'}}),\n",
- " Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={'_additional': {'explainScore': '(bm25)\\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.008064516129032258 to the score\\n(hybrid) Document 7ebbdae7-1061-445f-a046-1989f2343d8f contributed 0.0078125 to the score', 'score': '0.015877016'}}),\n",
- " Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={'_additional': {'explainScore': '(vector) [-0.0071824766 -0.0006682752 0.001723625 -0.01897258 -0.0045127636 0.0024410256 -0.020503938 0.013768672 0.009520169 -0.037972264]... \\n(hybrid) Document 3a27b0a5-8dbb-4fee-9eba-8b6bc2c252be contributed 0.007936507936507936 to the score', 'score': '0.007936508'}})]"
- ]
- },
- "execution_count": 9,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "retriever.invoke(\n",
- " \"AI integration in society\",\n",
- " score=True,\n",
- ")"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "Python 3 (ipykernel)",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.10.12"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 5
-}
diff --git a/docs/docs/integrations/tools/linkup_search.ipynb b/docs/docs/integrations/tools/linkup_search.ipynb
new file mode 100644
index 0000000000000..83126f0e3cbde
--- /dev/null
+++ b/docs/docs/integrations/tools/linkup_search.ipynb
@@ -0,0 +1,303 @@
+{
+ "cells": [
+ {
+ "cell_type": "raw",
+ "id": "10238e62-3465-4973-9279-606cbb7ccf16",
+ "metadata": {},
+ "source": [
+ "---\n",
+ "sidebar_label: LinkupSearchTool\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "a6f91f20",
+ "metadata": {},
+ "source": [
+ "# LinkupSearchTool\n",
+ "\n",
+ "> [Linkup](https://www.linkup.so/) provides an API to connect LLMs to the web and the Linkup Premium Partner sources.\n",
+ "\n",
+ "This notebook provides a quick overview for getting started with LinkupSearchTool [tool](/docs/concepts/tools/). For detailed documentation of all LinkupSearchTool features and configurations head to the [API reference](https://python.langchain.com/api_reference/linkup/tools/linkup_langchain.search_tool.LinkupSearchTool.html).\n",
+ "\n",
+ "## Overview\n",
+ "\n",
+ "### Integration details\n",
+ "\n",
+ "| Class | Package | Serializable | [JS support](https://js.langchain.com/docs/integrations/tools/linkup_search) | Package latest |\n",
+ "| :--- | :--- | :---: | :---: | :---: |\n",
+ "| [LinkupSearchTool](https://python.langchain.com/api_reference/linkup/tools/linkup_langchain.search_tool.LinkupSearchTool.html) | [langchain-linkup](https://python.langchain.com/api_reference/linkup/index.html) | ❌ | ❌ | ![PyPI - Version](https://img.shields.io/pypi/v/langchain-linkup?style=flat-square&label=%20) |\n",
+ "\n",
+ "## Setup\n",
+ "\n",
+ "To use the Linkup provider, you need a valid API key, which you can find by signing-up [here](https://app.linkup.so/sign-up). To run the following examples you will also need an OpenAI API key."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "fa3318d2-108e-41d1-81b3-01ba4f47e952",
+ "metadata": {},
+ "source": [
+ "### Installation\n",
+ "\n",
+ "This tool lives in the `langchain-linkup` package:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "f85b4089",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "%pip install -qU langchain-linkup"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "b15e9266",
+ "metadata": {},
+ "source": [
+ "### Credentials"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "e0b178a2-8816-40ca-b57c-ccdd86dde9c9",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import getpass\n",
+ "import os\n",
+ "\n",
+ "# if not os.environ.get(\"LINKUP_API_KEY\"):\n",
+ "# os.environ[\"LINKUP_API_KEY\"] = getpass.getpass(\"LINKUP API key:\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bc5ab717-fd27-4c59-b912-bdd099541478",
+ "metadata": {},
+ "source": [
+ "It's also helpful (but not needed) to set up [LangSmith](https://smith.langchain.com/) for best-in-class observability:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a6c2f136-6367-4f1f-825d-ae741e1bf281",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"\n",
+ "# os.environ[\"LANGCHAIN_API_KEY\"] = getpass.getpass()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "1c97218f-f366-479d-8bf7-fe9f2f6df73f",
+ "metadata": {},
+ "source": [
+ "## Instantiation\n",
+ "\n",
+ "Here we show how to instantiate an instance of the LinkupSearchTool tool, with "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "8b3ddfe9-ca79-494c-a7ab-1f56d9407a64",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_linkup import LinkupSearchTool\n",
+ "\n",
+ "tool = LinkupSearchTool(\n",
+ " depth=\"deep\", # \"standard\" or \"deep\"\n",
+ " output_type=\"searchResults\", # \"searchResults\", \"sourcedAnswer\" or \"structured\"\n",
+ " linkup_api_key=None, # API key can be passed here or set as the LINKUP_API_KEY environment variable\n",
+ ")"
+ ]
+ },
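+ {
+ "cell_type": "markdown",
+ "id": "b7a4c9e1-1d2f-4a5b-9c3e-2f6a8d0b4e71",
+ "metadata": {},
+ "source": [
+ "As a variation, here is a minimal sketch of an instantiation that asks Linkup for a synthesized answer with supporting sources, using the `\"sourcedAnswer\"` output type listed in the comments above (same `LINKUP_API_KEY` setup):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c8d5f0a2-3e4b-4c6d-8a1f-5b9e7d2c0f83",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch of an alternative configuration using the options documented above.\n",
+ "answer_tool = LinkupSearchTool(\n",
+ "    depth=\"standard\",  # shallower and faster than \"deep\"\n",
+ "    output_type=\"sourcedAnswer\",  # an answer plus its sources, instead of raw results\n",
+ "    linkup_api_key=None,  # falls back to the LINKUP_API_KEY environment variable\n",
+ ")"
+ ]
+ },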
+ {
+ "cell_type": "markdown",
+ "id": "74147a1a",
+ "metadata": {},
+ "source": [
+ "## Invocation\n",
+ "\n",
+ "### Invoke directly with args\n",
+ "\n",
+ "The tool simply accepts a `query`, which is a string."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "65310a8b-eb0c-4d9e-a618-4f4abe2414fc",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "LinkupSearchResults(results=[LinkupSearchResult(name='US presidential election results 2024: Harris vs. Trump | Live maps ...', url='https://www.reuters.com/graphics/USA-ELECTION/RESULTS/zjpqnemxwvx/', content='Updated results from the 2024 election for the US president. Reuters live coverage of the 2024 US President, Senate, House and state governors races.'), LinkupSearchResult(name='Election 2024: Presidential results - CNN', url='https://www.cnn.com/election/2024/results/president', content='View maps and real-time results for the 2024 US presidential election matchup between former President Donald Trump and Vice President Kamala Harris. For more ...'), LinkupSearchResult(name='Presidential Election 2024 Live Results: Donald Trump wins - NBC News', url='https://www.nbcnews.com/politics/2024-elections/president-results', content='View live election results from the 2024 presidential race as Kamala Harris and Donald Trump face off. See the map of votes by state as results are tallied.'), LinkupSearchResult(name='Live: Presidential Election Results 2024 : NPR', url='https://apps.npr.org/2024-election-results/', content='Presidential race ratings are based on NPR analysis. Maps do not shade in until 50% of the estimated vote is in for a given state, to mitigate flutuations in early returns . 2024 General Election Results'), LinkupSearchResult(name='2024 US Presidential Election Results: Live Map - Bloomberg.com', url='https://www.bloomberg.com/graphics/2024-us-election-results/', content='US Presidential Election Results November 5, 2024. Bloomberg News is reporting live election results in the presidential race between Democratic Vice President Kamala Harris and her Republican ...'), LinkupSearchResult(name='US Presidential Election Results 2024 - BBC News', url='https://www.bbc.com/news/election/2024/us/results', content='Kamala Harris of the Democrat party has 74,470,899 votes (48.3%) Donald Trump of the Republican party has 76,971,602 votes (49.9%) This map of the US states was filled in as presidential results ...'), LinkupSearchResult(name='Election Results 2024: Live Map - Races by State - POLITICO', url='https://www.politico.com/2024-election/results/', content='Live 2024 election results and maps by state. POLITICO’s real-time coverage of 2024 races for President, Senate, House and Governor.'), LinkupSearchResult(name='Presidential Election Results 2024: Electoral Votes & Map by State ...', url='https://www.politico.com/2024-election/results/president/', content='Live 2024 Presidential election results, maps and electoral votes by state. POLITICO’s real-time coverage of 2024 races for President, Senate, House and Governor.'), LinkupSearchResult(name='2024 US Presidential Election Results: Live Map - ABC News', url='https://abcnews.go.com/Elections/2024-us-presidential-election-results-live-map/', content='View live updates on electoral votes by state for presidential candidates Joe Biden and Donald Trump on ABC News. Senate, House, and Governor Election results also available at ABCNews.com'), LinkupSearchResult(name='US Presidential Election Results 2024 - BBC News', url='https://www.bbc.co.uk/news/election/2024/us/results', content='Follow the 2024 US presidential election results as they come in with BBC News. 
Find out if Trump or Harris is ahead as well as detailed state-by-state results.'), LinkupSearchResult(name='Presidential Election 2024 Live Results: Donald Trump winsNBC News LogoSearchSearchNBC News LogoMSNBC LogoToday Logo', url='https://www.nbcnews.com/politics/2024-elections/president-results', content=\"Profile\\n\\nSections\\n\\nLocal\\n\\ntv\\n\\nFeatured\\n\\nMore From NBC\\n\\nFollow NBC News\\n\\nnews Alerts\\n\\nThere are no new alerts at this time\\n\\n2024 President Results: Trump wins\\n==================================\\n\\nDonald Trump has secured more than the 270 Electoral College votes needed to secure the presidency, NBC News projects.\\n\\nRaces to watch\\n--------------\\n\\nAll Presidential races\\n----------------------\\n\\nElection Night Coverage\\n-----------------------\\n\\n### China competition should be top priority for Trump, Sullivan says, as Biden and Xi prepare for final meeting\\n\\n### Jim Himes says 'truth and analysis are not what drive’ Gabbard and Gaetz\\n\\n### Trump praises RFK Jr. in Mar-a-Lago remarks\\n\\n### Trump announces North Dakota Gov. Doug Burgum as his pick for interior secretary\\n\\n### House Ethics Committee cancels meeting at which Gaetz probe was on the agenda\\n\\n### Trump picks former Rep. Doug Collins for veterans affairs secretary\\n\\n### Trump to nominate his criminal defense lawyer for deputy attorney general\\n\\n### From ‘brilliant’ to ‘dangerous’: Mixed reactions roll in after Trump picks RFK Jr. for top health post\\n\\n### Donald Trump Jr. says he played key role in RFK Jr., Tulsi Gabbard picks\\n\\n### Jared Polis offers surprising words of support for RFK Jr. pick for HHS secretary\\n\\nNational early voting\\n---------------------\\n\\n### 88,233,886 mail-in and early in-person votes cast nationally\\n\\n### 65,676,748 mail-in and early in-person votes requested nationally\\n\\nPast Presidential Elections\\n---------------------------\\n\\n### Vote Margin by State in the 2020 Presidential Election\\n\\nCircle size represents the number electoral votes in that state.\\n\\nThe expected vote is the total number of votes that are expected in a given race once all votes are counted. This number is an estimate and is based on several different factors, including information on the number of votes cast early as well as information provided to our vote reporters on Election Day from county election officials. The figure can change as NBC News gathers new information.\\n\\n**Source**: [National Election Pool (NEP)](https://www.nbcnews.com/politics/2024-elections/how-election-data-is-collected )\\n\\n2024 election results\\n---------------------\\n\\nElection Night Coverage\\n-----------------------\\n\\n### China competition should be top priority for Trump, Sullivan says, as Biden and Xi prepare for final meeting\\n\\n### Jim Himes says 'truth and analysis are not what drive’ Gabbard and Gaetz\\n\\n### Trump praises RFK Jr. in Mar-a-Lago remarks\\n\\n©\\xa02024 NBCUniversal Media, LLC\")])"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "tool.invoke({\"query\": \"Who won the latest US presidential elections?\"})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "d6e73897",
+ "metadata": {},
+ "source": [
+ "### Invoke with ToolCall\n",
+ "\n",
+ "We can also invoke the tool with a model-generated ToolCall, in which case a ToolMessage will be returned:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "id": "f90e33a7",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "ToolMessage(content='results=[LinkupSearchResult(name=\\'US presidential election results 2024: Harris vs. Trump | Live maps ...\\', url=\\'https://www.reuters.com/graphics/USA-ELECTION/RESULTS/zjpqnemxwvx/\\', content=\\'Updated results from the 2024 election for the US president. Reuters live coverage of the 2024 US President, Senate, House and state governors races.\\'), LinkupSearchResult(name=\\'Election 2024: Presidential results - CNN\\', url=\\'https://www.cnn.com/election/2024/results/president\\', content=\\'View maps and real-time results for the 2024 US presidential election matchup between former President Donald Trump and Vice President Kamala Harris. For more ...\\'), LinkupSearchResult(name=\\'Presidential Election 2024 Live Results: Donald Trump wins - NBC News\\', url=\\'https://www.nbcnews.com/politics/2024-elections/president-results\\', content=\\'View live election results from the 2024 presidential race as Kamala Harris and Donald Trump face off. See the map of votes by state as results are tallied.\\'), LinkupSearchResult(name=\\'2024 US Presidential Election Results: Live Map - Bloomberg.com\\', url=\\'https://www.bloomberg.com/graphics/2024-us-election-results/\\', content=\\'US Presidential Election Results November 5, 2024. Bloomberg News is reporting live election results in the presidential race between Democratic Vice President Kamala Harris and her Republican ...\\'), LinkupSearchResult(name=\\'US Presidential Election Results 2024 - BBC News\\', url=\\'https://www.bbc.com/news/election/2024/us/results\\', content=\\'Kamala Harris of the Democrat party has 74,498,303 votes (48.3%) Donald Trump of the Republican party has 76,989,499 votes (49.9%) This map of the US states was filled in as presidential results ...\\'), LinkupSearchResult(name=\\'Presidential Election Results 2024: Electoral Votes & Map by State ...\\', url=\\'https://www.politico.com/2024-election/results/president/\\', content=\\'Live 2024 Presidential election results, maps and electoral votes by state. POLITICO’s real-time coverage of 2024 races for President, Senate, House and Governor.\\'), LinkupSearchResult(name=\\'2024 U.S. Election: Live Results and Maps - USA TODAY\\', url=\\'https://www.usatoday.com/elections/results/2024-11-05\\', content=\\'See who is winning races in the Nov. 5, 2024 U.S. Election with real-time results and state-by-state maps.\\'), LinkupSearchResult(name=\\'Donald Trump wins US presidency - US election 2024 complete results map\\', url=\\'https://www.aljazeera.com/us-election-2024/results/\\', content=\\'Complete, state-by-state breakdown of the 2024 US presidential, Senate, House and Governor results\\'), LinkupSearchResult(name=\\'US Presidential Election Results 2024 - BBC News\\', url=\\'https://www.bbc.co.uk/news/election/2024/us/results\\', content=\\'Follow the 2024 US presidential election results as they come in with BBC News. Find out if Trump or Harris is ahead as well as detailed state-by-state results.\\'), LinkupSearchResult(name=\\'Election Results 2024: Live Map - Races by State - POLITICO\\', url=\\'https://www.politico.com/2024-election/results/\\', content=\\'Live 2024 election results and maps by state. 
POLITICO’s real-time coverage of 2024 races for President, Senate, House and Governor.\\'), LinkupSearchResult(name=\\'Presidential Election 2024 Live Results: Donald Trump winsNBC News LogoSearchSearchNBC News LogoMSNBC LogoToday Logo\\', url=\\'https://www.nbcnews.com/politics/2024-elections/president-results\\', content=\"Profile\\\\n\\\\nSections\\\\n\\\\nLocal\\\\n\\\\ntv\\\\n\\\\nFeatured\\\\n\\\\nMore From NBC\\\\n\\\\nFollow NBC News\\\\n\\\\nnews Alerts\\\\n\\\\nThere are no new alerts at this time\\\\n\\\\n2024 President Results: Trump wins\\\\n==================================\\\\n\\\\nDonald Trump has secured more than the 270 Electoral College votes needed to secure the presidency, NBC News projects.\\\\n\\\\nRaces to watch\\\\n--------------\\\\n\\\\nAll Presidential races\\\\n----------------------\\\\n\\\\nElection Night Coverage\\\\n-----------------------\\\\n\\\\n### China competition should be top priority for Trump, Sullivan says, as Biden and Xi prepare for final meeting\\\\n\\\\n### Jim Himes says \\'truth and analysis are not what drive’ Gabbard and Gaetz\\\\n\\\\n### Trump praises RFK Jr. in Mar-a-Lago remarks\\\\n\\\\n### Trump announces North Dakota Gov. Doug Burgum as his pick for interior secretary\\\\n\\\\n### House Ethics Committee cancels meeting at which Gaetz probe was on the agenda\\\\n\\\\n### Trump picks former Rep. Doug Collins for veterans affairs secretary\\\\n\\\\n### Trump to nominate his criminal defense lawyer for deputy attorney general\\\\n\\\\n### From ‘brilliant’ to ‘dangerous’: Mixed reactions roll in after Trump picks RFK Jr. for top health post\\\\n\\\\n### Donald Trump Jr. says he played key role in RFK Jr., Tulsi Gabbard picks\\\\n\\\\n### Jared Polis offers surprising words of support for RFK Jr. pick for HHS secretary\\\\n\\\\nNational early voting\\\\n---------------------\\\\n\\\\n### 88,233,886 mail-in and early in-person votes cast nationally\\\\n\\\\n### 65,676,748 mail-in and early in-person votes requested nationally\\\\n\\\\nPast Presidential Elections\\\\n---------------------------\\\\n\\\\n### Vote Margin by State in the 2020 Presidential Election\\\\n\\\\nCircle size represents the number electoral votes in that state.\\\\n\\\\nThe expected vote is the total number of votes that are expected in a given race once all votes are counted. This number is an estimate and is based on several different factors, including information on the number of votes cast early as well as information provided to our vote reporters on Election Day from county election officials. The figure can change as NBC News gathers new information.\\\\n\\\\n**Source**: [National Election Pool (NEP)](https://www.nbcnews.com/politics/2024-elections/how-election-data-is-collected )\\\\n\\\\n2024 election results\\\\n---------------------\\\\n\\\\nElection Night Coverage\\\\n-----------------------\\\\n\\\\n### China competition should be top priority for Trump, Sullivan says, as Biden and Xi prepare for final meeting\\\\n\\\\n### Jim Himes says \\'truth and analysis are not what drive’ Gabbard and Gaetz\\\\n\\\\n### Trump praises RFK Jr. in Mar-a-Lago remarks\\\\n\\\\n©\\\\xa02024 NBCUniversal Media, LLC\")]', name='linkup', tool_call_id='1')"
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# This is usually generated by a model, but we'll create a tool call directly for demo purposes.\n",
+ "model_generated_tool_call = {\n",
+ " \"args\": {\"query\": \"Who won the latest US presidential elections?\"},\n",
+ " \"id\": \"1\",\n",
+ " \"name\": tool.name,\n",
+ " \"type\": \"tool_call\",\n",
+ "}\n",
+ "tool.invoke(model_generated_tool_call)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "659f9fbd-6fcf-445f-aa8c-72d8e60154bd",
+ "metadata": {},
+ "source": [
+ "## Chaining\n",
+ "\n",
+ "We can use our tool in a chain by first binding it to a [tool-calling model](/docs/how_to/tool_calling/) and then calling it:\n",
+ "\n",
+ "import ChatModelTabs from \"@theme/ChatModelTabs\";\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "af3123ad-7a02-40e5-b58e-7d56e23e5830",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# | output: false\n",
+ "# | echo: false\n",
+ "\n",
+ "# !pip install -qU langchain langchain-openai\n",
+ "from langchain.chat_models import init_chat_model\n",
+ "\n",
+ "llm = init_chat_model(model=\"gpt-4o\", model_provider=\"openai\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "fdbf35b5-3aaf-4947-9ec6-48c21533fb95",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_JcHj0XLARWRnwrrLhUoBjOV1', 'function': {'arguments': '{\"query\":\"2016 US presidential election winner\"}', 'name': 'linkup'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 1037, 'total_tokens': 1047, 'completion_tokens_details': {'audio_tokens': 0, 'reasoning_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_831e067d82', 'finish_reason': 'stop', 'logprobs': None}, id='run-cd7642ed-4509-4c96-8934-20bd0b986c3f-0', tool_calls=[{'name': 'linkup', 'args': {'query': '2016 US presidential election winner'}, 'id': 'call_JcHj0XLARWRnwrrLhUoBjOV1', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1037, 'output_tokens': 10, 'total_tokens': 1047, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "from langchain_core.runnables import RunnableConfig, chain\n",
+ "\n",
+ "prompt = ChatPromptTemplate(\n",
+ " [\n",
+ " (\"system\", \"You are a helpful assistant.\"),\n",
+ " (\"human\", \"{user_input}\"),\n",
+ " (\"placeholder\", \"{messages}\"),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "# specifying tool_choice will force the model to call this tool.\n",
+ "llm_with_tools = llm.bind_tools([tool], tool_choice=tool.name)\n",
+ "\n",
+ "llm_chain = prompt | llm_with_tools\n",
+ "\n",
+ "\n",
+ "@chain\n",
+ "def tool_chain(user_input: str, config: RunnableConfig):\n",
+ " input_ = {\"user_input\": user_input}\n",
+ " ai_msg = llm_chain.invoke(input_, config=config)\n",
+ " tool_msgs = tool.batch(ai_msg.tool_calls, config=config)\n",
+ " return llm_chain.invoke({**input_, \"messages\": [ai_msg, *tool_msgs]}, config=config)\n",
+ "\n",
+ "\n",
+ "tool_chain.invoke(\"Who won the 2016 US presidential elections?\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4ac8146c",
+ "metadata": {},
+ "source": [
+ "## API reference\n",
+ "\n",
+ "For detailed documentation of all LinkupSearchTool features and configurations head to the [API reference](https://python.langchain.com/api_reference/linkup/tools/linkup_langchain.search_tool.LinkupSearchTool.html)."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.7"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/docs/docs/integrations/vectorstores/sqlserver.ipynb b/docs/docs/integrations/vectorstores/sqlserver.ipynb
new file mode 100644
index 0000000000000..2e6ee2a33c950
--- /dev/null
+++ b/docs/docs/integrations/vectorstores/sqlserver.ipynb
@@ -0,0 +1,959 @@
+{
+ "cells": [
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "3fe4f4a9-8810-428c-90cb-147ad8563025",
+ "language": "python"
+ },
+ "source": [
+ "# SQLServer "
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "f791e7da-9710-4f15-93f0-6ea61840a25f",
+ "language": "python"
+ },
+ "source": [
+ ">Azure SQL provides a dedicated [Vector data type](https:\\learn.microsoft.com\\sql\\t-sql\\data-types\\vector-data-type?view=azuresqldb-current&viewFallbackFrom=sql-server-ver16&tabs=csharp-sample) that simplifies the creation, storage, and querying of vector embeddings directly within a relational database. This eliminates the need for separate vector databases and related integrations, increasing the security of your solutions while reducing the overall complexity.\n",
+ "\n",
+ "Azure SQL is a robust service that combines scalability, security, and high availability, providing all the benefits of a modern database solution. It leverages a sophisticated query optimizer and enterprise features to perform vector similarity searches alongside traditional SQL queries, enhancing data analysis and decision-making. \n",
+ " \n",
+ "Read more on using [Intelligent applications with Azure SQL Database](https://learn.microsoft.com/azure/azure-sql/database/ai-artificial-intelligence-intelligent-applications?view=azuresql)\n",
+ "\n",
+ "This notebook shows you how to leverage this integrated SQL [vector database](https://devblogs.microsoft.com/azure-sql/exciting-announcement-public-preview-of-native-vector-support-in-azure-sql-database/) to store documents and perform vector search queries using Cosine (cosine distance), L2 (Euclidean distance), and IP (inner product) to locate documents close to the query vectors"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "320f08b1-2fac-46fe-8e3a-273b6bf6ca8d",
+ "language": "python"
+ },
+ "source": [
+ "## Setup\n",
+ " \n",
+ "Install the `langchain-sqlserver` python package.\n",
+ "\n",
+ "The code lives in an integration package called:[langchain-sqlserver](https:\\github.com\\langchain-ai\\langchain-azure\\tree\\main\\libs\\sqlserver)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "5fa6ff09-79d5-4023-9005-91a217f91a5b",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install langchain-sqlserver==0.1.1"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Credentials\n",
+ "\n",
+ "There are no credentials needed to run this notebook, just make sure you downloaded the `langchain_sqlserver` package\n",
+ "If you want to get best in-class automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# os.environ[\"LANGSMITH_API_KEY\"] = getpass.getpass(\"Enter your LangSmith API key: \")\n",
+ "# os.environ[\"LANGSMITH_TRACING\"] = \"true\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Initialization"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "4113da9c-b0fe-4e01-bc06-cafe05634fb6",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "from langchain_sqlserver import SQLServer_VectorStore"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "458deaef-f985-4efe-957c-7840509fdfa3",
+ "language": "python"
+ },
+ "source": [
+ "Find your Azure SQL DB connection string in the Azure portal under your database settings\n",
+ "\n",
+ "For more info: [Connect to Azure SQL DB - Python](https:\\learn.microsoft.com\\en-us\\azure\\azure-sql\\database\\connect-query-python?view=azuresql)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "azdata_cell_guid": "d3439463-899e-48aa-88a1-ba6bdedbdc9d",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "\n",
+ "import pyodbc\n",
+ "\n",
+ "# Define your SQLServer Connection String\n",
+ "_CONNECTION_STRING = (\n",
+ " \"Driver={ODBC Driver 18 for SQL Server};\"\n",
+ " \"Server=.database.windows.net,1433;\"\n",
+ " \"Database=test;\"\n",
+ " \"TrustServerCertificate=yes;\"\n",
+ " \"Connection Timeout=60;\"\n",
+ " \"LongAsMax=yes;\"\n",
+ ")\n",
+ "\n",
+ "# Connection string can vary:\n",
+ "# \"mssql+pyodbc://:/?driver=ODBC+Driver+18+for+SQL+Server\" -> With Username and Password specified\n",
+ "# \"mssql+pyodbc:///?driver=ODBC+Driver+18+for+SQL+Server&Trusted_connection=yes\" -> Uses Trusted connection\n",
+ "# \"mssql+pyodbc:///?driver=ODBC+Driver+18+for+SQL+Server\" -> Uses EntraID connection\n",
+ "# \"mssql+pyodbc:///?driver=ODBC+Driver+18+for+SQL+Server&Trusted_connection=no\" -> Uses EntraID connection"
+ ]
+ },
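+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the native vector support from the introduction concrete, here is a minimal sketch of the raw T-SQL that the `VECTOR` type enables, run over the `pyodbc` connection defined above. The `demo_vectors` table is a hypothetical name used only for illustration, and the syntax follows the public preview linked earlier; the LangChain vector store below creates and manages its own tables, so you never need to write this yourself:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Illustration only: a hypothetical table using the native VECTOR type.\n",
+ "conn = pyodbc.connect(_CONNECTION_STRING)\n",
+ "cursor = conn.cursor()\n",
+ "cursor.execute(\"CREATE TABLE demo_vectors (id INT PRIMARY KEY, v VECTOR(3))\")\n",
+ "cursor.execute(\"INSERT INTO demo_vectors VALUES (1, CAST('[1.0, 0.0, 0.0]' AS VECTOR(3)))\")\n",
+ "# VECTOR_DISTANCE supports 'cosine', 'euclidean', and 'dot' metrics,\n",
+ "# matching the Cosine / L2 / IP strategies described above.\n",
+ "cursor.execute(\n",
+ "    \"SELECT id, VECTOR_DISTANCE('cosine', v, CAST('[0.9, 0.1, 0.0]' AS VECTOR(3))) AS d \"\n",
+ "    \"FROM demo_vectors ORDER BY d\"\n",
+ ")\n",
+ "print(cursor.fetchall())\n",
+ "cursor.execute(\"DROP TABLE demo_vectors\")\n",
+ "conn.commit()"
+ ]
+ },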
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "dcbdafc3-71ec-4e73-b768-ffb49dae2aee",
+ "language": "python"
+ },
+ "source": [
+ "In this example we use Azure OpenAI to generate embeddings , however you can use different embeddings provided in LangChain.\n",
+ "\n",
+ "You can deploy a version of Azure OpenAI instance on Azure Portal following this [guide](https:\\learn.microsoft.com\\en-us\\azure\\ai-services\\openai\\how-to\\create-resource?pivots=web-portal). Once you have your instance running, make sure you have the name of your instance and key. You can find the key in the Azure Portal, under the \"Keys and Endpoint\" section of your instance."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "a65110ff-cfa4-498c-bb7a-d937c04872c0",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install langchain-openai"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "3bd306b1-f346-4c01-93f4-039827e4f2e6",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "# Import the necessary Libraries\n",
+ "from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings\n",
+ "\n",
+ "# Set your AzureOpenAI details\n",
+ "azure_endpoint = \"https://.openai.azure.com/\"\n",
+ "azure_deployment_name_embedding = \"text-embedding-3-small\"\n",
+ "azure_deployment_name_chatcompletion = \"chatcompletion\"\n",
+ "azure_api_version = \"2023-05-15\"\n",
+ "azure_api_key = \"YOUR_KEY\"\n",
+ "\n",
+ "\n",
+ "# Use AzureChatOpenAI for chat completions\n",
+ "llm = AzureChatOpenAI(\n",
+ " azure_endpoint=azure_endpoint,\n",
+ " azure_deployment=azure_deployment_name_chatcompletion,\n",
+ " openai_api_version=azure_api_version,\n",
+ " openai_api_key=azure_api_key,\n",
+ ")\n",
+ "\n",
+ "# Use AzureOpenAIEmbeddings for embeddings\n",
+ "embeddings = AzureOpenAIEmbeddings(\n",
+ " azure_endpoint=azure_endpoint,\n",
+ " azure_deployment=azure_deployment_name_embedding,\n",
+ " openai_api_version=azure_api_version,\n",
+ " openai_api_key=azure_api_key,\n",
+ ")"
+ ]
+ },
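+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As noted above, any embeddings integration available in LangChain can stand in for Azure OpenAI here. For example, a minimal sketch with the standard `OpenAIEmbeddings` class from the same `langchain-openai` package, assuming an `OPENAI_API_KEY` environment variable is set:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_openai import OpenAIEmbeddings\n",
+ "\n",
+ "# Assumes OPENAI_API_KEY is set; swap this in for `embeddings` above\n",
+ "# if you are not running on Azure OpenAI.\n",
+ "openai_embeddings = OpenAIEmbeddings(model=\"text-embedding-3-small\")"
+ ]
+ },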
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "f1f10145-06db-4cab-853f-9eb3b6fa8ada",
+ "language": "python"
+ },
+ "source": [
+ "## Manage vector store "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "azdata_cell_guid": "c4033f67-bea2-4859-af4d-b41f3b929978",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "from langchain_community.vectorstores.utils import DistanceStrategy\n",
+ "from langchain_sqlserver import SQLServer_VectorStore\n",
+ "\n",
+ "# Initialize the vector store\n",
+ "vector_store = SQLServer_VectorStore(\n",
+ " connection_string=_CONNECTION_STRING,\n",
+ " distance_strategy=DistanceStrategy.COSINE, # optional, if not provided, defaults to COSINE\n",
+ " embedding_function=embeddings, # you can use different embeddings provided in LangChain\n",
+ " embedding_length=1536,\n",
+ " table_name=\"langchain_test_table\", # using table with a custom name\n",
+ ")"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "525f611b-2bd5-4fd4-9192-93d588c5ad0b",
+ "language": "python"
+ },
+ "source": [
+ "### Add items to vector store"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "azdata_cell_guid": "6410813d-0ff1-44dd-b6bb-32fd74772e4f",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "## we will use some artificial data for this example\n",
+ "query = [\n",
+ " \"I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.\",\n",
+ " \"The candy is just red , No flavor . Just plan and chewy . I would never buy them again\",\n",
+ " \"Arrived in 6 days and were so stale i could not eat any of the 6 bags!!\",\n",
+ " \"Got these on sale for roughly 25 cents per cup, which is half the price of my local grocery stores, plus they rarely stock the spicy flavors. These things are a GREAT snack for my office where time is constantly crunched and sometimes you can't escape for a real meal. This is one of my favorite flavors of Instant Lunch and will be back to buy every time it goes on sale.\",\n",
+ " \"If you are looking for a less messy version of licorice for the children, then be sure to try these! They're soft, easy to chew, and they don't get your hands all sticky and gross in the car, in the summer, at the beach, etc. We love all the flavos and sometimes mix these in with the chocolate to have a very nice snack! Great item, great price too, highly recommend!\",\n",
+ " \"We had trouble finding this locally - delivery was fast, no more hunting up and down the flour aisle at our local grocery stores.\",\n",
+ " \"Too much of a good thing? We worked this kibble in over time, slowly shifting the percentage of Felidae to national junk-food brand until the bowl was all natural. By this time, the cats couldn't keep it in or down. What a mess. We've moved on.\",\n",
+ " \"Hey, the description says 360 grams - that is roughly 13 ounces at under $4.00 per can. No way - that is the approximate price for a 100 gram can.\",\n",
+ " \"The taste of these white cheddar flat breads is like a regular cracker - which is not bad, except that I bought them because I wanted a cheese taste.
What was a HUGE disappointment? How misleading the packaging of the box is. The photo on the box (I bought these in store) makes it look like it is full of long flatbreads (expanding the length and width of the box). Wrong! The plastic tray that holds the crackers is about 2\"\n",
+ " \" smaller all around - leaving you with about 15 or so small flatbreads.
What is also bad about this is that the company states they use biodegradable and eco-friendly packaging. FAIL! They used a HUGE box for a ridiculously small amount of crackers. Not ecofriendly at all.
Would I buy these again? No - I feel ripped off. The other crackers (like Sesame Tarragon) give you a little
more bang for your buck and have more flavor.\",\n",
+ " \"I have used this product in smoothies for my son and he loves it. Additionally, I use this oil in the shower as a skin conditioner and it has made my skin look great. Some of the stretch marks on my belly has disappeared quickly. Highly recommend!!!\",\n",
+ " \"Been taking Coconut Oil for YEARS. This is the best on the retail market. I wish it was in glass, but this is the one.\",\n",
+ "]\n",
+ "\n",
+ "query_metadata = [\n",
+ " {\"id\": 1, \"summary\": \"Good Quality Dog Food\"},\n",
+ " {\"id\": 8, \"summary\": \"Nasty No flavor\"},\n",
+ " {\"id\": 4, \"summary\": \"stale product\"},\n",
+ " {\"id\": 11, \"summary\": \"Great value and convenient ramen\"},\n",
+ " {\"id\": 5, \"summary\": \"Great for the kids!\"},\n",
+ " {\"id\": 2, \"summary\": \"yum falafel\"},\n",
+ " {\"id\": 9, \"summary\": \"Nearly killed the cats\"},\n",
+ " {\"id\": 6, \"summary\": \"Price cannot be correct\"},\n",
+ " {\"id\": 3, \"summary\": \"Taste is neutral, quantity is DECEITFUL!\"},\n",
+ " {\"id\": 7, \"summary\": \"This stuff is great\"},\n",
+ " {\"id\": 10, \"summary\": \"The reviews don't lie\"},\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {
+ "azdata_cell_guid": "03e8161a-6cdd-415d-8261-b6b99982726c",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[1, 8, 4, 11, 5, 2, 9, 6, 3, 7, 10]"
+ ]
+ },
+ "execution_count": 19,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "vector_store.add_texts(texts=query, metadatas=query_metadata)"
+ ]
+ },
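+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you ever need to remove entries, LangChain vector stores expose an optional `delete` method keyed by the ids that `add_texts` returned above. A minimal sketch, left commented out so that the similarity searches below still see all eleven reviews (this assumes the integration implements `delete`):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Remove two entries by the ids returned from add_texts above.\n",
+ "# vector_store.delete(ids=[1, 8])"
+ ]
+ },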
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "a2838ad1-64a1-409e-b97d-7883b42a0b33",
+ "language": "python"
+ },
+ "source": [
+ "## Query vector store\n",
+ "Once your vector store has been created and the relevant documents have been added you will most likely wish to query it during the running of your chain or agent.\n",
+ "\n",
+ "Performing a simple similarity search can be done as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "metadata": {
+ "azdata_cell_guid": "1baa2857-167e-4873-ad9c-e67649ef39bf",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[Document(metadata={'id': 1, 'summary': 'Good Quality Dog Food'}, page_content='I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.'), Document(metadata={'id': 7, 'summary': 'This stuff is great'}, page_content='I have used this product in smoothies for my son and he loves it. Additionally, I use this oil in the shower as a skin conditioner and it has made my skin look great. Some of the stretch marks on my belly has disappeared quickly. Highly recommend!!!'), Document(metadata={'id': 5, 'summary': 'Great for the kids!'}, page_content=\"If you are looking for a less messy version of licorice for the children, then be sure to try these! They're soft, easy to chew, and they don't get your hands all sticky and gross in the car, in the summer, at the beach, etc. We love all the flavos and sometimes mix these in with the chocolate to have a very nice snack! Great item, great price too, highly recommend!\")]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Perform a similarity search between the embedding of the query and the embeddings of the documents\n",
+ "simsearch_result = vector_store.similarity_search(\"Good reviews\", k=3)\n",
+ "print(simsearch_result)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "f92f0a1b-19aa-46d1-ad1a-c2e52f9114d0",
+ "language": "python"
+ },
+ "source": [
+ "### Filtering Support:\n",
+ "\n",
+ "The vectorstore supports a set of filters that can be applied against the metadata fields of the documents.This feature enables developers and data analysts to refine their queries, ensuring that the search results are accurately aligned with their needs. By applying filters based on specific metadata attributes, users can limit the scope of their searches, concentrating only on the most relevant data subsets."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "metadata": {
+ "azdata_cell_guid": "24fabd60-0b29-4ed9-9d5e-38c68fe05dfa",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[Document(metadata={'id': 7, 'summary': 'This stuff is great'}, page_content='I have used this product in smoothies for my son and he loves it. Additionally, I use this oil in the shower as a skin conditioner and it has made my skin look great. Some of the stretch marks on my belly has disappeared quickly. Highly recommend!!!'), Document(metadata={'id': 5, 'summary': 'Great for the kids!'}, page_content=\"If you are looking for a less messy version of licorice for the children, then be sure to try these! They're soft, easy to chew, and they don't get your hands all sticky and gross in the car, in the summer, at the beach, etc. We love all the flavos and sometimes mix these in with the chocolate to have a very nice snack! Great item, great price too, highly recommend!\"), Document(metadata={'id': 3, 'summary': 'Taste is neutral, quantity is DECEITFUL!'}, page_content='The taste of these white cheddar flat breads is like a regular cracker - which is not bad, except that I bought them because I wanted a cheese taste.
What was a HUGE disappointment? How misleading the packaging of the box is. The photo on the box (I bought these in store) makes it look like it is full of long flatbreads (expanding the length and width of the box). Wrong! The plastic tray that holds the crackers is about 2 smaller all around - leaving you with about 15 or so small flatbreads.
What is also bad about this is that the company states they use biodegradable and eco-friendly packaging. FAIL! They used a HUGE box for a ridiculously small amount of crackers. Not ecofriendly at all.
Would I buy these again? No - I feel ripped off. The other crackers (like Sesame Tarragon) give you a little
more bang for your buck and have more flavor.')]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# hybrid search -> filter for cases where id not equal to 1.\n",
+ "hybrid_simsearch_result = vector_store.similarity_search(\n",
+ " \"Good reviews\", k=3, filter={\"id\": {\"$ne\": 1}}\n",
+ ")\n",
+ "print(hybrid_simsearch_result)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "449c4cde-e303-4856-8deb-6e6ad56f9501",
+ "language": "python"
+ },
+ "source": [
+ "### Similarity Search with Score:\n",
+ "If you want to execute a similarity search and receive the corresponding scores you can run:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "metadata": {
+ "azdata_cell_guid": "382fa5d4-6da1-46c1-987f-6d0ec050be99",
+ "language": "python",
+ "tags": [
+ "hide_input"
+ ]
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[(Document(metadata={'id': 3, 'summary': 'Taste is neutral, quantity is DECEITFUL!'}, page_content='The taste of these white cheddar flat breads is like a regular cracker - which is not bad, except that I bought them because I wanted a cheese taste.
What was a HUGE disappointment? How misleading the packaging of the box is. The photo on the box (I bought these in store) makes it look like it is full of long flatbreads (expanding the length and width of the box). Wrong! The plastic tray that holds the crackers is about 2 smaller all around - leaving you with about 15 or so small flatbreads.
What is also bad about this is that the company states they use biodegradable and eco-friendly packaging. FAIL! They used a HUGE box for a ridiculously small amount of crackers. Not ecofriendly at all.
Would I buy these again? No - I feel ripped off. The other crackers (like Sesame Tarragon) give you a little
more bang for your buck and have more flavor.'), 0.651870006770711), (Document(metadata={'id': 8, 'summary': 'Nasty No flavor'}, page_content='The candy is just red , No flavor . Just plan and chewy . I would never buy them again'), 0.6908952973052638), (Document(metadata={'id': 4, 'summary': 'stale product'}, page_content='Arrived in 6 days and were so stale i could not eat any of the 6 bags!!'), 0.7360955776468822), (Document(metadata={'id': 1, 'summary': 'Good Quality Dog Food'}, page_content='I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.'), 0.7408823529514486), (Document(metadata={'id': 9, 'summary': 'Nearly killed the cats'}, page_content=\"Too much of a good thing? We worked this kibble in over time, slowly shifting the percentage of Felidae to national junk-food brand until the bowl was all natural. By this time, the cats couldn't keep it in or down. What a mess. We've moved on.\"), 0.782995248991772), (Document(metadata={'id': 7, 'summary': 'This stuff is great'}, page_content='I have used this product in smoothies for my son and he loves it. Additionally, I use this oil in the shower as a skin conditioner and it has made my skin look great. Some of the stretch marks on my belly has disappeared quickly. Highly recommend!!!'), 0.7912681479906212), (Document(metadata={'id': 2, 'summary': 'yum falafel'}, page_content='We had trouble finding this locally - delivery was fast, no more hunting up and down the flour aisle at our local grocery stores.'), 0.809213468778896), (Document(metadata={'id': 10, 'summary': \"The reviews don't lie\"}, page_content='Been taking Coconut Oil for YEARS. This is the best on the retail market. I wish it was in glass, but this is the one.'), 0.8281482301097155), (Document(metadata={'id': 5, 'summary': 'Great for the kids!'}, page_content=\"If you are looking for a less messy version of licorice for the children, then be sure to try these! They're soft, easy to chew, and they don't get your hands all sticky and gross in the car, in the summer, at the beach, etc. We love all the flavos and sometimes mix these in with the chocolate to have a very nice snack! Great item, great price too, highly recommend!\"), 0.8283754326400574), (Document(metadata={'id': 6, 'summary': 'Price cannot be correct'}, page_content='Hey, the description says 360 grams - that is roughly 13 ounces at under $4.00 per can. No way - that is the approximate price for a 100 gram can.'), 0.8323967822635847), (Document(metadata={'id': 11, 'summary': 'Great value and convenient ramen'}, page_content=\"Got these on sale for roughly 25 cents per cup, which is half the price of my local grocery stores, plus they rarely stock the spicy flavors. These things are a GREAT snack for my office where time is constantly crunched and sometimes you can't escape for a real meal. This is one of my favorite flavors of Instant Lunch and will be back to buy every time it goes on sale.\"), 0.8387189489406939)]\n"
+ ]
+ }
+ ],
+ "source": [
+ "simsearch_with_score_result = vector_store.similarity_search_with_score(\n",
+ " \"Not a very good product\", k=12\n",
+ ")\n",
+ "print(simsearch_with_score_result)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "620e29bd-02f8-4dc7-91a2-52537cb08886",
+ "language": "python"
+ },
+ "source": [
+ "For a full list of the different searches you can execute on a Azure SQL vector store, please refer to the [API reference](https://python.langchain.com/api_reference/sqlserver/index.html)."
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "ff48b371-b94f-4a3a-bd66-cce856baf6c4",
+ "language": "python"
+ },
+ "source": [
+ "### Similarity Search when you already have embeddings you want to search on"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "35afb4cd-0682-4525-9ba8-625fecc59bb4",
+ "language": "python",
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[Document(metadata={'id': 8, 'summary': 'Nasty No flavor'}, page_content='The candy is just red , No flavor . Just plan and chewy . I would never buy them again'), Document(metadata={'id': 4, 'summary': 'stale product'}, page_content='Arrived in 6 days and were so stale i could not eat any of the 6 bags!!'), Document(metadata={'id': 3, 'summary': 'Taste is neutral, quantity is DECEITFUL!'}, page_content='The taste of these white cheddar flat breads is like a regular cracker - which is not bad, except that I bought them because I wanted a cheese taste.
What was a HUGE disappointment? How misleading the packaging of the box is. The photo on the box (I bought these in store) makes it look like it is full of long flatbreads (expanding the length and width of the box). Wrong! The plastic tray that holds the crackers is about 2 smaller all around - leaving you with about 15 or so small flatbreads.
What is also bad about this is that the company states they use biodegradable and eco-friendly packaging. FAIL! They used a HUGE box for a ridiculously small amount of crackers. Not ecofriendly at all.
Would I buy these again? No - I feel ripped off. The other crackers (like Sesame Tarragon) give you a little
more bang for your buck and have more flavor.'), Document(metadata={'id': 6, 'summary': 'Price cannot be correct'}, page_content='Hey, the description says 360 grams - that is roughly 13 ounces at under $4.00 per can. No way - that is the approximate price for a 100 gram can.')]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# if you already have embeddings you want to search on\n",
+ "simsearch_by_vector = vector_store.similarity_search_by_vector(\n",
+ " [-0.0033353185281157494, -0.017689190804958344, -0.01590404286980629, ...]\n",
+ ")\n",
+ "print(simsearch_by_vector)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "8a7083fd-ddb2-4187-a315-744b7a623178",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[(Document(metadata={'id': 8, 'summary': 'Nasty No flavor'}, page_content='The candy is just red , No flavor . Just plan and chewy . I would never buy them again'), 0.9648153551769503), (Document(metadata={'id': 4, 'summary': 'stale product'}, page_content='Arrived in 6 days and were so stale i could not eat any of the 6 bags!!'), 0.9655108580341948), (Document(metadata={'id': 3, 'summary': 'Taste is neutral, quantity is DECEITFUL!'}, page_content='The taste of these white cheddar flat breads is like a regular cracker - which is not bad, except that I bought them because I wanted a cheese taste.
What was a HUGE disappointment? How misleading the packaging of the box is. The photo on the box (I bought these in store) makes it look like it is full of long flatbreads (expanding the length and width of the box). Wrong! The plastic tray that holds the crackers is about 2 smaller all around - leaving you with about 15 or so small flatbreads.
What is also bad about this is that the company states they use biodegradable and eco-friendly packaging. FAIL! They used a HUGE box for a ridiculously small amount of crackers. Not ecofriendly at all.
Would I buy these again? No - I feel ripped off. The other crackers (like Sesame Tarragon) give you a little
more bang for your buck and have more flavor.'), 0.9840511208615808), (Document(metadata={'id': 6, 'summary': 'Price cannot be correct'}, page_content='Hey, the description says 360 grams - that is roughly 13 ounces at under $4.00 per can. No way - that is the approximate price for a 100 gram can.'), 0.9915737524649991)]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Similarity Search with Score if you already have embeddings you want to search on\n",
+ "simsearch_by_vector_with_score = vector_store.similarity_search_by_vector_with_score(\n",
+ " [-0.0033353185281157494, -0.017689190804958344, -0.01590404286980629, ...]\n",
+ ")\n",
+ "print(simsearch_by_vector_with_score)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Delete items from vector store"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "01f30a69-76cb-4137-bb80-1061abc095be",
+ "language": "python"
+ },
+ "source": [
+ "### Delete Row by ID"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 35,
+ "metadata": {
+ "azdata_cell_guid": "1b42828c-0850-4d89-a1b5-a463bae0f143",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "True"
+ ]
+ },
+ "execution_count": 35,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# delete row by id\n",
+ "vector_store.delete([\"3\", \"7\"])"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "51b9a47e-a17a-4427-8abe-90d87fd63389",
+ "language": "python"
+ },
+ "source": [
+ "### Drop Vector Store"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "cc9a281a-d204-4830-83d0-fcdd890c7f9c",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "# drop vectorstore\n",
+ "vector_store.drop()"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "2d1b942b-f1ca-4fb5-abb7-bb2855631962",
+ "language": "python"
+ },
+ "source": [
+ "## Load a Document from Azure Blob Storage"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "cab89a29-e5e3-44b6-8f29-b4470d26f5d4",
+ "language": "python"
+ },
+ "source": [
+ "Below is example of loading a file from Azure Blob Storage container into the SQL Vector store after splitting the document into chunks.\n",
+ "[Azure Blog Storage](https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blobs-introduction) is Microsoft's object storage solution for the cloud. Blob Storage is optimized for storing massive amounts of unstructured data. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "azdata_cell_guid": "6cff6a17-89b6-4d73-a92d-cf289dea4294",
+ "language": "python"
+ },
+ "outputs": [],
+ "source": [
+ "pip install azure-storage-blob"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 36,
+ "metadata": {
+ "azdata_cell_guid": "d9127900-0942-48f1-bd4d-081c7fa3fcae",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Number of split documents: 528\n"
+ ]
+ }
+ ],
+ "source": [
+ "from langchain.document_loaders import AzureBlobStorageFileLoader\n",
+ "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
+ "from langchain_core.documents import Document\n",
+ "\n",
+ "# Define your connection string and blob details\n",
+ "conn_str = \"DefaultEndpointsProtocol=https;AccountName=;AccountKey===;EndpointSuffix=core.windows.net\"\n",
+ "container_name = \" 100\n",
+ " else doc.page_content\n",
+ " for doc in response[\"context\"]\n",
+ " ],\n",
+ " }\n",
+ "\n",
+ " # Create a DataFrame\n",
+ " df = pd.DataFrame(data)\n",
+ "\n",
+ " # Print the table\n",
+ " print(\"\\nSources:\")\n",
+ " print(df.to_markdown(index=False))"
+ ]
+ },
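+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To recap the load-split-add flow above in one self-contained cell, here is a minimal sketch. The container and blob names below are placeholders rather than values from this tutorial, and it assumes `conn_str` and `vector_store` are defined as earlier in this notebook."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain.document_loaders import AzureBlobStorageFileLoader\n",
+ "from langchain.text_splitter import RecursiveCharacterTextSplitter\n",
+ "\n",
+ "# Placeholder container/blob names - substitute your own.\n",
+ "loader = AzureBlobStorageFileLoader(\n",
+ " conn_str=conn_str,\n",
+ " container=\"<your-container>\",\n",
+ " blob_name=\"<your-file>.txt\",\n",
+ ")\n",
+ "documents = loader.load()\n",
+ "\n",
+ "# Split the document into overlapping chunks before embedding.\n",
+ "splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)\n",
+ "split_docs = splitter.split_documents(documents)\n",
+ "print(f\"Number of split documents: {len(split_docs)}\")\n",
+ "\n",
+ "# Add the chunks to the vector store created earlier in this notebook.\n",
+ "vector_store.add_documents(split_docs)"
+ ]
+ },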
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "azdata_cell_guid": "3cab0661-2351-4164-952f-67670addd99b",
+ "language": "python",
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Answer: When Harry first learned that he was a wizard, he felt quite sure there had been a horrible mistake. He struggled to believe it because he had spent his life being bullied and mistreated by the Dursleys. If he was really a wizard, he wondered why he hadn't been able to use magic to defend himself. This disbelief and surprise were evident when he gasped, “I’m a what?”\n",
+ "\n",
+ "Sources:\n",
+ "| Doc ID | Content |\n",
+ "|:--------------------------------------------|:------------------------------------------------------|\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | Harry was wondering what a wizard did once he’d fi... |\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | Harry realized his mouth was open and closed it qu... |\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | “Most of us reckon he’s still out there somewhere ... |\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | “Ah, go boil yer heads, both of yeh,” said Hagrid.... |\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define the user query\n",
+ "user_query = \"How did Harry feel when he first learnt that he was a Wizard?\"\n",
+ "\n",
+ "# Call the function to get the answer and sources\n",
+ "get_answer_and_sources(user_query)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "azdata_cell_guid": "1e1939d8-671f-4063-906c-89ee6813f12b",
+ "language": "python"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Yes, Harry had a pet owl named Hedwig. He decided to call her Hedwig after finding the name in a book titled *A History of Magic*.\n",
+ "\n",
+ "Sources:\n",
+ "| Doc ID | Content |\n",
+ "|:--------------------------------------------|:------------------------------------------------------|\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | Harry sank down next to the bowl of peas. “What di... |\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | Harry kept to his room, with his new owl for compa... |\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | As the snake slid swiftly past him, Harry could ha... |\n",
+ "| 01 Harry Potter and the Sorcerers Stone.txt | Ron reached inside his jacket and pulled out a fat... |\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define the user query\n",
+ "user_query = \"Did Harry have a pet? What was it\"\n",
+ "\n",
+ "# Call the function to get the answer and sources\n",
+ "get_answer_and_sources(user_query)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "d1f01a01-1e1d-4af6-95a3-82bad34419fe"
+ },
+ "source": [
+ "## API reference \n",
+ "\n",
+ "For detailed documentation of SQLServer Vectorstore features and configurations head to the API reference: [https://python.langchain.com/api\\_reference/sqlserver/index.html](https:\\python.langchain.com\\api_reference\\sqlserver\\index.html)"
+ ]
+ },
+ {
+ "attachments": {},
+ "cell_type": "markdown",
+ "metadata": {
+ "azdata_cell_guid": "f04dd9d6-d4f2-4425-9c6c-2275ff65c594"
+ },
+ "source": [
+ "## Related\n",
+ "- Vector store [conceptual guide](https://python.langchain.com/docs/concepts/vectorstores/)\n",
+ "- Vector store [how-to guides](https://python.langchain.com/docs/how_to/#vector-stores)"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.11.9"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/docs/docs/tutorials/graph.ipynb b/docs/docs/tutorials/graph.ipynb
index 4130bae5a84f3..41960e0186b47 100644
--- a/docs/docs/tutorials/graph.ipynb
+++ b/docs/docs/tutorials/graph.ipynb
@@ -15,7 +15,7 @@
"source": [
"# Build a Question Answering application over a Graph Database\n",
"\n",
- "In this guide we'll go over the basic ways to create a Q&A chain over a graph database. These systems will allow us to ask a question about the data in a graph database and get back a natural language answer.\n",
+ "In this guide we'll go over the basic ways to create a Q&A chain over a graph database. These systems will allow us to ask a question about the data in a graph database and get back a natural language answer. First, we will show a simple out-of-the-box option and then implement a more sophisticated version with LangGraph.\n",
"\n",
"## ⚠️ Security note ⚠️\n",
"\n",
@@ -45,7 +45,7 @@
"metadata": {},
"outputs": [],
"source": [
- "%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai neo4j"
+ "%pip install --upgrade --quiet langchain langchain-neo4j langchain-openai langgraph"
]
},
{
@@ -57,14 +57,14 @@
},
{
"cell_type": "code",
- "execution_count": 1,
+ "execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
- " ········\n"
+ "Enter your OpenAI API key: ········\n"
]
}
],
@@ -90,7 +90,7 @@
},
{
"cell_type": "code",
- "execution_count": 2,
+ "execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
},
{
"cell_type": "code",
- "execution_count": 3,
+ "execution_count": 4,
"metadata": {},
"outputs": [
{
@@ -117,7 +117,7 @@
"[]"
]
},
- "execution_count": 3,
+ "execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
@@ -162,19 +162,24 @@
},
{
"cell_type": "code",
- "execution_count": 4,
+ "execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
- "Node properties are the following:\n",
- "Movie {imdbRating: FLOAT, id: STRING, released: DATE, title: STRING},Person {name: STRING},Genre {name: STRING},Chunk {id: STRING, question: STRING, query: STRING, text: STRING, embedding: LIST}\n",
- "Relationship properties are the following:\n",
+ "Node properties:\n",
+ "Person {name: STRING}\n",
+ "Movie {id: STRING, released: DATE, title: STRING, imdbRating: FLOAT}\n",
+ "Genre {name: STRING}\n",
+ "Chunk {id: STRING, embedding: LIST, text: STRING, question: STRING, query: STRING}\n",
+ "Relationship properties:\n",
"\n",
- "The relationships are the following:\n",
- "(:Movie)-[:IN_GENRE]->(:Genre),(:Person)-[:DIRECTED]->(:Movie),(:Person)-[:ACTED_IN]->(:Movie)\n"
+ "The relationships:\n",
+ "(:Person)-[:DIRECTED]->(:Movie)\n",
+ "(:Person)-[:ACTED_IN]->(:Movie)\n",
+ "(:Movie)-[:IN_GENRE]->(:Genre)\n"
]
}
],
@@ -187,11 +192,65 @@
"cell_type": "markdown",
"metadata": {},
"source": [
+ "For more involved schema information, you can use `enhanced_schema` option."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Received notification from DBMS server: {severity: WARNING} {code: Neo.ClientNotification.Statement.FeatureDeprecationWarning} {category: DEPRECATION} {title: This feature is deprecated and will be removed in future versions.} {description: The procedure has a deprecated field. ('config' used by 'apoc.meta.graphSample' is deprecated.)} {position: line: 1, column: 1, offset: 0} for query: \"CALL apoc.meta.graphSample() YIELD nodes, relationships RETURN nodes, [rel in relationships | {name:apoc.any.property(rel, 'type'), count: apoc.any.property(rel, 'count')}] AS relationships\"\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Node properties:\n",
+ "- **Person**\n",
+ " - `name`: STRING Example: \"John Lasseter\"\n",
+ "- **Movie**\n",
+ " - `id`: STRING Example: \"1\"\n",
+ " - `released`: DATE Min: 1964-12-16, Max: 1996-09-15\n",
+ " - `title`: STRING Example: \"Toy Story\"\n",
+ " - `imdbRating`: FLOAT Min: 2.4, Max: 9.3\n",
+ "- **Genre**\n",
+ " - `name`: STRING Example: \"Adventure\"\n",
+ "- **Chunk**\n",
+ " - `id`: STRING Available options: ['d66006059fd78d63f3df90cc1059639a', '0e3dcb4502853979d12357690a95ec17', 'c438c6bcdcf8e4fab227f29f8e7ff204', '97fe701ec38057594464beaa2df0710e', 'b54f9286e684373498c4504b4edd9910', '5b50a72c3a4954b0ff7a0421be4f99b9', 'fb28d41771e717255f0d8f6c799ede32', '58e6f14dd2e6c6702cf333f2335c499c']\n",
+ " - `text`: STRING Available options: ['How many artists are there?', 'Which actors played in the movie Casino?', 'How many movies has Tom Hanks acted in?', \"List all the genres of the movie Schindler's List\", 'Which actors have worked in movies from both the c', 'Which directors have made movies with at least thr', 'Identify movies where directors also played a role', 'Find the actor with the highest number of movies i']\n",
+ " - `question`: STRING Available options: ['How many artists are there?', 'Which actors played in the movie Casino?', 'How many movies has Tom Hanks acted in?', \"List all the genres of the movie Schindler's List\", 'Which actors have worked in movies from both the c', 'Which directors have made movies with at least thr', 'Identify movies where directors also played a role', 'Find the actor with the highest number of movies i']\n",
+ " - `query`: STRING Available options: ['MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN coun', \"MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a)\", \"MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->\", \"MATCH (m:Movie {title: 'Schindler's List'})-[:IN_G\", 'MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]', 'MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_I', 'MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACT', 'MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.na']\n",
+ "Relationship properties:\n",
+ "\n",
+ "The relationships:\n",
+ "(:Person)-[:DIRECTED]->(:Movie)\n",
+ "(:Person)-[:ACTED_IN]->(:Movie)\n",
+ "(:Movie)-[:IN_GENRE]->(:Genre)\n"
+ ]
+ }
+ ],
+ "source": [
+ "enhanced_graph = Neo4jGraph(enhanced_schema=True)\n",
+ "print(enhanced_graph.schema)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The `enhanced_schema` option enriches property information by including details such as minimum and maximum values for floats and dates, as well as example values for string properties. This additional context helps guide the LLM toward generating more accurate and effective queries.\n",
+ "\n",
"Great! We've got a graph database that we can query. Now let's try hooking it up to an LLM.\n",
"\n",
- "## Chain\n",
+ "## GraphQACypherChain\n",
"\n",
- "Let's use a simple chain that takes a question, turns it into a Cypher query, executes the query, and uses the result to answer the original question.\n",
+ "Let's use a simple out-of-the-box chain that takes a question, turns it into a Cypher query, executes the query, and uses the result to answer the original question.\n",
"\n",
"![graph_chain.webp](../../static/img/graph_chain.webp)\n",
"\n",
@@ -201,7 +260,7 @@
},
{
"cell_type": "code",
- "execution_count": 5,
+ "execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -212,10 +271,12 @@
"\n",
"\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
"Generated Cypher:\n",
- "\u001b[32;1m\u001b[1;3mMATCH (:Movie {title: \"Casino\"})<-[:ACTED_IN]-(actor:Person)\n",
- "RETURN actor.name\u001b[0m\n",
+ "\u001b[32;1m\u001b[1;3mcypher\n",
+ "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: \"Casino\"})\n",
+ "RETURN p.name\n",
+ "\u001b[0m\n",
"Full Context:\n",
- "\u001b[32;1m\u001b[1;3m[{'actor.name': 'Joe Pesci'}, {'actor.name': 'Robert De Niro'}, {'actor.name': 'Sharon Stone'}, {'actor.name': 'James Woods'}]\u001b[0m\n",
+ "\u001b[32;1m\u001b[1;3m[{'p.name': 'Robert De Niro'}, {'p.name': 'Joe Pesci'}, {'p.name': 'Sharon Stone'}, {'p.name': 'James Woods'}]\u001b[0m\n",
"\n",
"\u001b[1m> Finished chain.\u001b[0m\n"
]
@@ -224,10 +285,10 @@
"data": {
"text/plain": [
"{'query': 'What was the cast of the Casino?',\n",
- " 'result': 'The cast of Casino included Joe Pesci, Robert De Niro, Sharon Stone, and James Woods.'}"
+ " 'result': 'Robert De Niro, Joe Pesci, Sharon Stone, and James Woods were the cast of Casino.'}"
]
},
- "execution_count": 5,
+ "execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
@@ -236,9 +297,9 @@
"from langchain_neo4j import GraphCypherQAChain\n",
"from langchain_openai import ChatOpenAI\n",
"\n",
- "llm = ChatOpenAI(model=\"gpt-3.5-turbo\", temperature=0)\n",
+ "llm = ChatOpenAI(model=\"gpt-4o\", temperature=0)\n",
"chain = GraphCypherQAChain.from_llm(\n",
- " graph=graph, llm=llm, verbose=True, allow_dangerous_requests=True\n",
+ " graph=enhanced_graph, llm=llm, verbose=True, allow_dangerous_requests=True\n",
")\n",
"response = chain.invoke({\"query\": \"What was the cast of the Casino?\"})\n",
"response"
@@ -248,54 +309,754 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "# Validating relationship direction\n",
+ "## Advanced implementation with LangGraph\n",
+ "\n",
+ "While the GraphCypherQAChain is effective for quick demonstrations, it may face challenges in production environments. Transitioning to LangGraph can enhance the workflow, but implementing natural language to query flows in production remains a complex task. Nevertheless, there are several strategies to significantly improve accuracy and reliability, which we will explore next.\n",
+ "\n",
+ "Here is the visualized LangGraph flow we will implement:\n",
+ "\n",
+ "![langgraph_text2cypher](../../static/img/langgraph_text2cypher.webp)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We will begin by defining the Input, Output, and Overall state of the LangGraph application."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from operator import add\n",
+ "from typing import Annotated, List\n",
+ "\n",
+ "from typing_extensions import TypedDict\n",
"\n",
- "LLMs can struggle with relationship directions in generated Cypher statement. Since the graph schema is predefined, we can validate and optionally correct relationship directions in the generated Cypher statements by using the `validate_cypher` parameter."
+ "\n",
+ "class InputState(TypedDict):\n",
+ " question: str\n",
+ "\n",
+ "\n",
+ "class OverallState(TypedDict):\n",
+ " question: str\n",
+ " next_action: str\n",
+ " cypher_statement: str\n",
+ " cypher_errors: List[str]\n",
+ " database_records: List[dict]\n",
+ " steps: Annotated[List[str], add]\n",
+ "\n",
+ "\n",
+ "class OutputState(TypedDict):\n",
+ " answer: str\n",
+ " steps: List[str]\n",
+ " cypher_statement: str"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The first step is a simple `guardrails` step, where we validate whether the question pertains to movies or their cast. If it doesn't, we notify the user that we cannot answer any other questions. Otherwise, we move on to the Cypher generation step."
]
},
{
"cell_type": "code",
- "execution_count": 6,
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from typing import Literal\n",
+ "\n",
+ "from langchain_core.prompts import ChatPromptTemplate\n",
+ "from pydantic import BaseModel, Field\n",
+ "\n",
+ "guardrails_system = \"\"\"\n",
+ "As an intelligent assistant, your primary objective is to decide whether a given question is related to movies or not. \n",
+ "If the question is related to movies, output \"movie\". Otherwise, output \"end\".\n",
+ "To make this decision, assess the content of the question and determine if it refers to any movie, actor, director, film industry, \n",
+ "or related topics. Provide only the specified output: \"movie\" or \"end\".\n",
+ "\"\"\"\n",
+ "guardrails_prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " guardrails_system,\n",
+ " ),\n",
+ " (\n",
+ " \"human\",\n",
+ " (\"{question}\"),\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "\n",
+ "class GuardrailsOutput(BaseModel):\n",
+ " decision: Literal[\"movie\", \"end\"] = Field(\n",
+ " description=\"Decision on whether the question is related to movies\"\n",
+ " )\n",
+ "\n",
+ "\n",
+ "guardrails_chain = guardrails_prompt | llm.with_structured_output(GuardrailsOutput)\n",
+ "\n",
+ "\n",
+ "def guardrails(state: InputState) -> OverallState:\n",
+ " \"\"\"\n",
+ " Decides if the question is related to movies or not.\n",
+ " \"\"\"\n",
+ " guardrails_output = guardrails_chain.invoke({\"question\": state.get(\"question\")})\n",
+ " database_records = None\n",
+ " if guardrails_output.decision == \"end\":\n",
+ " database_records = \"This questions is not about movies or their cast. Therefore I cannot answer this question.\"\n",
+ " return {\n",
+ " \"next_action\": guardrails_output.decision,\n",
+ " \"database_records\": database_records,\n",
+ " \"steps\": [\"guardrail\"],\n",
+ " }"
+ ]
+ },
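+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick sanity check (an illustrative invocation, not part of the pipeline; the decision is model-dependent), you can call the guardrails chain directly. A movie question should yield `movie`, while an off-topic question should yield `end`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Each call returns a GuardrailsOutput with a `decision` field.\n",
+ "print(guardrails_chain.invoke({\"question\": \"Who directed Casino?\"}))\n",
+ "print(guardrails_chain.invoke({\"question\": \"What is the capital of France?\"}))"
+ ]
+ },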
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Few-shot prompting"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Converting natural language into accurate queries is challenging. One way to enhance this process is by providing relevant few-shot examples to guide the LLM in query generation. To achieve this, we will use the `SemanticSimilarityExampleSelector` to dynamically select the most relevant examples."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.example_selectors import SemanticSimilarityExampleSelector\n",
+ "from langchain_neo4j import Neo4jVector\n",
+ "from langchain_openai import OpenAIEmbeddings\n",
+ "\n",
+ "examples = [\n",
+ " {\n",
+ " \"question\": \"How many artists are there?\",\n",
+ " \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie) RETURN count(DISTINCT a)\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"Which actors played in the movie Casino?\",\n",
+ " \"query\": \"MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a) RETURN a.name\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"How many movies has Tom Hanks acted in?\",\n",
+ " \"query\": \"MATCH (a:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN count(m)\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"List all the genres of the movie Schindler's List\",\n",
+ " \"query\": \"MATCH (m:Movie {title: 'Schindler's List'})-[:IN_GENRE]->(g:Genre) RETURN g.name\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"Which actors have worked in movies from both the comedy and action genres?\",\n",
+ " \"query\": \"MATCH (a:Person)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g1:Genre), (a)-[:ACTED_IN]->(:Movie)-[:IN_GENRE]->(g2:Genre) WHERE g1.name = 'Comedy' AND g2.name = 'Action' RETURN DISTINCT a.name\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"Which directors have made movies with at least three different actors named 'John'?\",\n",
+ " \"query\": \"MATCH (d:Person)-[:DIRECTED]->(m:Movie)<-[:ACTED_IN]-(a:Person) WHERE a.name STARTS WITH 'John' WITH d, COUNT(DISTINCT a) AS JohnsCount WHERE JohnsCount >= 3 RETURN d.name\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"Identify movies where directors also played a role in the film.\",\n",
+ " \"query\": \"MATCH (p:Person)-[:DIRECTED]->(m:Movie), (p)-[:ACTED_IN]->(m) RETURN m.title, p.name\",\n",
+ " },\n",
+ " {\n",
+ " \"question\": \"Find the actor with the highest number of movies in the database.\",\n",
+ " \"query\": \"MATCH (a:Actor)-[:ACTED_IN]->(m:Movie) RETURN a.name, COUNT(m) AS movieCount ORDER BY movieCount DESC LIMIT 1\",\n",
+ " },\n",
+ "]\n",
+ "\n",
+ "example_selector = SemanticSimilarityExampleSelector.from_examples(\n",
+ " examples, OpenAIEmbeddings(), Neo4jVector, k=5, input_keys=[\"question\"]\n",
+ ")"
+ ]
+ },
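+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To see which few-shot examples would be selected for a given question, you can call `select_examples` directly (an illustrative check; the selection depends on the embeddings):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Inspect the k=5 examples chosen for a sample question.\n",
+ "for el in example_selector.select_examples({\"question\": \"Who acted in Casino?\"}):\n",
+ " print(el[\"question\"])"
+ ]
+ },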
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Next, we implement the Cypher generation chain, also known as **text2cypher**. The prompt includes an enhanced graph schema, dynamically selected few-shot examples, and the user’s question. This combination enables the generation of a Cypher query to retrieve relevant information from the database."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_core.output_parsers import StrOutputParser\n",
+ "\n",
+ "text2cypher_prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " (\n",
+ " \"Given an input question, convert it to a Cypher query. No pre-amble.\"\n",
+ " \"Do not wrap the response in any backticks or anything else. Respond with a Cypher statement only!\"\n",
+ " ),\n",
+ " ),\n",
+ " (\n",
+ " \"human\",\n",
+ " (\n",
+ " \"\"\"You are a Neo4j expert. Given an input question, create a syntactically correct Cypher query to run.\n",
+ "Do not wrap the response in any backticks or anything else. Respond with a Cypher statement only!\n",
+ "Here is the schema information\n",
+ "{schema}\n",
+ "\n",
+ "Below are a number of examples of questions and their corresponding Cypher queries.\n",
+ "\n",
+ "{fewshot_examples}\n",
+ "\n",
+ "User input: {question}\n",
+ "Cypher query:\"\"\"\n",
+ " ),\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "text2cypher_chain = text2cypher_prompt | llm | StrOutputParser()\n",
+ "\n",
+ "\n",
+ "def generate_cypher(state: OverallState) -> OverallState:\n",
+ " \"\"\"\n",
+ " Generates a cypher statement based on the provided schema and user input\n",
+ " \"\"\"\n",
+ " NL = \"\\n\"\n",
+ " fewshot_examples = (NL * 2).join(\n",
+ " [\n",
+ " f\"Question: {el['question']}{NL}Cypher:{el['query']}\"\n",
+ " for el in example_selector.select_examples(\n",
+ " {\"question\": state.get(\"question\")}\n",
+ " )\n",
+ " ]\n",
+ " )\n",
+ " generated_cypher = text2cypher_chain.invoke(\n",
+ " {\n",
+ " \"question\": state.get(\"question\"),\n",
+ " \"fewshot_examples\": fewshot_examples,\n",
+ " \"schema\": enhanced_graph.schema,\n",
+ " }\n",
+ " )\n",
+ " return {\"cypher_statement\": generated_cypher, \"steps\": [\"generate_cypher\"]}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Query validation\n",
+ "\n",
+ "The next step is to validate the generated Cypher statement and ensuring that all property values are accurate. While numbers and dates typically don’t require validation, strings such as movie titles or people’s names do. In this example, we’ll use a basic `CONTAINS` clause for validation, though more advanced mapping and validation techniques can be implemented if needed.\n",
+ "\n",
+ "First, we will create a chain that detects any errors in the Cypher statement and extracts the property values it references."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from typing import List, Optional\n",
+ "\n",
+ "validate_cypher_system = \"\"\"\n",
+ "You are a Cypher expert reviewing a statement written by a junior developer.\n",
+ "\"\"\"\n",
+ "\n",
+ "validate_cypher_user = \"\"\"You must check the following:\n",
+ "* Are there any syntax errors in the Cypher statement?\n",
+ "* Are there any missing or undefined variables in the Cypher statement?\n",
+ "* Are any node labels missing from the schema?\n",
+ "* Are any relationship types missing from the schema?\n",
+ "* Are any of the properties not included in the schema?\n",
+ "* Does the Cypher statement include enough information to answer the question?\n",
+ "\n",
+ "Examples of good errors:\n",
+ "* Label (:Foo) does not exist, did you mean (:Bar)?\n",
+ "* Property bar does not exist for label Foo, did you mean baz?\n",
+ "* Relationship FOO does not exist, did you mean FOO_BAR?\n",
+ "\n",
+ "Schema:\n",
+ "{schema}\n",
+ "\n",
+ "The question is:\n",
+ "{question}\n",
+ "\n",
+ "The Cypher statement is:\n",
+ "{cypher}\n",
+ "\n",
+ "Make sure you don't make any mistakes!\"\"\"\n",
+ "\n",
+ "validate_cypher_prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " validate_cypher_system,\n",
+ " ),\n",
+ " (\n",
+ " \"human\",\n",
+ " (validate_cypher_user),\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "\n",
+ "class Property(BaseModel):\n",
+ " \"\"\"\n",
+ " Represents a filter condition based on a specific node property in a graph in a Cypher statement.\n",
+ " \"\"\"\n",
+ "\n",
+ " node_label: str = Field(\n",
+ " description=\"The label of the node to which this property belongs.\"\n",
+ " )\n",
+ " property_key: str = Field(description=\"The key of the property being filtered.\")\n",
+ " property_value: str = Field(\n",
+ " description=\"The value that the property is being matched against.\"\n",
+ " )\n",
+ "\n",
+ "\n",
+ "class ValidateCypherOutput(BaseModel):\n",
+ " \"\"\"\n",
+ " Represents the validation result of a Cypher query's output,\n",
+ " including any errors and applied filters.\n",
+ " \"\"\"\n",
+ "\n",
+ " errors: Optional[List[str]] = Field(\n",
+ " description=\"A list of syntax or semantical errors in the Cypher statement. Always explain the discrepancy between schema and Cypher statement\"\n",
+ " )\n",
+ " filters: Optional[List[Property]] = Field(\n",
+ " description=\"A list of property-based filters applied in the Cypher statement.\"\n",
+ " )\n",
+ "\n",
+ "\n",
+ "validate_cypher_chain = validate_cypher_prompt | llm.with_structured_output(\n",
+ " ValidateCypherOutput\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "LLMs often struggle with correctly determining relationship directions in generated Cypher statements. Since we have access to the schema, we can deterministically correct these directions using the **CypherQueryCorrector**. \n",
+ "\n",
+ "*Note: The `CypherQueryCorrector` is an experimental feature and doesn't support all the newest Cypher syntax.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_neo4j.chains.graph_qa.cypher_utils import CypherQueryCorrector, Schema\n",
+ "\n",
+ "# Cypher query corrector is experimental\n",
+ "corrector_schema = [\n",
+ " Schema(el[\"start\"], el[\"type\"], el[\"end\"])\n",
+ " for el in enhanced_graph.structured_schema.get(\"relationships\")\n",
+ "]\n",
+ "cypher_query_corrector = CypherQueryCorrector(corrector_schema)"
+ ]
+ },
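+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To illustrate what the corrector does, we can feed it a statement with a reversed relationship (an illustrative input, not part of the pipeline). The schema only contains `(:Person)-[:ACTED_IN]->(:Movie)`, so the direction below should be flipped; an empty result would mean the pattern doesn't fit the schema at all."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# ACTED_IN is written Movie->Person here, the opposite of the schema,\n",
+ "# so the corrector should rewrite it as (p:Person)-[:ACTED_IN]->(m:Movie).\n",
+ "print(\n",
+ " cypher_query_corrector(\n",
+ " \"MATCH (m:Movie {title: 'Casino'})-[:ACTED_IN]->(p:Person) RETURN p.name\"\n",
+ " )\n",
+ ")"
+ ]
+ },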
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we can implement the Cypher validation step. First, we use the `EXPLAIN` method to detect any syntax errors. Next, we leverage the LLM to identify potential issues and extract the properties used for filtering. For string properties, we validate them against the database using a simple `CONTAINS` clause.\n",
+ "\n",
+ "Based on the validation results, the process can take the following paths:\n",
+ "\n",
+ "- If value mapping fails, we end the conversation and inform the user that we couldn't identify a specific property value (e.g., a person or movie title). \n",
+ "- If errors are found, we route the query for correction. \n",
+ "- If no issues are detected, we proceed to the Cypher execution step. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from neo4j.exceptions import CypherSyntaxError\n",
+ "\n",
+ "\n",
+ "def validate_cypher(state: OverallState) -> OverallState:\n",
+ " \"\"\"\n",
+ " Validates the Cypher statements and maps any property values to the database.\n",
+ " \"\"\"\n",
+ " errors = []\n",
+ " mapping_errors = []\n",
+ " # Check for syntax errors\n",
+ " try:\n",
+ " enhanced_graph.query(f\"EXPLAIN {state.get('cypher_statement')}\")\n",
+ " except CypherSyntaxError as e:\n",
+ " errors.append(e.message)\n",
+ " # Experimental feature for correcting relationship directions\n",
+ " corrected_cypher = cypher_query_corrector(state.get(\"cypher_statement\"))\n",
+ " if not corrected_cypher:\n",
+ " errors.append(\"The generated Cypher statement doesn't fit the graph schema\")\n",
+ " if not corrected_cypher == state.get(\"cypher_statement\"):\n",
+ " print(\"Relationship direction was corrected\")\n",
+ " # Use LLM to find additional potential errors and get the mapping for values\n",
+ " llm_output = validate_cypher_chain.invoke(\n",
+ " {\n",
+ " \"question\": state.get(\"question\"),\n",
+ " \"schema\": enhanced_graph.schema,\n",
+ " \"cypher\": state.get(\"cypher_statement\"),\n",
+ " }\n",
+ " )\n",
+ " if llm_output.errors:\n",
+ " errors.extend(llm_output.errors)\n",
+ " if llm_output.filters:\n",
+ " for filter in llm_output.filters:\n",
+ " # Do mapping only for string values\n",
+ " if (\n",
+ " not [\n",
+ " prop\n",
+ " for prop in enhanced_graph.structured_schema[\"node_props\"][\n",
+ " filter.node_label\n",
+ " ]\n",
+ " if prop[\"property\"] == filter.property_key\n",
+ " ][0][\"type\"]\n",
+ " == \"STRING\"\n",
+ " ):\n",
+ " pass\n",
+ " mapping = enhanced_graph.query(\n",
+ " f\"MATCH (n:{filter.node_label}) WHERE toLower(n.`{filter.property_key}`) = toLower($value) RETURN 'yes' LIMIT 1\",\n",
+ " {\"value\": filter.property_value},\n",
+ " )\n",
+ " if not mapping:\n",
+ " print(\n",
+ " f\"Missing value mapping for {filter.node_label} on property {filter.property_key} with value {filter.property_value}\"\n",
+ " )\n",
+ " mapping_errors.append(\n",
+ " f\"Missing value mapping for {filter.node_label} on property {filter.property_key} with value {filter.property_value}\"\n",
+ " )\n",
+ " if mapping_errors:\n",
+ " next_action = \"end\"\n",
+ " elif errors:\n",
+ " next_action = \"correct_cypher\"\n",
+ " else:\n",
+ " next_action = \"execute_cypher\"\n",
+ "\n",
+ " return {\n",
+ " \"next_action\": next_action,\n",
+ " \"cypher_statement\": corrected_cypher,\n",
+ " \"cypher_errors\": errors,\n",
+ " \"steps\": [\"validate_cypher\"],\n",
+ " }"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The Cypher correction step takes the existing Cypher statement, any identified errors, and the original question to generate a corrected version of the query."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "correct_cypher_prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " (\n",
+ " \"You are a Cypher expert reviewing a statement written by a junior developer. \"\n",
+ " \"You need to correct the Cypher statement based on the provided errors. No pre-amble.\"\n",
+ " \"Do not wrap the response in any backticks or anything else. Respond with a Cypher statement only!\"\n",
+ " ),\n",
+ " ),\n",
+ " (\n",
+ " \"human\",\n",
+ " (\n",
+ " \"\"\"Check for invalid syntax or semantics and return a corrected Cypher statement.\n",
+ "\n",
+ "Schema:\n",
+ "{schema}\n",
+ "\n",
+ "Note: Do not include any explanations or apologies in your responses.\n",
+ "Do not wrap the response in any backticks or anything else.\n",
+ "Respond with a Cypher statement only!\n",
+ "\n",
+ "Do not respond to any questions that might ask anything else than for you to construct a Cypher statement.\n",
+ "\n",
+ "The question is:\n",
+ "{question}\n",
+ "\n",
+ "The Cypher statement is:\n",
+ "{cypher}\n",
+ "\n",
+ "The errors are:\n",
+ "{errors}\n",
+ "\n",
+ "Corrected Cypher statement: \"\"\"\n",
+ " ),\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "correct_cypher_chain = correct_cypher_prompt | llm | StrOutputParser()\n",
+ "\n",
+ "\n",
+ "def correct_cypher(state: OverallState) -> OverallState:\n",
+ " \"\"\"\n",
+ " Correct the Cypher statement based on the provided errors.\n",
+ " \"\"\"\n",
+ " corrected_cypher = correct_cypher_chain.invoke(\n",
+ " {\n",
+ " \"question\": state.get(\"question\"),\n",
+ " \"errors\": state.get(\"cypher_errors\"),\n",
+ " \"cypher\": state.get(\"cypher_statement\"),\n",
+ " \"schema\": enhanced_graph.schema,\n",
+ " }\n",
+ " )\n",
+ "\n",
+ " return {\n",
+ " \"next_action\": \"validate_cypher\",\n",
+ " \"cypher_statement\": corrected_cypher,\n",
+ " \"steps\": [\"correct_cypher\"],\n",
+ " }"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We need to add a step that executes the given Cypher statement. If no results are returned, we should explicitly handle this scenario, as leaving the context empty can sometimes lead to LLM hallucinations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "no_results = \"I couldn't find any relevant information in the database\"\n",
+ "\n",
+ "\n",
+ "def execute_cypher(state: OverallState) -> OverallState:\n",
+ " \"\"\"\n",
+ " Executes the given Cypher statement.\n",
+ " \"\"\"\n",
+ "\n",
+ " records = enhanced_graph.query(state.get(\"cypher_statement\"))\n",
+ " return {\n",
+ " \"database_records\": records if records else no_results,\n",
+ " \"next_action\": \"end\",\n",
+ " \"steps\": [\"execute_cypher\"],\n",
+ " }"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The final step is to generate the answer. This involves combining the initial question with the database output to produce a relevant response."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "generate_final_prompt = ChatPromptTemplate.from_messages(\n",
+ " [\n",
+ " (\n",
+ " \"system\",\n",
+ " \"You are a helpful assistant\",\n",
+ " ),\n",
+ " (\n",
+ " \"human\",\n",
+ " (\n",
+ " \"\"\"Use the following results retrieved from a database to provide\n",
+ "a succinct, definitive answer to the user's question.\n",
+ "\n",
+ "Respond as if you are answering the question directly.\n",
+ "\n",
+ "Results: {results}\n",
+ "Question: {question}\"\"\"\n",
+ " ),\n",
+ " ),\n",
+ " ]\n",
+ ")\n",
+ "\n",
+ "generate_final_chain = generate_final_prompt | llm | StrOutputParser()\n",
+ "\n",
+ "\n",
+ "def generate_final_answer(state: OverallState) -> OutputState:\n",
+ " \"\"\"\n",
+ " Decides if the question is related to movies.\n",
+ " \"\"\"\n",
+ " final_answer = generate_final_chain.invoke(\n",
+ " {\"question\": state.get(\"question\"), \"results\": state.get(\"database_records\")}\n",
+ " )\n",
+ " return {\"answer\": final_answer, \"steps\": [\"generate_final_answer\"]}"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Next, we will implement the LangGraph workflow, starting with defining the conditional edge functions."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def guardrails_condition(\n",
+ " state: OverallState,\n",
+ ") -> Literal[\"generate_cypher\", \"generate_final_answer\"]:\n",
+ " if state.get(\"next_action\") == \"end\":\n",
+ " return \"generate_final_answer\"\n",
+ " elif state.get(\"next_action\") == \"movie\":\n",
+ " return \"generate_cypher\"\n",
+ "\n",
+ "\n",
+ "def validate_cypher_condition(\n",
+ " state: OverallState,\n",
+ ") -> Literal[\"generate_final_answer\", \"correct_cypher\", \"execute_cypher\"]:\n",
+ " if state.get(\"next_action\") == \"end\":\n",
+ " return \"generate_final_answer\"\n",
+ " elif state.get(\"next_action\") == \"correct_cypher\":\n",
+ " return \"correct_cypher\"\n",
+ " elif state.get(\"next_action\") == \"execute_cypher\":\n",
+ " return \"execute_cypher\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's put it all together now."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
"metadata": {},
"outputs": [
{
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\n",
- "\n",
- "\u001b[1m> Entering new GraphCypherQAChain chain...\u001b[0m\n",
- "Generated Cypher:\n",
- "\u001b[32;1m\u001b[1;3mMATCH (:Movie {title: \"Casino\"})<-[:ACTED_IN]-(actor:Person)\n",
- "RETURN actor.name\u001b[0m\n",
- "Full Context:\n",
- "\u001b[32;1m\u001b[1;3m[{'actor.name': 'Joe Pesci'}, {'actor.name': 'Robert De Niro'}, {'actor.name': 'Sharon Stone'}, {'actor.name': 'James Woods'}]\u001b[0m\n",
- "\n",
- "\u001b[1m> Finished chain.\u001b[0m\n"
- ]
- },
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAeIAAAJ2CAIAAAAMlBY8AAAAAXNSR0IArs4c6QAAIABJREFUeJzs3XdAE+f/B/AnAwgQ9h4iIg5QERS3qCg4EPdWHFWr1lVna111r7oFtNXiAvcWB7iroDhxb3GwCRAgQAIJ+f1x/VG+iIBZl4T3669w3F3ehPDhyeeeu2NIpVICAADqikl3AAAAqAzKNACAWkOZBgBQayjTAABqDWUaAECtoUwDAKg1Nt0BALRH+ieRIFecnyuWFEtFhSV0x6kWPX2mjh7T0JhlaKJj5ahLdxyoAMo0gLzePhJ8eCr48Cy/jruhRCI1NGGbW+swWXTHqh4pIemfhfm5Yl0O6/PrfJfGXJcmXOdGBnTngv8wcHoLgMxexOXGnuXVdjN0djOs08SQrcOgO5FchPmSD0/zkxOEqQmFbXtZujQxpDsREJRpABnxM4qj9qVaOui162XBMdSQkXO1ZacXx57lMRkM/5E2mv6/RwugTAN8t3fxgjvnM3tNsDex1KE7ixKlfxEd35bYf4qDTW0O3VlqNJRpgO+T+KbwWWxO9zG2dAdRkaObvvgH2ZpaafM/JDWHMg3wHZ7czEl8WxAw1o7uICp1dHNiy27mtd1wXJEemDcNUF3J7wvfxefVtBpNCBk0w/HqobT8HAndQWoolGmAahEWlNy/nN1/miPdQegx/DfnywfT6E5RQ6FMA1TLrVMZ9by4dKegjR6HYeOkd/9SNt1BaiKUaYCqZacVp30SurU0pjsInVoHWMRdzCzRjJMrtQrKNEDVnt7K8elnrZrnEggEr169omvzyvkOsn54BQNqVUOZBqiCVEqexPCdGuqr5umGDh16+vRpujavnGM9/RdxOUraOXwLyjRAFRKe5rs0Vt1p00VFRbJtSE2ulXnz6jC20GHrMLNSlfgU8DWUaYAqJH0orN/MSBl73rNnT0BAQPv27ceNG3f37l1CSGBgYFZW1tGjR729vQMDA6myGxIS0rt371atWvXs2TM0NFQi+Xdi3Nq1a7t27frPP//069fP29v73r17X2+ucA28jb+8LlDGnuFbcIU8gCqkfRLW81T8HI+7d+8GBwd37969bdu2sbGxBQUFhJB169ZNnTq1efPmI0aM0NXVJYSwWKy4uLgOHTo4Ojq+fv06LCzM2Ng4KCiI2olAIAgNDZ03b15hYWGLFi2+3lzhDLjM5A9CZewZvgVlGqAK+TliQ2PF/6UkJycTQgYPHuzh4REQEEAtdHd3Z7PZlpaWnp6e1BIWi7V3714G49/rHyUmJl69erW0TBcVFS1cuLBx48bf2lzhDE3Y+TliJe0cKoQyDVCF/FyxoYnir4HXvn17Y2PjRYsWzZ07t3379pWsmZWVtXPnzjt37uTm5hJCjIz+68BwOJzSGq0aBsbs/FyUaZVCbxqgUlKiy2ExmYq/mKelpWVYWFjt2rVnzJgxbty49PT0ClfLzMwcMWLE3bt3f/rpp23btrm5uZX2pgkhBgaqvs4Gm81g66BuqBReboBKMQiLTZQ0fnR2dt66dev27dvfvXu3ZMmS0uVlL4h2/PjxrKys0NDQbt26NWrUyNa26ivzKfV6agK+WEcPV6BWKZRpgCoYGrPzc5Vy1SFq8lyLFi18fHxKz0nR19fn8Xil6/D5fDMzs9LqzOfzK6/C5TZXuPxcpXTqoRJ4uQGqYOusX5in+DL9/PnzX3/9dfDgwQYGBrGxse7u7tRyLy+vixcv7tmzx9jY2MPDw9vb+8iRI9u3b2/atOnVq1djYmJKSkr4fL6pqWmFuy23uaurq2JjFwlLLOz1FLtPqByr7EctAPhaYZ7k44t8lyYKnpOXk5Pz5s2b6Ojou3fvNmvWbP78+VwulxDi4eHx+vXr8+fPv3r1qlGjRp07dy4pKTl69OiVK1dq1aq1aNGiR48eFRQUeHt7x8TEJCQkjBw5suxuy21ep04dxcb+5wSvcRtjrilGeKqD2wIAVEFUWLJ32ccJq13oDkI/Yb4kfPWn8SvwUqgU/iUCVEFPn+nShJv2SVjJLQHXr18fGRn59XI3N7eXL19WuMnu3bsVPtQt59atWwsXLqzwW46OjomJid+b6stboXtrE4VmhKphNA1QtaR3hXcvZvWb6vCtFfh8PnUaYTkMxjf/xKytrdls5Y6ThEJhVlZWhd/6VrDKU+1e8nHQDEd0PFQMLzdA1Rxc9Vk6jE8vC751P0BTU9NvHdOjEYfDsbe3V9TentzMcWliiBqtepiQB1At7Xpbvr6fR3cKOiU8z2/Xy5LuFDURyjRAtVjY6TrW179yqOJzBbXeiW2JLfzN2Lo4sYUGKNMA1eXeylhXj3k7MpPuIKoWvT/N1dPIvq6KbowA5eAQIsD3eXyDX5hf0jrAnO4gKnIpPK1eMyNnd1VfPARKYTQN8H2adjRlMMj53Sl0B1E6cZH0yMYvDq76qNH0wmgaQBbvn+RfP5bevLOZZye1m+ChEHfOZ35+VdBpoLW1E04NpxnKNICMJBJy+yzv9YM8z46mzo0MLeyUcrcUFUv7JEx8W3jnQmar7hbefmYEhwzVAMo0gFwK8iRPb+W8fyIQF5e4ehgxWMTQmG1szhaLNeMvi8lg5GYVF+RJGAzyIi7X2Jzt6mnUtKMpEw1RtYEyDaAYuZnFyQkiQXZxQZ6YwWAIFH0nqo8fP3I4nOpcb/q7GJqwmAyGgTHLyEzHwVXfwEjx96kBOeGEIgDFMLbQMbbQUd7+163bZ167do8hyrrJIagtfLABAFBrKNMAAGoNZRpAMxgbG3M437ySKmgxlGkAzZCbmysUCulOATRAmQbQDHp6esq+PjWoJ5RpAM0gEonEYgVP8gONgDINoBn09fUxmq6ZUKYBNENhYSFG0zUTyjSAZjA1NdXXxxWfayKUaQDNwOfzCwsL6U4BNECZBtAMLBaLwcAF62oilGkAzSCRSHChtJoJZRoAQK2hTANoBjMzMxxCrJlQpgE0Q3Z2Ng4h1kwo0wAAag1lGkAzcDgcFgu3VqmJUKYBNINQKJRIJHSnABqgTANoBhMTExxCrJlQpgE0Q05ODg4h1kwo0wAAag1lGkAz4LYANRbKNIBmwG0BaiyUaQAAtYYyDQCg1lCmATQDbgtQY6FMA2gG3BagxkKZBgBQayjTAABqDWUaQDPo6uri0ks1E8o0gGYoKirCpZdqJpRpAAC1hjINoBm4XK6enh7dKYAGKNMAmkEgEIhEIrpTAA1QpgEA1BrKNIBmYDLx11pD4RcPoBlKSkrojgD0QJkG0AxmZmYcDofuFEADlGkAzZCdnS0UCulOATRAmQbQDLhlbY3FkEqldGcAgG/q3bs39Ueak5Ojo6NjYGBACGEwGGfOnKE7GqgIbq0GoNasrKzi4+MZDAb1ZU5OTklJiZ+fH925QHXQ9ABQayNGjDAzMyu7xNLSctSoUfQlAlVDmQZQa507d3Z2di79UiqVNm3atHHjxrSGApVCmQZQd8OGDTM2NqYeW1hYjB07lu5EoFIo0wDqrkuXLi4uLlKpVCq
Venh4uLm50Z0IVAplGkADDB06lMvlYihdM2GmB0Bl8rLFmSlF4mKaT9R2tmrduI6fhYWFbrHTu8cCesPo6jEtHfQMjHArGRXBvGmAimWnFd86zeMli2q7cfPzxHTHUSN6+swvr/Lt6uj7DbfW5eATudKhTANUIDdTfHpHkn+Qo6EpxowV4yWKbkem95/qwDFEpVYuvL4A5YmLpBFrP/WdWhs1uhKWjnpdhtsdWPeJ7iDaD6NpgPJuneaZWHKcG3PpDqIBnt7K5hozPXxM6A6izTCaBigv6V2hkbkO3Sk0A9eEnZJQSHcKLYcyDfAVKYOLMl09xha6RUJ8IlculGmA8vL4RdISlJ5qKSmRFuZL6E6h5VCmAQDUGso0AIBaQ5kGAFBrKNMAAGoNZRoAQK2hTAMAqDWUaQAAtYYyDQCg1lCmAQDUGso0AIBaQ5kGAFBrKNMAGmbFqoWjxgyQYcM1a5dM+mkk9fiHcYOXLf9N0dFAKVCmAWoKA0NDAwNDulPAd8MtawEULCeHz2AyjY2M5d+VVCplMBiKWnn61LnyRwLVQ5kGUICoqMiIg7vT01PrONdlMJm2NnaLF63+Oyz08JH90RdvU+u8ev3ip8mj1qze2qpl26dP4/eH73r6LJ4Q0rBBo0mTZjSo70aV+L79/SZN/Pntu9cxMdfr1Wu4dfMuQsjVa9F79/2VlpbiXNulpOS/25z/MG5wHee6zs51T5w8JBIJjx6+ePPW1VOnjnxIeKevb9CyRZupU+aYmpoRQoYOD0xLS23cuOm2LX+XCy8UCjdvXRMb+w8hxMPDa+rkOba2dqp9/aAyKNMA8roVc33NuiWBPfu1atnuyLHwp0/jp06eXfkmqanJoiLRyKDxTCbz9Omj836bfjDiLIfDob4bHv53nz6DNqzfwWKxCCGXr1xcuWqhl6f34EFBqanJBw7ucXCoVbqre/duC0XCVSs2FRQWcLncFy+eOjk5+/sHZGdnnTh5KL8gf/XKzYSQ2bMW7ty5rcIwBw7ujoqK/GHMJAsLy6joSH19fYW+PCAvlGkAeZ0+fdTZ2WX2rAWEkIYNGw0a0uNO3C139yaVbOLn18PfP4B63KCB+6zZk54+i2/h3Zpa4u7eZPy4KdRjkUgUHLLew8Prj3UhVNVOSvry7v2b0l2x2OxFC1aV1tZZM+eXtj7YbHZ4RJhIJNLT02vh3fro0fBCYQU3xEpJTdbX1x8+bAybze4Z0FdBrwooDMo0gLzSM9IcHZ2ox5aWVhwOJy8vt/JNGAzGzVvXjhwN//QpwcDAgBCSnZVZ+t1mzVqWPn76LD4nhz9wwHCqRhNCmKz/ud+5m1vjsuPf4uLiEycPXbp8Pj09VU+PU1JSwudn29jYVhLGr0uPK1cu/jpv2pTJs11cXL/zpwelw0wPAHnZ2zu+fv2iqKiIEPLhwzuhUOjq2qDyTfbt37X497kN6ruvXL5x0sQZhJAS6X8dZw7nv7Kbnp5KCLG1tf/WrvTLrCyVSucvmBFxIKxH995r1wT7+wWU23OFWrVsu3rVlqzszHE/Dl2/YYVYLK7ezw0qgtE0gLyGDRk9a86kWXMmNW/W8tKl8w0buHfrGkgNmStcXyQSHTi4u2dA36lTZhNC0tPTKtm5qYkZIYTPz65OksePHz54eHfB/BV+XboTQpISP1fzR2jVsm0L79bHTxwM3b7JxsZuZNC4am4IKoDRNIC8GjduOqD/sJKSkuTkxCFDRm3etJPNZhNCTEzMiouLc3JzqNVSU5OpB0JhoUgkql/fjfoyJ5dPCCk7f6OsunXrM5nMy1cuVCcJtav69RpWuWddHd3Szgz1OYDJZA4aOMLS0urt21ff/xqAEmE0DSCvo8ciHj26N3jwSAaDwWazExM/161bjxDi3bwVg8EIDlk/cMDwjwnv/9y5lVrfxMTUxcX1xMlD5uYW+QLB3n1/MZnMDx/eVbhzGxvbHt17nzt/qkgkatmybWYmLy7ulpmZRYUru7s10dXV3bkruGfPfh8+vD1wcDchJOHDOwd7x3Jruro2OH/hdEjoxgk/Tjtx8lBM7A1/v4DMzAweL6NBA3dFv0IgF4ymAeTVoL57VnbmylULV6xcsGTpr+MnDNu4aRUhpHbtOvN+WfLyxdOfZ4y/cvXixB+nl26yaMEqfY7+suW/HT66/6efZo4MGhcVdba4uLjC/U+bOrdf38EPHt4N3b7x+YsndevW/1YSKyvrhQtWvn33asnSXx48iNu44c/WrdufOHno6zXHj5vi09734sUzIpHI3t6xuKho+45N586f6t9/6JDBIxX0woBiMKRSKd0ZANTLroUf+kypzTFgVWPdf0kkEmomRlFR0Z87t546dSTqQizV+tBuGYnC+9G8wTPLj9ZBgbT/bQSgbNHR53aFhfh26mpn55CdnXnz5lVnZ5eaUKNBNfBOApBXbWeXJo09L1+5kJubY2Fh2a5tx6ARmCkBCoMyDSCvBvXdFi1cRXcK0Fo4hAgAoNZQpgEA1BrKNACAWkOZBgBQayjTAABqDWUaAECtoUwDAKg1lGkAALWGMg0AoNZQpgEA1BrKNEB5Vg6cqu5LBf9PyjCz0qE7hJZDmQYoj8EkmSkiulNohozEQo7hd1zxFWSAMg1QXl0PblaykO4UmiGHV+Tsbkh3Ci2HMg1QXqM2xrlZRc9v8+kOou7uXuBxTVi1GuhXY12QHe7eAlCxc3+nmFjqGVvqWtrr0Z1FvZRICC9ZmP650Nic3TrAnO442g9lGuCbXt3N/fiyoERCeEnf16qWSMRCocjQUJO6Afn5Ah0dXV1d3SrXtLDT1dVn1mtq5NzYQCXRajqUaQDF+/HHH3fu3El3iu82Y8aMzZs3050CykOZBlCkx48fN23alO4Ucrlx40bHjh3pTgH/wSFEAIWJiIhITEykO4W8GjZs2LFjR7FYTHcQ+BfKNIDCSCSSnj170p1CXjY2NufOnePxeAkJCXRnAYIyDaAYISEhhJBRo0bRHUQxuFyura1tZmbmypUr6c4CKNMActu4cWObNm3oTqF43t7ebm5ueXl5dAep6XAIEUBeHz9+dHZ2pjuFskgkkocPH9rb2zs4ONCdpYbCaBpARmKxeMmSJYQQLa7RhBAWi+Xt7f3TTz9lZ2fTnaWGwmgaQEZTpkzZtGlTdc4H0Q7Pnj2ztra2tramO0iNgzIN8N1SUlLs7OzoTkGDjx8/Xr9+fcyYMXQHqVnQ9AD4PklJSZs2baI7BT2cnZ0FAkF6ejrdQWoWjKYBvs+mTZtmzpxJdwo68Xg8S0tLulPUICjTANUlFouFQiGXy6U7CP1OnDiRn58/cuRIuoPUCGh6AFRLfHz8xIkTUaMp/fv3r1Wr1s2bN+kOUiNgNA1QNT6f/+bNm5YtW9IdBGoijKYBqlBQUJCZmYkaXaFff/319u3bdKfQcijTAJX5+PHjyJEj69atS3cQNbV27dqXL1/izBelQtMD4JvEYvG7d+8aNmxIdxCo0TCaBvimV69eoUZXR3x8/IoVK+hOobVQpgEq1qdPH1NTU7pTaAZPT09nZ+
ezZ8/SHUQ7oekBUIH4+HgnJydzc9w2G+iH0TRAeTweDzVaBpmZmXv27KE7hRZCmQb4H3fv3t2+fTtqtAwsLCzy8vJQqRUOTQ+A/7Fr166xY8cymRjByCgtLc3S0pLFYtEdRHugTAOAIhUUFBQWFlpYWNAdRHtgyADwrw8fPixbtozuFBrPwMBgypQp7969ozuI9kCZBvhXWFhY37596U6hDZYsWRIbG0t3Cu2BpgcAgFrDaBqAUPdkycnJoTuF9nj79m1MTAzdKbQEyjQAIYQEBQUxGAy6U2iPevXqzZw5UyKR0B1EG6BMA5AvX74MGDDA2NiY7iBaZd++fSkpKXSn0AboTQMAqDWMpgFISkoKn8+nO4UWmjx5Ml5Y+aFMA5DNmzffv3+f7hRayNHR8cqVK3Sn0HhsugMA0M/Ozs7BwYHuFFpozpw5BQUFdKfQeOhNAwCoNTQ9AEhcXFxmZibdKbTT1KlTnz17RncKzYYyDUDCwsISEhLoTqGd6tev/+DBA7pTaDb0pgGIq6srl8ulO4V2mjx5cnFxMd0pNBt60wAAag1NDwD0ppWrc+fOuF6KPND0gJqrS5cuLBaLyWTy+XwDAwM2m81kMk1NTQ8dOkR3NK3i4eGRkJDg6elJdxBNhTINNZe5uXnpkcPc3FxCCIPB6NatG925tM3mzZvpjqDZ0PSAmqtNmzblrorn7Ow8YMAA+hJpp8LCwry8PLpTaDCUaai5Bg0a5OzsXPolg8Fo27atk5MTraG00P379xctWkR3Cg2GMg01V61atVq3bl32y0GDBtGaSDvVqlWrqKiI7hQaDGUaarRBgwZRV/OQSqVt2rRxdHSkO5EWcnZ2Dg0NpTuFBkOZhhrNycmpbdu2UqnUwcFh6NChdMfRWjk5OThFQ2aY6QHfLYenVSeV9QkYdjfmWZs2bYz1bbXpR+MYsPQM1GUcNmbMmC1btqDvLxuUaaiuvGxxbGTm+8eCWvUNs1JFdMdRpN7eK0gxORWaTHcQRWKySIlE6uFj6uVrSncW4uDgkJ2djTItG5wsDtXC54lPbP3SeZi9qbUui41bu2oGQbb49b0cBqOk40ArurOA7FCmoWoCvvjwxsTBs52rsS6onSf/ZAvzi7sMtaYxQ05ODpvNNjQ0pDGD5lKX1hWos9uRWV2G2tGdAmTk0cFMWsJIfCekMUN4ePjhw4dpDKDRUKahau+e5JlY69KdAmTHZDN4iXSWaScnJ11dvIVkhEOIUIW8LLGjqwFbB/1oDWZpzxFk03nUt1evXjQ+u6bDaBqqlpmiVfM6aiBxcYmwQEJjAD6f/+XLFxoDaDSUaQBQuvj4+C1bttCdQlOhTAOA0llZWVEn5YMM0JsGAKVr1KhRo0aN6E6hqTCaBgClKygoePPmDd0pNBXKNAAoXWJi4u+//053Ck2FMg0ASmdkZOTq6kp3Ck2FMg0ASmdnZ7d8+XK6U2gqlGkAULri4uJXr17RnUJToUwDgNLx+fwZM2bQnUJToUwDgNLp6OigNy0zlGkAUDpTU9Pg4GC6U2gqlGmoKVJTU1JS1eX+LMeOH/Dt4l1QUEB3EBWRSCQfP36kO4WmQpmGGiEpOXF4UO/Xr1/QHaSGysvLGzduHN0pNBXKNChdTg4/Ny9X2c9S+X2IJGKxlt2oSLN+HBaLZWtrS3cKTYVreoBSREVFRhzcnZ6eWse5LoPJtLWxW7xoNSEkJTU5NHTjg4dxurp69es1HDt2csMG7oSQhYtn13KszWazI8+dFBcXt27d/ufp87hcLrW302eOHTkazuOl29rad+ncfcjgkXp6etdvXF66bN7ypesPH93/6tXzYUNHB40Yt2//zqtXo9Iz0iwsLLv69xwzeiKLxUpJTR79w0BCyNJl85YS0q1b4LxfllQSphJCoXB/+K5r16IzeOk2NnZd/Xt6eXlP/3n86pWbW7duT61z7vyp9RtWHIw4eyvmWkjoxv79h964cVkgyHN3azJx4s8N6ruV7u3mzasHDu3JyEhr0thzzuxFVlb/3gfrUfz9nbuC379/Y2Zm7uXZYvy4KRYWloSQH8YNruNc19m57omTh0Qi4dnT19lszfgTNjIyioiIoDuFpsJoGhTvVsz1NeuWNPVotnD+Sh1d3Zcvnw0cMJwQkpnJmzZ9bG5eztQpcyZOmF5cXPzzjPEJCe+prY4cDU9NTV61cvPUKXOu37gcHvE3tXzP3r/+2rm1s2/XuXMWd+rod/jIvg2bVpY+15ZtawMD+q1bG9wrcACLxXrwIK5N2w4/TZrZzKtleETY8RMHCSEW5pYL5q8ghPwwZtLWzbuCho+tMkyFJBLJ/AUzjhwN9/Hp/MucxR07dPmS+KlJY08nJ+eo6MjS1f7550rjxk1tbf+9LVlxUdHypevn/7acn5M9a/bEsv3xfft39u83dMzoic9fPFm9ZjG18MHDu7/8OtW5tsuc2YsGDwx68uThrDmThMJ/771y797tV6+fr1qxafmyDZpSo6mxf1ZWFt0pNJXG/JpBg5w+fdTZ2WX2rAWEkIYNGw0a0uNO3C139yb7w3eZmZpv+GM7VV/8/QKCRvWNPH9y2pQ5hBBHR6f5vy1nMBhuDRv9c+vqvfu3J038mcfLiDgQtnDByo4dulA7t7Cw2rR59dQpc6gv+/Ud0q1bYOlTh4bsZTD+vdFMckriPzevDh4UpKurW79eQ0KIk5Nzkyae1HcrD1OhG/9ceRR/f+6cRQE9+pRd3qN777Dd23Pzco2NjHPzch8+ujdl8uzS706aOMPAwMCNkAb13YNG9T158vDkn2ZS39qwfgdVzcVi8c5dwTk5fBMT023Bf/QK7D992i/UOt7erUf/MPDe/ds+7X0JISw2e9GCVfr6+gr6XalIbm7uwIEDr169SncQjYQyDYqXnpHm6OhEPba0tOJwOHl5uYSQuLiY9Iy0gECf0jWLi4sz0tOoxxw9TmmFtbGxe/bsMSHkwYM4sVi8ctXClasWUt+ierK8jHTqy2bNWpZ96uzsrH37d967f4d6RiOu0bdCVh6mQnfvxerp6XXrGlhuub9fwK6/Q65di+7Te2BMzHWpVOrbyf/rzW1sbJ2cnF++ela6xNjYhHrgUseVet0KCws/fUpISvoSee7k/7yk/x/Mza2xxtVoqjfN4XDoTqGpUKZB8eztHV+/flFUVKSrq/vhwzuhUOjq2oAQkpWd2aaNz4Tx08qubGjI/XoPOmydkhIJISQzi0cIWbVys7WVTbmn+PzlIyHEQN+gdGFWVuaESSP09Q3G/vCTvb1jWFjol8RP3wpZ/TClsrMyLS2sWCxWueUWFpYtWrSJio7s03vg9RuXmzdvZWJiWuEejIyM8yo6mspgMqmmSnZ2JiFk9KgJHXw6l13B3NySeqDP0bwaTQjhcrnnz5+nO4WmQpkGxRs2ZPSsOZNmzZnUvFnLS5fON2zgTo1AjYyMc3L4Tk7O1d+VkZEx9aA6W505ezw7Oytk2x4bG1tCiLW1bSVlWoYwXK5RVnZmhd8K6NFn8e9zX7x4+vDh3V/mLP7WHngZ6bUqfUYu1
4gQIhIJvyuYRigsLNTEzwHqAIcQQfEaN246oP+wkpKS5OTEIUNGbd60k+r/NmvW8tmzx6/fvCxds7CwsPJdeXm1YDAYJ08drs4mubl8U1MzqkYTQnJy+aWz1vT0OISQTF5G6cqyhSksLLxyNap0iVgsph60ae1jYmK6cvUiNpvdrl2nCjePj3+QlJzYyN2jkqdwdHSysbG9cPFMaRixWFxcXFx5MPUnEAh69OhBdwpNhdE0KN7RYxGPHt0bPHgkg8Fgs9mJiZ/r1q1HfZa/c+fW3F+mDB4UZGZmfvdurKREsmLZhkp25ehQq3+/ocdPHJy/cGb7dp0yM3mnTh9ZvWoLdUj4fBNoAAAgAElEQVSwHE9P75OnjoTt3t6oUdObN6/GxcWUlJRQx+WsrW3s7RyOHAvn6Ovn5ub07zdUhjD+fgGnTh9Zs/b3V6+eu9at/yHh3YOHcX/tiGAymWw2u1NHv9Nnjvl28jcwMCi71abNq5o3b5WcnHj8xEFzc4t+fYdU8hQMBmPK5NmLf587ZdqY3r0GlkgkUdGR/v4B1FQZjYahtMxQpkHxGtR3P3osovSgHyGkV2D/WTPnO9g7Bm8N2/7n5ogDYQwGo169hpXXLMqUybOsrW1Onjx8795tCwtLn/a+VpbWFa7ZwafzqJHjT546curUkTZtO4QE71m9ZvHJU4fHjJ7IYDAWLly17o+lwSHrra1tfTt1lSGMnp7ehvU7du7cduny+chzJ2xt7X07dRWLxbq6uoQQt4aNT5851qVz93JbicXiHX9uKSoSNW3a/KeJMwwNDSt/Fp/2vqtXbt69Z0dI6AZDQ65HEy8Pj2ZVvkpqjsvlXrhwge4UmoqhWecygerlZYmPb0scMOP7WqUSiYQ61FZUVPTnzq2nTh2JuhCrQfN8ZXDixKE9e/88fixaR0eHWnLs+IGQ0I3nzv5Tbnytem8f5vLThZ2HVPzvTTXQm5aZNv/ZAF2io8/tCgvx7dTVzs4hOzvz5s2rzs4umlKjp88Yn5Dw7uvlbdt2/O3XpRVu8vRpfFR0ZFR0ZNCIcaU1GsoSCASBgYHXr1+nO4hG0oy/HNAstZ1dmjT2vHzlQm5ujoWFZbu2HYNGaMxldxYvXF0sruCQXSUz4e7dv/30WfykiTP696u6h1NjYSgtMzQ9oAqyNT1ArahD0wNkhgl5AKAKVc53hG9BmQYApcO8aXmgTAOAKqA3LTOUaQBQOsyblgfKNACoAnrTMkOZBgClQ29aHijTAKAK6E3LDGUaAJQOvWl5oEwDgCqgNy0zlGkAUDr0puWBMg1Vs7DXozsCyIWlw9Dnlr83mIqhNy0zlGmogpE5O+VDYZGwhO4gIDteotDAiM4yjd60PFCmoWquntzstCK6U4DsJGKpbW2ab+yN3rTMUKahaj59rS6HJ9GdAmR090KGoTHThtYyjd60PFCmoWo6eowxvzvvX/4++X1Bfo6Y7jhQLSUSwksS3T6dbmrJbtfbku446E3LDtebhuqSiKW3TvE+PMs3sdTN+KJVH2BLSkoYDAaDwaA7iCLpGTANjNhNfUwbtDCiOwvIBWUavluxSNveM9OnTx8zZkyzZhp/Z9iydHQZRJ3+7+BeiDLDTbbgu+noqdNfvyKUkCKWjlT7fi71gXshygO9aQBQBQylZYYyDUCsra2ZTPwtKBHmTcsDb00Akp6eXlKC83eUC/OmZYYyDUAcHBxYLJrPpdZumDctD5RpAJKUlCSRSOhOoeXQm5YZyjQARtNKh960PFCmATCaVgX0pmWGMg1ATE1NtewURHWD3rQ8UKYBCJ/Px+m4yobetMxQpgFA6dCblgfKNACxtbXF6S3Kht60zPDWBCCpqak4vUWp0JuWB8o0ADE0NMQhRGVDb1pmKNMAJD8/H4cQlQq9aXmgTAOAKqA3LTOUaQDi6OiIsxCVCr1peaBMA5DExESchahs6E3LDGUaAJQOvWl5oEwDEDYbd5tTOvSmZYYyDUDEYjHdEbQcetPyQJkGwIVMVQG9aZmhTAPgQqZKh960PFCmAUAV0JuWGco0ACgdetPyQJkGIJaWlrimh7KhNy0zlGkAwuPxcE0PpUJvWh4o0wCgCuhNywxlGgCUDr1peaBMA2DetCqgNy0zlGkAzJtWOvSm5YEyDQCqgN60zFCmAYi9vT2aHkqF3rQ8UKYBSHJyMpoeyobetMxQpgFA6dCblgfKNACaHqqA3rTMUKYB0PRQOvSm5YEyDUC4XC6u6aFs6E3LDGUagAgEAlzTQ6nQm5YHyjQAqAJ60zJDmQbALWuVDr1peeDdCTVXz54909LSqHbHnTt3GAyGVCrt1KnThg0b6I6mhdCblhlG01BzNW3aVCqVMv4fIcTOzm7cuHF059JC6E3LA2Uaaq4RI0bY2dmVfimVSj09Pd3d3WkNpbXQm5YZyjTUXI0aNaIG1NSXtra2w4YNozuUdkJvWh4o01CjDR061NbWlhpKe3l5NWrUiO5EWgu9aZmhTEON1qRJEy8vLwyllQ29aXmgTENNN2TIEHNzcw8PDwyllQq9aZkxcPIVyODOuaxPr/N1dJkZX4R0Z1EAsVjCYjIZTG04X9zYUtfIlO3ZydSxnho1GQQCQWBg4PXr1+kOopEwbxq+j6RYGvZ7QssA65bdDU2tdQn+y6uZImEJL1l4Nyo7L1vs1tKI7jj/QW9aZhhNw/fZPvf9gBl19Llol6m7f46n2dbWbd7FjO4gIC/8scF3uHE8w3eIHWq0RugwwCYlQZSdVkx3kH+hNy0z/L3Bd3j7SGBhr0d3CqguXX1m0vsCulMQzJuWE8o0VFdBbol1LQ7HEHc50Rg2TvoCvpjuFP9Cb1pmKNNQXSXSEl6yiO4U8B0kYmlBnlrclQbzpuWBMg0AqoDetMxQpgFA6dCblgfKNACoAnrTMkOZBgClQ29aHijTAKAK6E3LDGUaAJQOvWl5oEwDgCqgNy0zlGkAUDr0puWBMg0AqoDetMxQpgFA6dCblgfKNACoAnrTMkOZBgClQ29aHijToEZycvi+XbxPnzlGfSkWi4NG9du+Y3OFK69YtXDUmAFV7jM1NSUlNVnRSWV07PgB3y7eBQVqcXFRFUNvWmYo06C+GAyGkZExh8OReQ9JyYnDg3q/fv1Cobngu6E3LQ/cCxHUF4vF2h6yV549SMRiLbuNnFQqZTA08ta66E3LDGUalCUlNXn4iN6zZy0I7NmPWrJn718HDu4+evjC588f94fvevosnhDSsEGjSZNmNKjvVuHmhJCgEWPHjZ1MLbx6LXrvvr/S0lKca7uUlJRQC4uKivbt33n1alR6RpqFhWVX/55jRk9ksVgpqcmjfxhICFm6bN5SQrp1C5z3yxJqz6GhGx88jNPV1atfr+HYsZMbNnCv/GcRCoX7w3dduxadwUu3sbHr6t/Ty8t7+s/jV6/c3Lp1e2qdc+dPrd+w4mDE2Vsx10JCN/bvP/TGjcsCQZ67W5OJE38u+wPevHn1wKE9GRlpTRp7zpm9yMrKmlr+KP7+zl3B79+/MTMz9/JsMX7cFAsLS0LID+MG
13Gu6+xc98TJQyKR8Ozp62y2hv3lojctDzQ9QFnsbO3ruTaIvnSudMmly+c7dvQzMTFNTU0WFYlGBo0fPWpCamryvN+mC4XCcpubmZovX7a+bD26fOXi8hXzLcwtp02d26JFm/cf3lLLWSzWgwdxbdp2+GnSzGZeLcMjwo6fOEgIsTC3XDB/BSHkhzGTtm7eFTR8LCEkM5M3bfrY3LycqVPmTJwwvbi4+OcZ4xMS3lfyg0gkkvkLZhw5Gu7j0/mXOYs7dujyJfFTk8aeTk7OUdGRpav988+Vxo2b2traUV8WFxUtX7p+/m/L+TnZs2ZPLNsf37d/Z/9+Q8eMnvj8xZPVaxZTCx88vPvLr1Oda7vMmb1o8MCgJ08ezpozqfRluXfv9qvXz1et2LR82QaNq9EU9KZlppG/b9AUPXv227xlTWpqiq2t3fPnT5KTE3/7dSkhxM+vh79/ALVOgwbus2ZPevosvoV367Lbcjic9u06lX7AF4lEwSHrPTy8/lgXwmKxCCFJSV/evX9DlenQkL2layanJP5z8+rgQUG6urr16zUkhDg5OTdp4kl9d3/4LjNT8w1/bKeKnb9fQNCovpHnT06bMudbP8WNf648ir8/d86igB59yi7v0b132O7tuXm5xkbGuXm5Dx/dmzJ5dul3J02cYWBg4EZIg/ruQaP6njx5ePJPM6lvbVi/g6rmYrF4567gnBy+iYnptuA/egX2nz7tF2odb+/Wo38YeO/+bZ/2voQQFpu9aMEqze0bCASCwMDA69ev0x1EI6FMgxJ16dx9x5+bL1+5EDRibPSlcy4uro0bN6WODd68de3I0fBPnxIMDAwIIdlZmZXv6umz+Jwc/sABw6kaTQhhsv67K2N2dta+/Tvv3b+Tl5dLCDHiGn1rP3FxMekZaQGBPqVLiouLM9LTKnnqu/di9fT0unUNLLfc3y9g198h165F9+k9MCbmulQq9e3k//XmNja2Tk7OL189K11ibGxCPXCp40oISc9IKyws/PQpISnpS+S5k2W3Tf//YG5ujTW3RlOcnJzojqCpUKZBibhcbmffbpevXBgyeOS165dKW8z79u/avWfHgP7DJoyflpnFW7psXom0pPJdpaenEkJsbe2//lZWVuaESSP09Q3G/vCTvb1jWFjol8RP39pPVnZmmzY+E8ZPK7vQ0JBbyVNnZ2VaWlixWOXv1WthYdmiRZuo6Mg+vQdev3G5efNWJiamFe7ByMiY+v9RDoPJpJoq2dmZhJDRoyZ08OlcdgVzc0vqgT5Hs2s0l8vdt28f3Sk0Fco0KFfPnv3OXzi9P3yXWFzs16UH1b44cHB3z4C+U6fMLjtgrJypiRkhhM/P/vpbZ84ez87OCtm2x8bGlhBibW1bSZk2MjLOyeE7OTlX/0fgco2ysise7Af06LP497kvXjx9+PDuL3MWf2sPvIz0WpU+I5drRAgRiYTfFUyzFBYWavoHArrgECIol7tbY9e69cMjwvy69DA0NCSECIWFIpGo/v/PfMjJ5RNCqGkbbLYOIaTCgWfduvWZTOblKxXMFsjN5ZuamlE1mtph6SQ8PT0OISSTl1G6crNmLZ89e/z6zcvSJVUe2vLyalFYWHjlalTpErFYTD1o09rHxMR05epFbDa7XbtOFW4eH/8gKTmxkbtHJU/h6OhkY2N74eKZ0jBisbi4uLjyYBoE86blgdE0KF3Pnv22bF3bq9e/ZwyamJi6uLieOHnI3NwiXyDYu+8vJpP54cM7QoihoaGDveORo+EmJqa9AvuX3YmNjW2P7r3PnT9VJBK1bNk2M5MXF3fLzMyCEOLp6X3y1JGw3dsbNWp68+bVuLiYkpIS6rictbWNvZ3DkWPhHH393Nyc/v2Gjh414c6dW3N/mTJ4UJCZmfndu7GSEsmKZRsqye/vF3Dq9JE1a39/9eq5a936HxLePXgY99eOCCaTyWazO3X0O33mmG8nf6rJXmrT5lXNm7dKTk48fuKgublFv75DKnkKBoMxZfLsxb/PnTJtTO9eA0skkqjoSH//gIEDhsv32qsRDKVlhtE0KJ1flx7NvFrUc21QumTRglX6HP1ly387fHT/Tz/NHBk0LirqLDV4XLBgpaOjU9mJbqWmTZ3br+/gBw/vhm7f+PzFk7p161PLO/h0HjVy/KnTR1euXFAsLg4J3uPk5Hzy1GGq/C1cuMrAwDA4ZP3FqLPZ2VkO9o7BW8MaNfKIOBAWErqBn5NNtWIqoaent2H9jm5dAy9dPr9565q792I7+HQpHVC7NWxMHSwtt5VYLN7x55Zjxw94eDTbtOFP6pNEJXza+65euVmHrRMSumFf+C4bGzsPj2bVe4E1AOZNy4OhZedogfIIcsRHNiYOmqW1zVPZnDhxaM/eP48fi9bR0aGWHDt+ICR047mz/5QbX6ve24e5/HRh5yHW9MagoDctMzQ9AAghZPqM8QkJ775e3rZtR2qu99eePo2Pio6Mio4MGjGutEZDhTBvWh4o0wCEELJ44epicQWH7CqZCXfv/u2nz+InTZzRv19lfWegYCgtMzQ9oLrQ9NA4atX0AJnhECIAqAKu6SEzlGkAUDrMm5YHyjQAqAJ60zJDmQYApcO8aXmgTAOAKqA3LTOUaQBQOvSm5YEyDQCqgN60zFCmAUDp0JuWB8o0VEteXl50dHTpXWIBvhd60zJDmYZvEgqFUVFR1HUYjh49+vzpCxNLXbpDwXdg6TB1OWrxN47etDzU4lcI6qOkpOTGjRvnzp0jhERFRd24ccPa2poQMnbs2EXL5vASRRIxri6gMbJShAZG5e8NRhf0pmWGa3oAIYTExcUlJSX1798/Njb22LFjgwcPbt269derXdyb5t7GzMwGY2rNcOdchps317E+6qNmw2i65oqPj4+IiCCEvH//fu/evRwOhxDStm3bjRs3VlijCSEtu5ndOJqi8qQgi7cPc8VFEvWp0ehNywxlumb5+PHjiRMnxGJxUVHRtm3bJBIJIaRu3bqhoaEBAQFVbm5uq9t9tN2Z0M8FuRKV5AVZSMTS57H81ISCHmNs6c7yL/Sm5YHrTWs/Ho939epVV1fXZs2a7dmzx9nZmbqJ399//y3D3qwcdf1H2NyNSk9+X1jbnZvDK6p8fZFQWPq4bH+NGryrpxKJhMlkEgaD7iCyYBCSnljIdcgcPq053Vn+B3rTMkNvWjvl5uZev369Vq1aXl5eO3fuzMrKGj16tK2tIsdWooKS7PSikpIq3j/jxo0r+yWDwZBKpcbGxhMnTmzYsKEC8yjQzp07mzRp8q3Oj5rjGLKKSfbu3bvd3d179+794MGD5s3Vq17D98JoWnsUFRXduHGDzWb7+vru37+fx+M1a9aMEPLjjz8q4+n0DJi2zlWPiFmGuampqWWXcDicYQMmdA7wUkYqhfDt4cnn8+1dNHf0ZzNv3jzq0dOnTydNmnTu3Dlqxg6NcC9EmWE0rdmkUunt27ezsrICAwOjoqKuXbsWFBTUuHFjunP9j+bNmzPKNBDatGmzbds2WhPVLCUlJQUFBVwut1evXt26dZs
6darqM+BeiPLAIUSN9Pjx44sXLxJCrly5cvDgQWNjY0JIt27d1qxZo1Y1+uXLl5MnTy47hnJ0dFy7di2toaomlUq1qaAwmUwul0sI2b9/P9X4+vjxY0REBJ/PV2UMDKVlhjKtMT5//nzo0CFCSGpq6pYtW6hJGn5+ftu2bevQoQPd6cp79+7d77//HhISMnr06Fu3blELzczM5s6da2BgQHe6KjAYjJCQkA8fPtAdRMFMTU0HDhxICLGzsxMKhQcOHCCEvHjxQljmMK+S4Joe8kBvWq0JBIKbN296eXnZ2tquWrXK1dWVEGJjYxMWFkZ3tG9KTEzcu3fvkydPpk6d6uPjQy20tLTMycnp0aNHu3bt6A5YLTNnzqQ7ghLp6emVHtrNyMiYMGHC6tWrS39ZSoLetMzQm1Y7VLvZ1tbWxcVlzpw5HA7nl19+odoaai4zMzMkJOThw4fTp0/v3Llzue8OHjz4yJEjNEWDKiQmJjo6Os6YMcPJyWnatGk6OjqK3T960/JAmVYXr1+/ZrPZdevWXbBgQW5u7rx58xwcHOgOVV0CgSA0NDQmJmbs2LF9+vShO44C8Hi8+Ph4Pz8/uoOoVEFBwcmTJ7t06WJra3vu3LmePXsqas8CgWDQoEHoe8gGZZpOWVlZeXl5tWvX3rBhw4MHDxYuXOju7k53qO9TXFwcEhLy7t07Hx+fIUOG0B1HYYRCoZ+fX2lXvQZas2bNkydPDhw4kJOTY2JiQnecmk0KKpeamiqVSvfs2ePn53fv3j2pVJqfn093KFmEhoa2a9du3759dAdRilOnTuXk5NCdgn5Xr14dO3bsx48f5dxPQUGBghLVOJjpoSICgYAQEhkZ2aZNm8ePHxNCunfvfunSJW9vb0KI+k9+KOfIkSPe3t46Ojq3bt0aOXIk3XGUok+fPhpxSEDZfH19p02blpSURAiJjo6W7QpKuKaHPFCmle7BgwcDBw48c+YMIaRx48Y3btzo2rUrNWGD7miyOHTokI+PT2Fh4b1798aPH093HCV68OBBTW56lOXp6dm2bVtCiEQi8ff3z8jIkGEnmOYhM/SmlSIpKWn9+vU2Njbz5s178eKFvr5+nTp16A4lrxMnTty4caNWrVqTJ0/WuOG/DO7cuRMeHh4cHEx3ELWTm5trbGy8cOHCoKAgtb0wizZBmVaYoqKiXbt2ZWZmLlq06NWrV+np6T4+PgzNvMpaOdHR0Vu2bGnbtu3kyZPNzMzojqMihYWFJ0+eHD58ON1B1NStW7cOHToUHBxMVe0q18e8aZmhTMsrNjY2Ojp6yZIl6enpZ8+e7datm6OjI92hFObKlSshISE+Pj7Dhg1T7AX2QGu8evVq48aNS5cutbOz+9Y6mDctD5yFKAs+nx8VFdWwYcOmTZvGxMS0atWKEGJtbV3uop0a7fbt2wcOHNDX19+0aVPt2rXpjkOPc+fONW3aVJv+7ypDw4YNJ06cGBcX17dv36SkpG/N98dQWmYYTX+Ht2/fMhgMV1fXFStW6Orq/vjjj1rZAXj27NnWrVt1dXWnTZvWoEEDuuPQaePGjTY2NiNGjKA7iMb4448/cnNzly1bph3tPjWBMl215ORke3v7Xbt2Xb58efny5fXq1aM7kbK8f/8+ODiYzWYPHToU15KnPs6npKT4+vrSHUSTnD9/vkOHDoWFhVZWVmWXozctM5TpyqSkpIwdO3bEiBFBQUF8Pt/U1JTuRMqSnp5+4MCB2NjYqVOnquH19kDj8Hi8ESNG/Pnnn87OzuhNywnzpssTCoUbN26kbn6ho6Ozd+/eoKAg6iKQdEdTiuLi4vXr148ePbpJkyZHjhxBjS6rqKgoPDyc7hQaydLSMiIi4uXLl4QQkUiE3rQ8MJr+V2pqalRU1OjRoxMSEmJjY3v16lUTzkDbuXNnbGxst27dhg4dSncWNeXv73/48GFzc3O6g2iwMWPGBAUF1bTrWClQTR9Ni8Xi7OxsQsiKFSuogx516tQZMWKE1tfow4cPt2/fXiKR7N69GzW6Er/++qtYLKY7hWbbs2fPs2fPZDvLHGr6aDoiImLr1q2nT5+uUTOCo6OjT5065ezsPH36dA6n6nvOAsiP6k2PHj26efPmHh4edMfRMDVx3vTFixcLCwv79etXt27duLg4uuOozsOHDzdv3uzg4LB06dJyR+HhW+7du1dcXExd0QLkoa+vP2bMmLFjxwYHBxsaGtIdR5PUuNH0o0ePjh8//vPPP9eoOvXly5fQ0FAejzdjxoxGjRrRHUeTnDt3Li4ubtmyZXQH0R4CgeDjx49qdW9lNVdTRtOnT58+efLknj17mjRp4uXlRXcc1RGJRJs2bbpz586sWbMwi0MG7u7uaWlpdKfQBqXzprlcrrW1dVBQEGbRVJP2j6ZTU1NtbW137do1atQoXV1duuOoVEREREhIyKxZs6j7SQPQ5et50y9fvmQymTX8NNdq0uaZHmKxeMaMGenp6YSQ8ePH16gaffHiRX9//4KCgtjYWNRoeYjF4kuXLtGdQhuUmzft5ubm4uJy7do1+hJpDG0eTd++fVsikbRv357uICr1/PnzdevWOTo6zp49G7N9FaJFixZxcXFMpjaPaegiEolGjhyJW85XTjvL9JcvXz59+lTTCjSPx9uwYYOent7AgQNxfEaBIiIiBgwYgMmLcvrWNT0kEkleXp62nuWrEFpYpu/cuXP06NENGzbQHUSltm3bFhkZOXv2bOoOXgBqpfJremRlZb18+bJdu3Yqz6UZtPBzXOvWrWtUjT5x4kSnTp2MjIyioqJQo5Vh165dst39D8qq5Joe5ubmb9++3bZtm2oTaQxtG00/f/7cxsbG0tKS7iCqcPfu3T/++MPT03Pu3Lk16gCpio0cOfK3335zd3enO4iWS0xMNDU15XK5dAdRO9o2b3rr1q1//PEH3SmULiUlZd26dUKhcO3atS4uLnTH0XIDBw7EwVj5VXm9aUdHxzdv3tSvX1+FoTSDVpVpqVRqaWmp9VdN2rRpU1JSUr9+/XC6imr06dOH7ggar5rXm3779m14eDjO+SxHq3rTDAZj5cqVdKdQomPHjrVq1crKymr9+vWo0Spz5MiRL1++0J1C41XnetM9e/Zs1qxZYmKiShJpDK0q09R9vrOysuhOoXj37t0bNGjQ27dvY2JiqNsUgMq8fv2ax+PRnUKzcbncCxcuVGfNvn374h7B5WhV04M6peXjx4/Dhw+nO4jCpKenow1NL19f31q1atGdQuNV/16Ie/bs8fDwaNasmfJDaQZtG0136NChqKiI7hQKs23bttGjRwcGBgYHB6NG06V9+/Y1ZO6Q8ggEgh49elRz5Q4dOqxevVrJiTSJtk3I0xqnT58+f/58mzZtxowZQ3eWmu7EiRMtW7bEJ3F5CASCQYMGVbPvQd2SlM1ms9na9nFfNto2mqauOqTRd0V69OjRsGHDHj9+vGXLFtRodRAVFZWamkp3Cs1W/d40hcViFRQUKDORJtHCf1anTp2ytLT09vbu0aNHSUlJVFQU3YmqKysrKyws7NWrV0
uXLsXsUfUxfvx4dJzkV/3eNCFER0end+/ee/futba2VnIuDaBVTY++ffsKBAI+n1/6QzVr1mznzp1056qWHTt2HD9+fNGiRZhppyaaNWvGYDCoGxlTs/IJIR4eHrt376Y7muap5rzpso4dO8bhcAIDA5WZSzNoT9Nj2LBhiYmJfD6fmkBN/XW1bt2a7lxVu3jxYu/evdls9qVLl1Cj1Ufr1q1LazT1pjI1Nf3xxx9pDaXBqj+UpgwcOBA1mqI9ZXrdunW1a9cuu8Tc3FzNr+f5+vXrsWPH3rp1KyIiYvz48XTHgf8xYsSIcme0NmjQAPeulc339qYpd+7cEQqFykmkSbSnTNeqVWvChAlmZmalS/T19dX29qwikWj58uVr1679+eefV6xYYWRkRHciKK9du3YNGjQobaAZGxvjxCJ5FBYWfu8m169fj4yMVE4cTaI9ZZoQ0q1bt4CAAOry7VKptF69eup5ta39+/f7+vo2adIkLCysadOmdMeBbwoKCjIxMaEe169fH0NpmX3XvOlS/fr1KykpUU4iTaJVZZoQMnPmTE9PT6lUymKxWrVqRXec8m7dutWrV6/MzMzY2Ni+ffvSHQeq0K5dO1dXV6lUaqhGprAAACAASURBVGJiMmrUKLrjaLbv7U1TXabBgwcrJ44m0cIJeevXrx82bFhBQYFaNaYTExPXrl3LYrH+/PNPe3t7uuNAdY0ZM+bNmzeurq4YSstDtt40dYC9Xbt2NbwrWMWEPKmUPLySnfZZWJAnUWEqeYmEwrT0dCcnJ7qD/EsikSR++WJlbW1gYCDbHrimOmwdYlNbv0k7DbhMa9pn0fvHgvxcSQ5PG07cT0pMNDMzMzA0pDuIvAxNdNg6xNaZnnfRd82bLrVw4cL27dt3795dOaE0Q2VlOjOl6OAfn718zU0sdTlclmqDwf9gspg5GaLCPEnCs7whs2qxdRnV2Igez2Jz3z/Ot3LiWDtwCN416oTJZObwigoF4oQneUNmq/RdJMO8acrDhw8FAkENn6j6zTKd9ll06xSv62gHlUeCymSnFf1zPDXoN3X5oFDO05jcL28Kffrb0B0EKpOdXvTPMZW+i773mh5QVsWHEKUl5PrRdN+hdirPA1Uws9Ft7md55aA63kGVlyj68ESAGq3+zKx1vf2trhxIV9kzytybzs/PDw8PV0IiTVJxmU58V6ijx9TR07Z5INrBsb7By7s5dKeowNvHAqta3918BFo41NN/9SBXlbPdZJg3TQgxNDQMDg4uLi5WQiKNUXEhzk4rsq4t48EuUAEnd8OMRBHdKcoT8CVWjhy6U0B11Xbn8lT1LpJt3jRl3rx5+fn5ik6kSSqekCfMl0g1aWZHjSPKlxQXqd20/1xeEQMfwDSHit9FMkzzoOAMA/xVAYDSydybpq73/erVK0Un0iQo0wCgCrL1pgkhL1++vH//vqLjaBKUaQBQOnl60/7+/u7u7opOpEm08GRxAFBDMvem1fY6lyqD0TQAKJ08vel3797V8PNiUKYBQBVk7k1nZGScO3dO0XE0Cco0ACidPL1pFxeXGn7pJZRpAFAFmXvTNjY2NfymiCjTAKB08vSm+Xx+Db+sB8o0AKiCzL1pgUBw7NgxRcfRJCjTAKB08vSmTU1Na/jNglGmAUAVZO5Nc7ncgQMHKjqOJtHCMi0QCN68VfUVAHr16bR9x2YVPynI78XLZyKRulxrUIvfRfL0pgsKCv7++29FJ9IkWlimx08YeuHCabpTgAa4GHV2ytQxQqGMPVP4LjL3pkUi0cGDBxUdR5OoS5n++l5fld9LtxJFRRp5m1SZf16QmfqMoxVFbd9F8vSmDQwMxo8fr+hEmkSR1/Q4f+H0iZOHPn/+yOUatW3TYdzYyWZm5mKxePeeHVHRkTk5/Nq164wZPbF9u06EkOs3Li9dNm/50vWHj+5/9er5sKGjB/Qf1re/36SJP7999zom5nq9eg23bt5FCDl95tiRo+E8XrqtrX2Xzt2HDB6pp6dHCBEKhfvDd127Fp3BS7exsevq33PE8B9GjOyTnZ116vTRU6eP2tjYHjoQWXnmp0/j9+7768XLp4SQpk2b/zBm0v37d/bs/fPokYsmxibUOitXL3rx/ElE+OmFi2d/THhfr17D+w/uMBjMVq3aTZ4008zMnFpNIMhbuXpRTMx1E2PToUNH9+n9bzdNKBTu+jvkytWLRUWiWo61Bw8e2dm369evwPBhY34YM0mBvw6NUOGLIxaLJ/4UxGaxQ0P2slis4uLiSZNH6ulxtm35m8VipaQmh4ZufPAwTldXr369hmPHTm7Y4N/r8nz926xfr+G0n8fpc/TXrQ2m1jl8ZP+OP7dcPB9z7Xr05i1rCCF9+/sRQn795ffu3XoRQh7F39+5K/j9+zdmZuZeni3Gj5tiYWFZ+U+hPu+iWTPn9+jeW2m/LrnI3JvW09MbOnSoouNoEoWNpvfs/fOP9ctrOdaePXPB4EFBKSlJbB0dQsj6DSsOH9kf2LPfgvkrbG3tFy2e8+TJo9KttmxbGxjQb93a4F6BA6gl4eF/29rYbVi/Y8rk2YSQPXv/+mvn1s6+XefOWdypo9/hI/s2bFpJCJFIJPMXzDhyNNzHp/MvcxZ37NDlS+InFou15Pd1RkbGPu19t27eteT3dZVnvnf/zszZE/PycidNnDHhx+klEolELO7WNVAikVy7Fk2tU1xcfOfOzc6du1FfZvDS3dwar1sbMm7s5Li4mF9+nSoWi6lvXbh4hs1iz5wx37lO3c1b1lA/ZklJyYKFM2/f/mfE8B9mzpjv6tpg+Yr558v0ZEpfgcCe/RX1u9AU33px2Gz27FkL3757ffrMMeqtlZycOP+35SwWKzOTN2362Ny8nKlT5kycML24uPjnGeMTEt5/67dZybO3atlu8KAgQsjqlZu3bt7VqmU7QsiDh3d/+XWqc22XObMXDR4Y9OTJw1lzJgmFwkr2o1bvorZt1PQO3PL0poVCYVhYmKITaRLFjKYzMtLDI8L8/QPmz1tGLRk6ZBQh5PPnj1HRkaNGjh8zeiIhpGOHLkGj+u3Z++fGDTuo1fr1HdKt27/nF+Xk8Akh7u5Nxo+bQi3h8TIiDoQtXLCyY4cu1BILC6tNm1dPnTLn/v07j+Lvz52zKKBHn7JJGjZwZ7PZFhaWTZp4Vhk7OGS9ra39tq1hurq6hJC+fQZRy1u0aBMVHUl9ef/+HYFA0KXzv+eqOtd2of623Ro2MjTkrly18O7d2LZtOxBCuvr3/PWX3wkhPu19Bw/pcf3GJQ8Pr39uXn3y9NHBiLOWllaEEL8u3QsLC46fOFgau+wrUNNU8uK4uzXu12/I7j3bra1sDh3e9/P0Xx0dahFC9ofvMjM13/DHdjabTQjx9wsIGtU38vzJaVPmfOu3+S1mZub29o6EEDe3xiYmptTCbcF/9ArsP33aL9SX3t6tR/8w8N792z7tfb+1H7yLqqmwsFC2AbVQKIyIiBg7dqwSQmkGxZTpBw/jJBJJn17lJ808fvKQENL+/9/iDAajhXfrS5fPl67Qr
FnLcpuUXfLgQZxYLF65auHKVQupJVTrjZeRfvderJ6eXreusr81U1KTP3/+OH7cFOqvq6zu3XotXTbv8+ePTk7O1/+5XLduPWdnl6/30LJlW0LIy1fPqD+w0j91Dodjb++YnpFGCLlz55ZYLB4e9N/nUIlEYmjIreQVqDkqf3HG/TA5Jub6ot/ntGrVrnevfz9sxcXFpGekBQT6lG5SXFyckZ5WyW+z+lJTUz59SkhK+hJ57mTZ5enpad/aBO+iahIIBHPmzNmxY4cM2+rr60+ePFkJoTSGYsp0VlYmIcTKyqbc8vx8ASHEzNS8dImxsUlBQUHpDSgN9MvfGJfD+e//bWYWjxCyauVm6//ds729Y3ZWpqWFFYvFkjkzPzuLEGL9VWZCSLu2HY2NTaKiI8eMnhgbc2P48B8q3APXkMtgMAoKC77+FpPFkkgkhJDs7EwLC8uN6//n3cli//eyf/0K1ByVvzgGBgadfbsdPLS3f7//+pJZ2Zlt2vhMGD+t7CaGhtz09NRv/Ta/Kw8hZPSoCR18Opddbm7+zd403kXV9+nTJ9k21NPTGzBggKLjaBLFlGku14j6E7K2/p/3q6WlNSEkNzeH+rBGFXQ2m83hVOv+00ZGxtQDJyfnr58xKzvzWxtW53g3NRipcCc6Ojp+fj2iL51zd2siyBd09u1W4R54vAypVFp5aTAyMubzs21s7KjDnlBW5S9OUnLiyVOHDQwMtgX/8deOCOrzspGRcU4O/+v3AzUgqPC3yWAwKo9R+m6h3sYikfDr/X8L3kXVJGdv+vDhw6NHj1Z0KI2hmEOIXp7ehJDz50+VLqEOibi5NWYwGHfiblELi4qK7sTdatTIo5qjYC+vFgwG4+Spw6VLSqdeenm1KCwsvHI1qtwzEkL0OfqZmbwqd16rVm0rK+uo6MjSDaVSaUnJvzda7t6tF4+XEbpjU5MmnjY2thXugTqG08jdo5JnadaspUQiOXP2vysSyDx7VPtU8uJIpdL165dbWFiFbNuTmZmxLfiP0k2ePXv8+s3LcptU8ts0NTGjPpZRUlOTSx/rc/SpQkl96ejoZGNje+HimdIYYrG4uLi4kh8B76Lqk3mmrFAo3Ldvn6LjaBLWkiVLvl6a9K5QIia2darb7zcxMc3MzIg8d/Ljx/f5Bfn3799Zs/b3du062ds5pKamnDx1mBAGj5exffumhI/v585ZbGfn8PHThxs3LvfrO7i0GScSCQ8d3te6dfvS+VXGxiZ5eXnR0efevH0pEonuxMWsWrPIy6uFhYVl7dout+/cPHfuZF5ebnZW5qXL53fu2hbYsz+DwXj79vXNW1fZbPbHTx902DqlU53KYTAYZmYWZ84ej4u7VVxc/PrNy23Bf+jp6tWtW48QYmFuee16dGLi5+HDxpTmuXot+vnzJ0KhMD099dSpI8eOH2jVqt3wYWMIIQcP7alXr2EL79bUmufOn+JwOH5dujs71713/05UdGROLj87O+tiVOS24HWBPftT8cq9AtX3Lj7XqYGBkZnO926oVC/jcm3rGHBNq5uqkhfn9Jljp88cXbxotbt7E1NT8337/6+9+w5o4m7cAP69LBJICBtERAXBAShYVHAULWAdiK1V6xZb0dZVV9/Wau1ra+2wrbaO6isV67aKq9aJFFQUB446UIviQAXZIYGErN8f6cvPV3ZIcnfwfP4iyd3xJIbHyzd339vQunXbtm28vbx8TiQePnHisFarfZz9cNu2jSmnT77W7/Va/jXlCvnhIwesra35AsHvhxL27tup0+nGjX2Xx+MJRdYHDu5+8PA+RahbGdc7tO/k6tri8OEDZ8+d0uvJrVvXf1r1rVqj7tQpoKanwOp30b1rpR4+IlsHS7yLDMdNx8TEGLc6RVFBQUGmDsUaJjtues7sBW5u7ocO7U09m+Ls5NKtWyiPyyOEzP7gYxsb8b79u0pLZW3beC9buqJrULf6b3b6tLkuLq779u26ePGco6NTn979nJ1cDMNV33+3bsOGVScSDx/6Y6+bm3u/vv01Go1AIJg6ZVZhYf6WrXF2Uvtp0+Z6ebWraeMR4QOEQuHmzRt+XrdCKrXz9e3Y0sOz8tFOHQOePs3uGxbx4ir29g4ZGTf27d9lZSWMHvJW7P8OklbF5/OXf7NmQ9yqpKRjhw7t9fDwjB4ynMfDJShJLS9OTs6z/2z4KSJiYPArPQghgwe9cS7t9A8/fNmxg39Ld4/VP238ef3Kbds3UhTl49PhzTfeNmytpn/NgQOis7Mf7dy1ecvWuFf7hI8cMW7b9njDKi3dPebNXRj3y5rVa77z8ekQPeStPr37ffXlyvhN69as/d7GRtw5IKhz5661Pwu8i+rJ6G+ShEJhcz7MgxBCVTuMe+FooUpJAvtVvx/aTHy6eL5Gq/nqy/+fY2HR4nl5z3PXr6N/6tuj8dm9oh3dvYw8X8BMEn7M7tLP0bU1s1LRi8nvouO/PgkZ5NCyHdP/vVQq1d69e0ePHk13ENqw7D/khpLL5aPHVn/Q3tQpH0QNfrPah04kHkk8eeTixXPff/ezmQMCC+BdZBJarda4Hery8vK4uDjUdJNlbW39n/Xbq33IViKtaa0jRw6oNepvvl5l+GoUmjm8ixpPLpdHRUUlJycbsa5QKGzmJ4s38ZrmcDgt3NwbulblSZIvWfr596YIBSyDd5FJGH3mkVAojI2NNXUcNmHKDHkA0ISJxeLjx48bt65SqcS1EAEAzM7oY72VSmV8fLyp47AJahoAzK4x800LhcJJk6o/1b6ZQE0DgCU05rhpXLIWAMC8xGLxyZMnjVtXqVTu3r3b1InYBDUNAIymUCg2bNhAdwo6oaYBwOzkcnn//v2NW1coFA4dOrQeCzZZqGkAsASjZ8izsbGZPn26qeOwCWoaAMyukfNNHz58uB4LNlnV1zTFJZTx10UBs+NbcQmpY7Z7y+NZcSkO41JBTbh8jiXfRUZfWby4uHjNmjWmjsMm1de0tYSnKK7tqsxAr+LnKrEd4070txJRihK8bVhDll8htrPQ7lhjjpsWiUSDBg0ydSI2qb6mnVoIlAqtxcNAvWgq9AIhh4E17dZaKCswcvwRLEyr1nN5lCWvLGH0WYhSqRRj09VwbS3kcMnj2wqL54G6XTqe7xci5TDva4XAvnY3zxarVTq6g0DdLh3P9wu15VhqbLMxx03LZLKDBw+aOhGb1Pi3PmRyi4wLxVk35JbNA3U4fzjP1pHXJazG+TPpNeYjzxNbnpQW1Xb9QKDd+cN5EgdeYN8GX5erMQxXSTdCQUFBM78WYvVXb6l09Necoly12J4nEjPrsnvNjZWIk5ddzuFSHu1EwZH2dMepTWmhJnFHbplc6+5to1XXfYl3sBgrESfviZJDkZbtRN36W/Rd1Jj5pouKihITE0eMGGGGXOxQR00TQkryNflPlAoZvhqiE5fHtXXgOrpbWUvYcQhO/tOKwmcqZVlT+IZj586dPXv29PT0rMeyjMbhcWwdeE4trKxtLf0uakxNQ901DdDMTZ06NTY2NjgYF2FplIqKCuOuDFBc
XHz58uXXXnvNDKHYgXnfQwFAU2T01VsePXq0ZcsWU8dhE9Q0AJidQqEwenBZIpH07NnT1InYBDUNUAehUEhROLuyUfR6fV5ennHrtm3bFtdCBIDaiEQi1HQj2djY7Ny507h18/Pzr169aupEbIKaBqhDaWmp0bO7gQFFUW5ubsate/36dVyyFgBqI5VKdTqcWtkoCoVi2LBhxq3r7OwcEhJi6kRswrh5IQCYhsPhlJSU0J2C3fR6fWFhoXHr+vv7+/v7mzoRm2BvGqAOdnZ2qOlGasycHrdv305NTTV1IjZBTQPUwd3d3ejZ3aCS0VcWT09PP3/+vKnjsAlqGqAOTk5Od+/epTsFuzVmvmlPT88uXbqYOhGbYGwaoA6tWrXiMHDeWLYx+hNJnz59TJ2FZTCnB0AdtFptaGjohQsX6A7CYjqd7uHDh23btjVi3aysLJFIZPTxfE0A9hEA6sDlcn19fTMyMugOwmIcDse4jiaExMfHp6enmzoRm6CmAerWu3fv27dv052CxcrKyiZPnmzcuq1atWoCs8g2BmoaoG7BwcFHjx6lOwWL6XS6zMxM49aNjY0NCAgwdSI2QU0D1C04OLiwsLCsrIzuIGxlY2Oze/du49ZNSkqSyWSmTsQmqGmAeunVq1dCQgLdKdiKoigHBwfj1t2zZ49KpTJ1IjZBTQPUy9ixY7dt20Z3CraSy+Xh4eHGrevr6+vo6GjqRGyCmgaoF2dn5wEDBiQlJdEdpNmZPXt2Mz9uHcdNA9SXRqPp1atXMz9x2ThGHzddVlaWkZHxyiuvmCcXOzTr/6MAGoTH4y1atGjlypV0B2Efo4+bvnr16qZNm8yQiE1Q0wANMGTIEJlMduDAAbqDsIzRx03zeDyjB7WbDMzpAdAwixcvnjx5slAofP311+nOwhpGHzfdvXt3M8RhGexNAzRYXFxcSkoKBqnrz+j5phMSEu7du2eGRGyCmgYwxrJly3799Vcc+FF/xs03vWnTJpFIZIY4bIKaBjDS2rVrr1+//sUXX9AdhAUUCsWIESMaupZarQ4PD3d3dzdPKNZATQMY74MPPggICBg6dOiDBw/ozsJoer0+Ly+voWvx+fzZs2ebJxGb4LhpgMbKzs7+9NNPu3btOnPmTLqzMJRery8qKmro+eKXL19WKpU9e/Y0Wy52wN40QGN5eHjEx8dLJJJhw4YlJyfTHYeJjJvTY+PGjRRFmScRm6CmAUwjJiZm/fr1hw4dmjx58q1bt+iOwyzGzekRGRkZEhJinkRsgkEPABO7cuXK6tWrHRwcpk6d2q5dO7rjMIJcLo+KisJHDeOgpgHMIikpaf369W3bto2NjfX29qY7Dv3Ky8sbdGjd9u3b27ZtGxoaas5Q7ICaBjCjxMTEXbt2SSSSd955x9/fn+44bBIaGpqSkiIQCOgOQj/UNIDZpaSkxMfHt2nTJjIyslevXnTHoUFDBz3Ky8vLysqa+TTTlfAVIoDZhYWFbdq0aejQobt27Ro5cuSJEyfoTkSDBo388Pl8e3t7c8ZhE+xNA1jUvXv3jhw5sn///vHjx0+cOJHuOEyUnZ09Y8aM/fv30x2EKVDTADQoLi7evHnz5cuXAwMDx40b5+TkRHciBlmzZo2jo+OoUaPoDsIUqGkAOm3dunXLli39+/ePjo728fGhO465KBSKmJgYoy8u3syhpgHod+LEiV9++cXR0XHSpEnBwcF0xzG9+n+FWFxcXFxc3KZNG4vkYgfUNABTpKWlHT9+PCMjY8KECQMHDqQ7juVMnDjx+fPnR44cIYTExsa+//77Xbt2pTsUg6CmAZjl7t27mzdvVigUwcHBY8eOpTtOo0yfPj0tLa3q/enp6S/enDNnzunTp/V6vZ2dXWRk5Mcff2zBjCyAA/IAmMXX13fp0qWLFi3Kzc0NDQ1dvXp1cXFx1cXGjx9PR7qGmTZtmpOTE/W/qg5oODs76/V6iqJKSkp2794dEhISHR1NU2QmQk0DMJGjo+PcuXNTU1PFYvFbb731448/ZmVlVT46YMCAzMzMZcuW0Zqxbn5+fi+de0lRVN++fV9a7MXJ8yiK0mg02dnZQ4cOtVRMpkNNAzAXh8OJiYk5efJk+/btP/zww6VLl16+fJkQkpeXp1arExMTExIS6M5Yh4kTJ77Ywq1bt656GRepVPriTYqiunfvjsu3V0JNA7DAgAED9uzZEx4eHh8f36NHD8MszDKZbMuWLYbiZqzOnTsHBgYaftbr9WFhYW5ubi8tI5VKK+fu4PP5ffv2XbduncWTMhdqGoA1QkNDV61apdVqK+/Jzs5etmxZtYPXzDF+/HjDDrW7u/vbb79ddQE7OzvD5HkikSg6Onr58uV0xGQu1DQAmwwePPilex48eDBv3jya4tRLQEBAYGCgXq/v16+fi4tL1QUkEgmfz7e1tY2JiVmwYAEdGRkNB+QBmJJSoXt4WyErUJfLdebY/q5du6r+zVIU5e7u3qdPH3P8RpMoKio6c+ZMeHi4tbV11UdLS0uTk5P9/Py8vLzoSEcbkYRj7yzw6izm1LrDjJoGMJnMa/IrScU2dnzX1iKd1iw1DU2JTkfyHisLnqkGxri5tLKqaTHUNIBpPLhZdu10yWujW9AdBFhGq9b/ufNZr6GONTU1xqYBTEBWoElOeI6OBiNw+VT4OPffVjyuaQHUNIAJXDtV3LG7Hd0pgK0oinToJr1+pqTaR1HTACZQ9LzC0V1IdwpgMUd3q4LcimofQk0DmIC8SCMQ4q8JjCew4iqKNNU+hDcWAACjoaYBABgNNQ0AwGioaQAARkNNAwAwGmoaAIDRUNMAAIyGmgYAYDTUNAAAo6GmAQAYDTUNAMBoqGkAAEZDTQPAP+Ry+d2/bzd+O5mZd2fNnjxwcO/5H04jhNy/nxk9tN+Z1GTjtpacktgvPPjRoweND8ZSPLoDAABTTJ4yKjSkj69Ph8ZsRK1WL1o819nZ9bPF30jEEkIIj8cTiyU8LtrGSHjhAOin1+ufPnvS0t3D3L+FoqhaFqioqH6+4wZ58PB+bm7OpwuX+fl1Ntzj6dlm+7aDjd8y69T5gtcTahqAHrcybqxZ+/39+387Oji1aeudmXln86a9AoFAqVTG/bLmZNLRigpVK4/WI0eOf61ff0LInoTtSX8eHzF87C+/rCkozPfx6TB/7iJPzzaGrV25emlD3Op79+7a2zsEBXab/O50R0cnQsikd0e2bePdpo333n07VSrl7l1HT59J2r//t/tZmSKRdfduoTOmz7ezsyeEjBoTVVRUuP/A7v0Hdru6uu3cfogQUlOYmmzeEhe/aR0hZMasd2xtpQf2nTx67Pdvvl1CCFn+7ZrgV3rU8iyuX7+6ZWvc9RtXCSEd2vu9997s9r4d6/96VlRUbN6yISnp2PO8XEdHp/6Rg2MmTuVyuYSQIUP7zv5gwZkzf6adP2NjIx4S9dbECbGGZ7fyp6/Pnj1FCOncOWjGtPl/Jh//z4ZVu3b84eLiSgi5ceNayqmT06fNNfyKFSu/On8h1fDK1PMF35eQKBQ29noRGJsGoEF
ubs78D9/n8XgLFywNCuqWmpoSPWS4QCDQ6XQLF805d+7U2DGT5sz+pF279l8s/eTwkQOGtTIybvz225Z58xZ9vuS7vOe5X33zmeH+9MsX/vXRjDatvebP+3Tk8HF//XV57vz3lEql4dGLF8/dvnNz2dIVX3z+vVgsvnXruqdnm6lTZg2JGpZ6NuWb5UsMi/37s28lEts+vfv9tDLu3599SwipPUy1+vWNjJk4lRAyJXbmgo8/J4QEBXabEjvzxWVqehY5OU9VFarx4yZPnDAlJ+fpxwtmVT6F+uByuenp50N7vvr+e3O6BnXfum1jwt4dlY9+/c1n7dq1X7liQ2TEoE2/rk9LO0MI2b4j/tixQ8PfGjN1yiyZrEQkEoWFRRBCUs+mGNY6cvTg8RN/GD5k6HS602f+DHs1okEveOM7GnvTAPQ4kXi4vLz8s0+/dnBw7NUr7Npfl9POnxkzOubU6aS/rl/Zse13JydnQkhE+IDy8rKEvTsGDRxqWPHLpSscHBwJIcOGjVr784oSWYnUVrpq9fIhUcNmzfyXYZng4JCJk4ZfvHSuT+9+hBAuj/fpwmUikcjw6Nw5n1R+EufxeFu3bVSpVFZWVh3ad+LxeI6OTgEBgYZH6wxTVatWrQ1jHV06d+3UKYAQ4urq1qVz15cWq/ZZREQMjIwcZFigfftOc+e9d/3G1W7BIfV8Sblc7to1v1Y+tafPsk+dTho5Ypzh5qCBQ8eOmUQIaeft+8fh/RcunQsJ6f0s56lIJBozOobH4w0e9AYhRCq18/XpcPZsyptvjCwvL09OOVFWVnbqdFJE+IBrf10uKio09HiDXvDGQ00D0CAvL9fGxsZQVRRFubt75OY+I4SkpZ3RaDRjxkVXLqnVam1sxJU3xqCBaAAAD8VJREFUhcJ//vhdXVsQQgry88rLyh4+zHry5PGhP/a9+CueP881/NCxo/+LlaFWq/fu23ki8fDz5zlWVkKdTldcXOTq6lY1ZJ1hjFb1WUhtpRRFnT7z52+7tz58mGVtbU0IKSosaNBmi4oKN2/ZcPFSWmmpjBBi+ALzpd/I5XKdnV0K8vMIIRHhA0+ePPrRxzOnT5vn5dXOsEBYWET8pnVyufxM6p+G/5z++GNfRPiAlJREV1e3Th39c3KeNegFbzzUNAANWrZspVAo7t/P9PJqp1arMzPvBAYGE0KKigocHZ1++G7diwtzedX8nfJ5fEKIVqctKioghEycMOXVPq+9uICDg5PhB5Hw/ytDr9d/snD2nbu3Jk6Y0qlT59Onk3bu2qzT66oNWf8wRqt8FpXj2m8NGz1l8syCwvwln39cU7BqFRYWTHlvrEhk/c6k993dPTZuXPs4+2G1S/K4PMNv7NG951fLfly3fuW7saMGD3pj9gcf83i8sLCIDXGr086fOXzkQGTEoKjBw2Knjnn06MGp00mREYMML0v9X3CTQE0D0OD1/lG792z7ZNHs/pGDr15L12g0MROmEEIkEtvi4iJX1xZWVlb13JRYLCGEqFTKyq8Ta3Ht2uX0yxcWfrI0InwAIeRJ9qOXFtDr9ZU/GxHGaCqVavuO+MGD3pgxfd6Le6b1d/D3hKKiwjWrNhk+Gbi4uNVU0y/q0b1nt+CQhL071v68wtW1xfhx77Z09/D16ZCQsP32nVsfzPzI29unY0f/b5YvqRzxaNALbhL4ChGABlKp3Yzp862shFlZ94JfCdmwfruHhychpGvX7lqt9uDveyqXLC8vr31THh6erq5uR44erFxSo9Go1epqFy6RFRNCKo+MNtzU6f7ZaRUJRQUF+ZULGxHGaEpluUql8v3voR0vBhPwBYQQmayk9i3IZMV2dvaVozclsuIX/8upluG7QQ6HM2L4WCcn57//e2pPWFjE7Tu3/Pw6e3v7EEKGDhl+69Z1w4hHQ19wk8DeNAANMm7f/Hb5klkz/sXj8zkczrNnTxwcHLlcbmTEoN8P7V23/sdnOU99fTpkZt49k/rnpo17ajlggKKo6dPmLf7sw+kzY6KHDNdptceOH4qMHDT8rTFVF+7UMUAgEGyIWz148Jv37/+9fUc8ISTrfqbhkO2AgKCTSUe379gkkdj6depsRBijSaV2Xl7t9u7b6eDgqJDLf938Hw6Hc/9+JiGkrVc7Doez4sevZkyfHxQYXNMWAgOD9+3/bWP8z35+XU6fTjp/PlWn05WUFEuldjWtsnffztSzKZERgwoK8vLz89q372S43zDuMXTIcMPNvn0j1/z8g+EYj4a+4CaBvWkAGri5tmjRouU3y5cs/XLh518s+GBO7PvTJiiVSj6fv/ybNVGD30xKOvbDimWXr1yIHjKcV9dwcJ/e/b76ciWfx1+z9vvNW+NcXVt0rnJwhYGzs8uihV/+nXn730v+lZ5+/ofv14eE9N67b6fh0alTZgUFBm/ZGrd9e/yTp4+NC2O0TxcuEwlFn3+xYNfuLe+/P2f8uHePHftdrVa3cHP/6MPPVCqV4Si6mrza57UJ4yfvP7D7yy8XqjXqNas3eXq22bd/Vy2ruLt7qCsqfl634o/D+4cNG/X2yPGG+1u6e7zStbthiIMQYmVlNXBAdOXNBr3gJkHV+bkAAOq0/etHvYe52bsK6r+KVqs1nHyh1WpPn/lzyecff//dz12DupkzJjDXowzFgxuywZNbVH0Igx4ANHj06MEHc2JDQ/q08/ZVVahOnTopFAo9WnrSnateNsStfnHAupKtRLpta20nvzTerNmTs7Iyq97fs2fYgo+WmPVX0wg1DUADGxtx+GsD0tJOn0g8LBZLAvwDZ89eYDhBmflGjhwfFTWs6v0cyuyDqIsXfaXWVPNlncmPgWMUDHoAmIARgx4AL6pl0ANfIQIAMBpqGgCA0VDTAACMhpoGAGA01DQAAKOhpgEAGA01DQDAaKhpAABGQ00DADAaahoAgNFQ0wAmIHbgV6gacEUogJeoK3Riu+onWUJNA5iAnRO/4KmS7hTAYvlPlDXNCYOaBjCBLn2kdy/VcRUogFrcuVTSuY+02ocwQx6Aady/obhxRtZvdDUznAHUQqfVn9zxrFeUo2vr6i8NjJoGMJm/r8ivphTbOghcWov0OvxlQR20WpL3uPz5I+XAGFfX1jVf7hI1DWBCZaW6h7fkskKNQqahOwuDqNXqU6dOhYeH0x2EWcS2PDsXfrsgCafW4WfUNACYnVwuj4qKSk5OpjsIK+ErRAAARkNNAwAwGmoaACzB29ub7ghshZoGAEu4d+8e3RHYCjUNAJbA5XLpjsBWqGkAsAStVkt3BLZCTQOAJUgkErojsBVqGgAsobS0lO4IbIWaBgBL8PPzozsCW6GmAcASbt68SXcEtkJNA4AlODs70x2BrVDTAGAJeXl5dEdgK9Q0AACjoaYBwBL8/f3pjsBWqGkAsIQbN27QHYGtUNMAAIyGmgYASxAKa7yIFNQONQ0AlqBUKumOwFaoaQCwBJyFaDTUNABYAs5CNBpqGgCA0VDTAGAJuCyA0VDTAGAJuCyA0VDTAACMhpoGAGA01DQAWEKHDh3ojsBWqGkAsI
Tbt2/THYGtUNMAAIyGmgYAYDTUNABYAo6bNhpqGgAsAcdNGw01DQDAaKhpALAENzc3uiOwFWoaACwhJyeH7ghshZoGAEuwtramOwJboaYBwBLKysrojsBWqGkAAEZDTQOAJfj7+9Mdga1Q0wBgCTdu3KA7AluhpgHAEnDJWqNRer2e7gwA0DS98847165dI4RQ1P9UTXp6Oq25WAZ70wBgLrGxsS4uLhRFGZraoEWLFnTnYhnUNACYS2hoqK+v74v36PX6oKAg+hKxEmoaAMxo1KhRTk5OlTddXV3Hjx9PayL2QU0DgBmFhoZ6eXkZftbr9d26dXtp/xrqhJoGAPMaO3asVCo17EpPmDCB7jjsg5oGAPPq1auXj4+PXq/v3r27t7c33XHYh0d3AABgHKVcpyjVlMk0yjKdWqVr/AYH95mqK3YPCxp9K03W+K3xrTgiMddawhXb8QTCpr+vieOmAeAfBU8r7l1X/H1FTnG55XI1T8ATWAt0TLzqCqVWqjUVWpGYx+Pp278i9vK3sXXk053KXFDTAEAKcypO7c1XKQklEEicrEVSK7oT1ZeiUCnPV1B6ja0999VhTtaSJnjFRdQ0QHOXvDs/66bCydtB4sTiKaFLnslzMwu79LHrMdCe7iwmhpoGaL4qlLqtyx45eztKXFhc0C+S5cjLi0rfnutBdxBTavqj7wBQrbJS7S+Ls1oFtmgyHU0IsXUTi93s1310X6uhO4rpYG8aoDkqLVLv/vGpV48mtddZSavR3TuXPWVZW7qDmAb2pgGao63LHrUJbkl3CnPh8jiegW5blj2iO4hpYG8aoNk5sC5HYC8V2QroDmJestwyG6EyfJQz3UEaC3vTAM3LzTSZvJQ0+Y4mhNi6Wj/+W/ksq5zuII2FmgZoXs4eKnD1caA7hYU4ezuc2ldAd4rGQk0DNCM3zsocPGx5Vk3wHJBq2dgLOXzBo9tldAdpFNQ0QDNy85xMJBXRnaJ6n38btefA1ybfLN/aKuNCqck3a0moaYDmolyuLcmvsLZjzYngJmHrYp11U053ikZBTQM0Fw9uldm3lNCdwtI4PI7UxfrJPSXdQYyHiUwBmovn2SqKZ65R6cz76YdPrH2ac1cidmjXNnhg5Pu2EidCyKIvw98a8tGNjORbd1JFQnFItzf795tsWEWr1SYm/5J2aX9FRbm31ytqtbmalOJyC5+pWnoLzbR9c8PeNEBzIS/W8K3Msmf2972LGzbPcnVpO/KNha/2HHP/wZV18dMrKv6p3Z17l7i7+U57d13XLgOPJ224dSfVcP++Q8tPJP/Swbfnm1HzBXxhudJcI8hcPldewuKTx7E3DdBclMk01s5m2Zve/8f3IcFvvhk133DTt12P5T+9fSczLaBTX0JI967R4WExhBB3N98L6QfuZqZ1at8r++nttEv7wsMmDYx4jxASHDT4XtZlc2QjhHAFXEVJhZk2bgGoaYDmgsfncnmm/wBdWPQsNy8rv/Bx2qX9L95fXJJr+EEg+OfYEi6XK7V1KZHlEUKu30omhLzac3Tl8hRlrg/3XD6HUJSZNm4BqGmA5oLDIxVKjdDU5x+WygsIIZH9Jnfu1O/F+yUSp2oycHg6nZYQUlycIxSKbaylpg1TLXW5xkqKmgYAxhNLuSUy018xSySUEELUapWLc5v6r2VjY69UytWaCj7P7Ketayq0YjsWX4ILXyECNBcObgJihqnWnJ087aRuFy//rqr4Z/YMrVaj0ahrX8ujZQdCyJW/jpk8T1U8HmXrwOJdUtQ0QHPR0ltUkmP6oykoiho6aI6sNH/V+ndTz+85fW7XT+vfPXthT+1rdfGLcHFuk3Dg64NHfky/eiTh929lpXkmz2aQ91DWypfFlz5ATQM0Fy6trLRqrUZl+nGPgE593xn3A5fLP3h4RWLyRnt7N682QbWvwuVyJ49f6duux7mLCYeOreJQHBtrO5MHI4QoipQObgKBiMVdh/mmAZqRMwfy8/P5du5iuoNYTn5WiW8At3MfS3xXaSYsHq8BgIYK6mu3/dvHtdR0xt2z23Z/WvV+Ps9KrVFVu8rM2DhXF5NdzirjTuq2PYur3q/X6wnRV3vQ3rR317m7+VS7Nb1O//x+0fBp3qaKRwvsTQM0L0m/5RUX8xxa2Vb7aEWFUq4orHq/RqPm8ao/WEJq68LlmmyHr6YAOp1Or9dzudWcnmMrca4pW+7dgvZdBIF9zTKcYjGoaYDmRa3S/7byScvOLegOYnbaCl3B/ecj57D+ko8sHlYHACPwraje0Q6Pr+XQHcTssi4+eX2CC90pTAA1DdDstO5o3THYOvduPt1BzCj7r5x+I5ykjiw+q6USBj0AmqmbaaXXz5a7dXSkO4jpPbqaEz7Skb0zl74Ee9MAzZRfiMQnkJ/dtEY/9Fr9vXPZoQOkTaajsTcN0Nw9vlt+el+B0N7aoRWLjyw2KHhQpKtQRY5xsXNuCmMdlVDTAM2dVk1Sf8+/fanU2cvexkEkELHsdAqVXK0oVj7LyO8+wLFbf3u645geahoACCGkrFR7NaX49sVSisOxdRUTiuJZcflCHsW8mZr1Wr1apVGrNJReX/S0VGDF8ethGxRux2mig7ioaQD4H/lPK57eLy98ViEv0ep0RF5Ux1x3liey4QmsKbGU59RC4OErkjo1qSGOqlDTAACM1kQ/JAAANBWoaQAARkNNAwAwGmoaAIDRUNMAAIyGmgYAYLT/A8izBqaQ15AeAAAAAElFTkSuQmCC",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "from IPython.display import Image, display\n",
+ "from langgraph.graph import END, START, StateGraph\n",
+ "\n",
+ "langgraph = StateGraph(OverallState, input=InputState, output=OutputState)\n",
+ "langgraph.add_node(guardrails)\n",
+ "langgraph.add_node(generate_cypher)\n",
+ "langgraph.add_node(validate_cypher)\n",
+ "langgraph.add_node(correct_cypher)\n",
+ "langgraph.add_node(execute_cypher)\n",
+ "langgraph.add_node(generate_final_answer)\n",
+ "\n",
+ "langgraph.add_edge(START, \"guardrails\")\n",
+ "langgraph.add_conditional_edges(\n",
+ " \"guardrails\",\n",
+ " guardrails_condition,\n",
+ ")\n",
+ "langgraph.add_edge(\"generate_cypher\", \"validate_cypher\")\n",
+ "langgraph.add_conditional_edges(\n",
+ " \"validate_cypher\",\n",
+ " validate_cypher_condition,\n",
+ ")\n",
+ "langgraph.add_edge(\"execute_cypher\", \"generate_final_answer\")\n",
+ "langgraph.add_edge(\"correct_cypher\", \"validate_cypher\")\n",
+ "langgraph.add_edge(\"generate_final_answer\", END)\n",
+ "\n",
+ "langgraph = langgraph.compile()\n",
+ "\n",
+ "# View\n",
+ "display(Image(langgraph.get_graph().draw_mermaid_png()))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can now test the application by asking an irrelevant question."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {},
+ "outputs": [
{
"data": {
"text/plain": [
- "{'query': 'What was the cast of the Casino?',\n",
- " 'result': 'The cast of Casino included Joe Pesci, Robert De Niro, Sharon Stone, and James Woods.'}"
+ "{'answer': \"I'm sorry, but I cannot provide current weather information. Please check a reliable weather website or app for the latest updates on the weather in Spain.\",\n",
+ " 'steps': ['guardrail', 'generate_final_answer']}"
]
},
- "execution_count": 6,
+ "execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
- "chain = GraphCypherQAChain.from_llm(\n",
- " graph=graph,\n",
- " llm=llm,\n",
- " verbose=True,\n",
- " validate_cypher=True,\n",
- " allow_dangerous_requests=True,\n",
- ")\n",
- "response = chain.invoke({\"query\": \"What was the cast of the Casino?\"})\n",
- "response"
+ "langgraph.invoke({\"question\": \"What's the weather in Spain?\"})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's now ask something relevant about the movies."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'answer': 'The cast of \"Casino\" includes Robert De Niro, Joe Pesci, Sharon Stone, and James Woods.',\n",
+ " 'steps': ['guardrail',\n",
+ " 'generate_cypher',\n",
+ " 'validate_cypher',\n",
+ " 'execute_cypher',\n",
+ " 'generate_final_answer'],\n",
+ " 'cypher_statement': \"MATCH (m:Movie {title: 'Casino'})<-[:ACTED_IN]-(a:Person) RETURN a.name\"}"
+ ]
+ },
+ "execution_count": 21,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "langgraph.invoke({\"question\": \"What was the cast of the Casino?\"})"
]
},
{
@@ -304,10 +1065,8 @@
"source": [
"### Next steps\n",
"\n",
- "For more complex query-generation, we may want to create few-shot prompts or add query-checking steps. For advanced techniques like this and more check out:\n",
+ "For other graph techniques like this and more check out:\n",
"\n",
- "* [Prompting strategies](/docs/how_to/graph_prompting): Advanced prompt engineering techniques.\n",
- "* [Mapping values](/docs/how_to/graph_mapping): Techniques for mapping values from questions to database.\n",
"* [Semantic layer](/docs/how_to/graph_semantic): Techniques for implementing semantic layers.\n",
"* [Constructing graphs](/docs/how_to/graph_constructing): Techniques for constructing knowledge graphs."
]
@@ -336,7 +1095,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.10.4"
+ "version": "3.11.5"
}
},
"nbformat": 4,
diff --git a/docs/scripts/prepare_notebooks_for_ci.py b/docs/scripts/prepare_notebooks_for_ci.py
index 4fb96152a9a73..b96e262946293 100644
--- a/docs/scripts/prepare_notebooks_for_ci.py
+++ b/docs/scripts/prepare_notebooks_for_ci.py
@@ -25,8 +25,6 @@
"docs/docs/how_to/example_selectors_langsmith.ipynb", # TODO: add langchain-benchmarks; fix cassette issue
"docs/docs/how_to/extraction_long_text.ipynb", # Non-determinism due to batch
"docs/docs/how_to/graph_constructing.ipynb", # Requires local neo4j
- "docs/docs/how_to/graph_mapping.ipynb", # Requires local neo4j
- "docs/docs/how_to/graph_prompting.ipynb", # Requires local neo4j
"docs/docs/how_to/graph_semantic.ipynb", # Requires local neo4j
"docs/docs/how_to/hybrid.ipynb", # Requires AstraDB instance
"docs/docs/how_to/indexing.ipynb", # Requires local Elasticsearch
diff --git a/docs/src/theme/FeatureTables.js b/docs/src/theme/FeatureTables.js
index afa27742aaee9..74b3a4525e10e 100644
--- a/docs/src/theme/FeatureTables.js
+++ b/docs/src/theme/FeatureTables.js
@@ -1138,7 +1138,20 @@ const FEATURE_TABLES = {
multiTenancy: true,
local: true,
idsInAddDocuments: false,
- }
+ },
+ {
+ name: "SQLServer",
+ link: "sqlserver",
+ deleteById: true,
+ filtering: true,
+ searchByVector: true,
+ searchWithScore: true,
+ async: false,
+ passesStandardTests: false,
+ multiTenancy: false,
+ local: false,
+ idsInAddDocuments: false,
+ },
],
}
};
diff --git a/docs/static/img/langgraph_text2cypher.webp b/docs/static/img/langgraph_text2cypher.webp
new file mode 100644
index 0000000000000..a5afd292cebae
Binary files /dev/null and b/docs/static/img/langgraph_text2cypher.webp differ
diff --git a/docs/vercel.json b/docs/vercel.json
index 869e2e6b506ee..236755ba7dae4 100644
--- a/docs/vercel.json
+++ b/docs/vercel.json
@@ -62,6 +62,14 @@
"source": "/docs/tutorials/local_rag",
"destination": "/docs/tutorials/rag"
},
+ {
+ "source": "/docs/how_to/graph_mapping(/?)",
+ "destination": "/docs/tutorials/graph#query-validation"
+ },
+ {
+ "source": "/docs/how_to/graph_prompting(/?)",
+ "destination": "/docs/tutorials/graph#few-shot-prompting"
+ },
{
"source": "/docs/tutorials/data_generation",
"destination": "https://python.langchain.com/v0.2/docs/tutorials/data_generation/"
@@ -113,6 +121,10 @@
{
"source": "/docs/contributing/:path((?:faq|repo_structure|review_process)/?)",
"destination": "/docs/contributing/reference/:path"
+ },
+ {
+ "source": "/docs/integrations/retrievers/weaviate-hybrid(/?)",
+ "destination": "/docs/integrations/vectorstores/weaviate/#search-mechanism"
}
]
}
diff --git a/libs/community/langchain_community/chains/pebblo_retrieval/enforcement_filters.py b/libs/community/langchain_community/chains/pebblo_retrieval/enforcement_filters.py
index 570cbdfa783f8..579b86acb0ebc 100644
--- a/libs/community/langchain_community/chains/pebblo_retrieval/enforcement_filters.py
+++ b/libs/community/langchain_community/chains/pebblo_retrieval/enforcement_filters.py
@@ -27,8 +27,9 @@
PINECONE = "Pinecone"
QDRANT = "Qdrant"
PGVECTOR = "PGVector"
+PINECONE_VECTOR_STORE = "PineconeVectorStore"
-SUPPORTED_VECTORSTORES = {PINECONE, QDRANT, PGVECTOR}
+SUPPORTED_VECTORSTORES = {PINECONE, QDRANT, PGVECTOR, PINECONE_VECTOR_STORE}
def clear_enforcement_filters(retriever: VectorStoreRetriever) -> None:
@@ -505,7 +506,7 @@ def _set_identity_enforcement_filter(
of the retriever based on the type of the vectorstore.
"""
search_kwargs = retriever.search_kwargs
- if retriever.vectorstore.__class__.__name__ == PINECONE:
+ if retriever.vectorstore.__class__.__name__ in [PINECONE, PINECONE_VECTOR_STORE]:
_apply_pinecone_authorization_filter(search_kwargs, auth_context)
elif retriever.vectorstore.__class__.__name__ == QDRANT:
_apply_qdrant_authorization_filter(search_kwargs, auth_context)
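The hunk above teaches Pebblo's identity enforcement to recognize the newer `PineconeVectorStore` class name alongside the legacy `Pinecone` one. A minimal sanity check of the widened constant set (import path as in the diff):

```python
from langchain_community.chains.pebblo_retrieval.enforcement_filters import (
    SUPPORTED_VECTORSTORES,
)

# Retrievers backed by either Pinecone class name now receive the
# identity enforcement filter instead of being silently skipped.
assert "Pinecone" in SUPPORTED_VECTORSTORES
assert "PineconeVectorStore" in SUPPORTED_VECTORSTORES
```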
diff --git a/libs/community/langchain_community/chat_models/moonshot.py b/libs/community/langchain_community/chat_models/moonshot.py
index 7290c52b76e8b..68f8fdc5a5a28 100644
--- a/libs/community/langchain_community/chat_models/moonshot.py
+++ b/libs/community/langchain_community/chat_models/moonshot.py
@@ -13,21 +13,142 @@
class MoonshotChat(MoonshotCommon, ChatOpenAI): # type: ignore[misc, override, override]
- """Moonshot large language models.
+ """Moonshot chat model integration.
- To use, you should have the ``openai`` python package installed, and the
- environment variable ``MOONSHOT_API_KEY`` set with your API key.
- (Moonshot's chat API is compatible with OpenAI's SDK.)
+ Setup:
+ Install ``openai`` and set the environment variable ``MOONSHOT_API_KEY``.
- Referenced from https://platform.moonshot.cn/docs
+ .. code-block:: bash
- Example:
+ pip install openai
+ export MOONSHOT_API_KEY="your-api-key"
+
+ Key init args — completion params:
+ model: str
+ Name of Moonshot model to use.
+ temperature: float
+ Sampling temperature.
+ max_tokens: Optional[int]
+ Max number of tokens to generate.
+
+ Key init args — client params:
+ api_key: Optional[str]
+ Moonshot API key. If not passed in, it will be read from the env var MOONSHOT_API_KEY.
+ api_base: Optional[str]
+ Base URL for API requests.
+
+ See full list of supported init args and their descriptions in the params section.
+
+ Instantiate:
+ .. code-block:: python
+
+ from langchain_community.chat_models import MoonshotChat
+
+ chat = MoonshotChat(
+ temperature=0.5,
+ api_key="your-api-key",
+ model="moonshot-v1-8k",
+ # api_base="...",
+ # other params...
+ )
+
+ Invoke:
+ .. code-block:: python
+
+ messages = [
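+ # zh→en gloss: system = "You are a professional translator; translate
+ # the user's Chinese into English."; human = "I like programming."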
+ ("system", "你是一名专业的翻译家,可以将用户的中文翻译为英文。"),
+ ("human", "我喜欢编程。"),
+ ]
+ chat.invoke(messages)
+
+ .. code-block:: python
+
+ AIMessage(
+ content='I like programming.',
+ additional_kwargs={},
+ response_metadata={
+ 'token_usage': {
+ 'completion_tokens': 5,
+ 'prompt_tokens': 27,
+ 'total_tokens': 32
+ },
+ 'model_name': 'moonshot-v1-8k',
+ 'system_fingerprint': None,
+ 'finish_reason': 'stop',
+ 'logprobs': None
+ },
+ id='run-71c03f4e-6628-41d5-beb6-d2559ae68266-0'
+ )
+
+ Stream:
.. code-block:: python
- from langchain_community.chat_models.moonshot import MoonshotChat
+ for chunk in chat.stream(messages):
+ print(chunk)
+
+ .. code-block:: python
+
+ content='' additional_kwargs={} response_metadata={} id='run-80d77096-8b83-4c39-a84d-71d9c746da92'
+ content='I' additional_kwargs={} response_metadata={} id='run-80d77096-8b83-4c39-a84d-71d9c746da92'
+ content=' like' additional_kwargs={} response_metadata={} id='run-80d77096-8b83-4c39-a84d-71d9c746da92'
+ content=' programming' additional_kwargs={} response_metadata={} id='run-80d77096-8b83-4c39-a84d-71d9c746da92'
+ content='.' additional_kwargs={} response_metadata={} id='run-80d77096-8b83-4c39-a84d-71d9c746da92'
+ content='' additional_kwargs={} response_metadata={'finish_reason': 'stop'} id='run-80d77096-8b83-4c39-a84d-71d9c746da92'
+
+ .. code-block:: python
+
+ stream = chat.stream(messages)
+ full = next(stream)
+ for chunk in stream:
+ full += chunk
+ full
+
+ .. code-block:: python
+
+ AIMessageChunk(
+ content='I like programming.',
+ additional_kwargs={},
+ response_metadata={'finish_reason': 'stop'},
+ id='run-10c80976-7aa5-4ff7-ba3e-1251665557ef'
+ )
+
+ Async:
+ .. code-block:: python
+
+ await chat.ainvoke(messages)
+
+ # stream:
+ # async for chunk in chat.astream(messages):
+ # print(chunk)
+
+ # batch:
+ # await chat.abatch([messages])
+
+ .. code-block:: python
+
+ [AIMessage(content='I like programming.', additional_kwargs={}, response_metadata={'token_usage': {'completion_tokens': 5, 'prompt_tokens': 27, 'total_tokens': 32}, 'model_name': 'moonshot-v1-8k', 'system_fingerprint': None, 'finish_reason': 'stop', 'logprobs': None}, id='run-2938b005-9204-4b9f-b273-1c3272fce9e5-0')]
+
+ Response metadata:
+ .. code-block:: python
+
+ ai_msg = chat.invoke(messages)
+ ai_msg.response_metadata
+
+ .. code-block:: python
- moonshot = MoonshotChat(model="moonshot-v1-8k")
- """
+ {
+ 'token_usage': {
+ 'completion_tokens': 5,
+ 'prompt_tokens': 27,
+ 'total_tokens': 32
+ },
+ 'model_name': 'moonshot-v1-8k',
+ 'system_fingerprint': None,
+ 'finish_reason': 'stop',
+ 'logprobs': None
+ }
+
+ """ # noqa: E501
@pre_init
def validate_environment(cls, values: Dict) -> Dict:
diff --git a/libs/community/langchain_community/chat_models/perplexity.py b/libs/community/langchain_community/chat_models/perplexity.py
index ce415dd59cbd5..d168b1363e56f 100644
--- a/libs/community/langchain_community/chat_models/perplexity.py
+++ b/libs/community/langchain_community/chat_models/perplexity.py
@@ -148,7 +148,6 @@ def validate_environment(self) -> Self:
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling PerplexityChat API."""
return {
- "request_timeout": self.request_timeout,
"max_tokens": self.max_tokens,
"stream": self.streaming,
"temperature": self.temperature,
@@ -222,7 +221,7 @@ def _stream(
if stop:
params["stop_sequences"] = stop
stream_resp = self.client.chat.completions.create(
- model=params["model"], messages=message_dicts, stream=True
+ messages=message_dicts, stream=True, **params
)
for chunk in stream_resp:
if not isinstance(chunk, dict):
@@ -258,9 +257,7 @@ def _generate(
return generate_from_stream(stream_iter)
message_dicts, params = self._create_message_dicts(messages, stop)
params = {**params, **kwargs}
- response = self.client.chat.completions.create(
- model=params["model"], messages=message_dicts
- )
+ response = self.client.chat.completions.create(messages=message_dicts, **params)
message = AIMessage(
content=response.choices[0].message.content,
additional_kwargs={"citations": response.citations},
@@ -271,8 +268,6 @@ def _generate(
def _invocation_params(self) -> Mapping[str, Any]:
"""Get the parameters used to invoke the model."""
pplx_creds: Dict[str, Any] = {
- "api_key": self.pplx_api_key,
- "api_base": "https://api.perplexity.ai",
"model": self.model,
}
return {**pplx_creds, **self._default_params}
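With `**params` now forwarded, completion settings such as `temperature` and `max_tokens` actually reach `client.chat.completions.create` (previously only `model` and `messages` were sent). A hedged sketch of the visible effect; the model name is illustrative:

```python
from langchain_community.chat_models import ChatPerplexity

chat = ChatPerplexity(
    pplx_api_key="<api-key>",  # placeholder
    model="llama-3.1-sonar-small-128k-online",  # illustrative model name
    temperature=0.2,
    max_tokens=256,
)
# temperature and max_tokens are now part of the request payload.
print(chat.invoke("Briefly, what is LangChain?").content)
```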
diff --git a/libs/community/langchain_community/document_loaders/base_o365.py b/libs/community/langchain_community/document_loaders/base_o365.py
index 5f89d0794fccd..981a637cbb3b1 100644
--- a/libs/community/langchain_community/document_loaders/base_o365.py
+++ b/libs/community/langchain_community/document_loaders/base_o365.py
@@ -5,7 +5,9 @@
import logging
import mimetypes
import os
+import re
import tempfile
+import urllib
from abc import abstractmethod
from pathlib import Path, PurePath
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, Sequence, Union
@@ -186,9 +188,18 @@ def _load_from_folder(self, folder: Folder) -> Iterable[Blob]:
for file in items:
if file.is_file:
if file.mime_type in list(file_mime_types.values()):
+ source = file.web_url
+ if re.search(
+ r"Doc.aspx\?sourcedoc=.*file=([^&]+)", file.web_url
+ ):
+ source = (
+ file._parent.web_url
+ + "/"
+ + urllib.parse.quote(file.name)
+ )
file.download(to_path=temp_dir, chunk_size=self.chunk_size)
metadata_dict[file.name] = {
- "source": file.web_url,
+ "source": source,
"mime_type": file.mime_type,
"created": str(file.created),
"modified": str(file.modified),
@@ -241,9 +252,18 @@ def _load_from_object_ids(
continue
if file.is_file:
if file.mime_type in list(file_mime_types.values()):
+ source = file.web_url
+ if re.search(
+ r"Doc.aspx\?sourcedoc=.*file=([^&]+)", file.web_url
+ ):
+ source = (
+ file._parent.web_url
+ + "/"
+ + urllib.parse.quote(file.name)
+ )
file.download(to_path=temp_dir, chunk_size=self.chunk_size)
metadata_dict[file.name] = {
- "source": file.web_url,
+ "source": source,
"mime_type": file.mime_type,
"created": file.created,
"modified": file.modified,
diff --git a/libs/community/langchain_community/document_loaders/confluence.py b/libs/community/langchain_community/document_loaders/confluence.py
index 70c86e7dce962..263c0c8d31fe2 100644
--- a/libs/community/langchain_community/document_loaders/confluence.py
+++ b/libs/community/langchain_community/document_loaders/confluence.py
@@ -166,6 +166,7 @@ def __init__(
include_archived_content: bool = False,
include_attachments: bool = False,
include_comments: bool = False,
+ include_labels: bool = False,
content_format: ContentFormat = ContentFormat.STORAGE,
limit: Optional[int] = 50,
max_pages: Optional[int] = 1000,
@@ -181,6 +182,7 @@ def __init__(
self.include_archived_content = include_archived_content
self.include_attachments = include_attachments
self.include_comments = include_comments
+ self.include_labels = include_labels
self.content_format = content_format
self.limit = limit
self.max_pages = max_pages
@@ -327,12 +329,20 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
)
include_attachments = self._resolve_param("include_attachments", kwargs)
include_comments = self._resolve_param("include_comments", kwargs)
+ include_labels = self._resolve_param("include_labels", kwargs)
content_format = self._resolve_param("content_format", kwargs)
limit = self._resolve_param("limit", kwargs)
max_pages = self._resolve_param("max_pages", kwargs)
ocr_languages = self._resolve_param("ocr_languages", kwargs)
keep_markdown_format = self._resolve_param("keep_markdown_format", kwargs)
keep_newlines = self._resolve_param("keep_newlines", kwargs)
+ expand = ",".join(
+ [
+ content_format.value,
+ "version",
+ *(["metadata.labels"] if include_labels else []),
+ ]
+ )
if not space_key and not page_ids and not label and not cql:
raise ValueError(
@@ -347,13 +357,14 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
limit=limit,
max_pages=max_pages,
status="any" if include_archived_content else "current",
- expand=f"{content_format.value},version",
+ expand=expand,
)
yield from self.process_pages(
pages,
include_restricted_content,
include_attachments,
include_comments,
+ include_labels,
content_format,
ocr_languages=ocr_languages,
keep_markdown_format=keep_markdown_format,
@@ -380,13 +391,14 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
limit=limit,
max_pages=max_pages,
include_archived_spaces=include_archived_content,
- expand=f"{content_format.value},version",
+ expand=expand,
)
yield from self.process_pages(
pages,
include_restricted_content,
include_attachments,
include_comments,
+ False, # labels are not included in the search results
content_format,
ocr_languages,
keep_markdown_format,
@@ -408,7 +420,8 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
before_sleep=before_sleep_log(logger, logging.WARNING),
)(self.confluence.get_page_by_id)
page = get_page(
- page_id=page_id, expand=f"{content_format.value},version"
+ page_id=page_id,
+ expand=expand,
)
if not include_restricted_content and not self.is_public_page(page):
continue
@@ -416,6 +429,7 @@ def _lazy_load(self, **kwargs: Any) -> Iterator[Document]:
page,
include_attachments,
include_comments,
+ include_labels,
content_format,
ocr_languages,
keep_markdown_format,
@@ -498,6 +512,7 @@ def process_pages(
include_restricted_content: bool,
include_attachments: bool,
include_comments: bool,
+ include_labels: bool,
content_format: ContentFormat,
ocr_languages: Optional[str] = None,
keep_markdown_format: Optional[bool] = False,
@@ -511,6 +526,7 @@ def process_pages(
page,
include_attachments,
include_comments,
+ include_labels,
content_format,
ocr_languages=ocr_languages,
keep_markdown_format=keep_markdown_format,
@@ -522,6 +538,7 @@ def process_page(
page: dict,
include_attachments: bool,
include_comments: bool,
+ include_labels: bool,
content_format: ContentFormat,
ocr_languages: Optional[str] = None,
keep_markdown_format: Optional[bool] = False,
@@ -575,10 +592,19 @@ def process_page(
]
text = text + "".join(comment_texts)
+ if include_labels:
+ labels = [
+ label["name"]
+ for label in page.get("metadata", {})
+ .get("labels", {})
+ .get("results", [])
+ ]
+
metadata = {
"title": page["title"],
"id": page["id"],
"source": self.base_url.strip("/") + page["_links"]["webui"],
+ **({"labels": labels} if include_labels else {}),
}
if "version" in page and "when" in page["version"]:
diff --git a/libs/community/langchain_community/graphs/kuzu_graph.py b/libs/community/langchain_community/graphs/kuzu_graph.py
index 1f99f49fc9435..3fe3d60c283c2 100644
--- a/libs/community/langchain_community/graphs/kuzu_graph.py
+++ b/libs/community/langchain_community/graphs/kuzu_graph.py
@@ -1,4 +1,7 @@
-from typing import Any, Dict, List
+from hashlib import md5
+from typing import Any, Dict, List, Tuple
+
+from langchain_community.graphs.graph_document import GraphDocument, Relationship
class KuzuGraph:
@@ -16,7 +19,19 @@ class KuzuGraph:
See https://python.langchain.com/docs/security for more information.
"""
- def __init__(self, db: Any, database: str = "kuzu") -> None:
+ def __init__(
+ self, db: Any, database: str = "kuzu", allow_dangerous_requests: bool = False
+ ) -> None:
+ """Initializes the Kùzu graph database connection."""
+
+ if allow_dangerous_requests is not True:
+ raise ValueError(
+ "The KuzuGraph class is a powerful tool that can be used to execute "
+ "arbitrary queries on the database. To enable this functionality, "
+ "set the `allow_dangerous_requests` parameter to `True` when "
+ "constructing the KuzuGraph object."
+ )
+
try:
import kuzu
except ImportError:
@@ -57,7 +72,7 @@ def refresh_schema(self) -> None:
if properties[property_name]["dimension"] > 0:
if "shape" in properties[property_name]:
for s in properties[property_name]["shape"]:
- list_type_flag += "[%s]" % s
+ list_type_flag += f"[{s}]"
else:
for i in range(properties[property_name]["dimension"]):
list_type_flag += "[]"
@@ -71,7 +86,7 @@ def refresh_schema(self) -> None:
rel_tables = self.conn._get_rel_table_names()
for table in rel_tables:
relationships.append(
- "(:%s)-[:%s]->(:%s)" % (table["src"], table["name"], table["dst"])
+ f"(:{table['src']})-[:{table['name']}]->(:{table['dst']})"
)
rel_properties = []
@@ -93,3 +108,154 @@ def refresh_schema(self) -> None:
f"Relationships properties: {rel_properties}\n"
f"Relationships: {relationships}\n"
)
+
+ def _create_chunk_node_table(self) -> None:
+ self.conn.execute(
+ """
+ CREATE NODE TABLE IF NOT EXISTS Chunk (
+ id STRING,
+ text STRING,
+ type STRING,
+ PRIMARY KEY(id)
+ );
+ """
+ )
+
+ def _create_entity_node_table(self, node_label: str) -> None:
+ self.conn.execute(
+ f"""
+ CREATE NODE TABLE IF NOT EXISTS {node_label} (
+ id STRING,
+ type STRING,
+ PRIMARY KEY(id)
+ );
+ """
+ )
+
+ def _create_entity_relationship_table(self, rel: Relationship) -> None:
+ self.conn.execute(
+ f"""
+ CREATE REL TABLE IF NOT EXISTS {rel.type} (
+ FROM {rel.source.type} TO {rel.target.type}
+ );
+ """
+ )
+
+ def add_graph_documents(
+ self,
+ graph_documents: List[GraphDocument],
+ allowed_relationships: List[Tuple[str, str, str]],
+ include_source: bool = False,
+ ) -> None:
+ """
+ Adds a list of `GraphDocument` objects that represent nodes and relationships
+ in a graph to a Kùzu backend.
+
+ Parameters:
+ - graph_documents (List[GraphDocument]): A list of `GraphDocument` objects
+ that contain the nodes and relationships to be added to the graph. Each
+ `GraphDocument` should encapsulate the structure of part of the graph,
+ including nodes, relationships, and the source document information.
+
+ - allowed_relationships (List[Tuple[str, str, str]]): A list of allowed
+ relationships that exist in the graph. Each tuple contains three elements:
+ the source node type, the relationship type, and the target node type.
+ Required for Kùzu, as the names of the relationship tables that need to
+ pre-exist are derived from these tuples.
+
+ - include_source (bool): If True, stores the source document
+ and links it to nodes in the graph using the `MENTIONS` relationship.
+ This is useful for tracing back the origin of data. Merges source
+ documents based on the `id` property from the source document metadata
+ if available; otherwise it calculates the MD5 hash of `page_content`
+ for the merging process. Defaults to False.
+ """
+ # Get unique node labels in the graph documents
+ node_labels = list(
+ {node.type for document in graph_documents for node in document.nodes}
+ )
+
+ for document in graph_documents:
+ # Add chunk nodes and create source document relationships if include_source
+ # is True
+ if include_source:
+ self._create_chunk_node_table()
+ if not document.source.metadata.get("id"):
+ # Add a unique id to each document chunk via an md5 hash
+ document.source.metadata["id"] = md5(
+ document.source.page_content.encode("utf-8")
+ ).hexdigest()
+
+ self.conn.execute(
+ f"""
+ MERGE (c:Chunk {{id: $id}})
+ SET c.text = $text,
+ c.type = "text_chunk"
+ """, # noqa: F541
+ parameters={
+ "id": document.source.metadata["id"],
+ "text": document.source.page_content,
+ },
+ )
+
+ for node_label in node_labels:
+ self._create_entity_node_table(node_label)
+
+ # Add entity nodes from data
+ for node in document.nodes:
+ self.conn.execute(
+ f"""
+ MERGE (e:{node.type} {{id: $id}})
+ SET e.type = "entity"
+ """,
+ parameters={"id": node.id},
+ )
+ if include_source:
+ # If include_source is True, we need to create a relationship table
+ # between the chunk nodes and the entity nodes
+ self._create_chunk_node_table()
+ ddl = "CREATE REL TABLE GROUP IF NOT EXISTS MENTIONS ("
+ table_names = []
+ for node_label in node_labels:
+ table_names.append(f"FROM Chunk TO {node_label}")
+ table_names = list(set(table_names))
+ ddl += ", ".join(table_names)
+ # Add common properties for all the tables here
+ ddl += ", label STRING, triplet_source_id STRING)"
+ if ddl:
+ self.conn.execute(ddl)
+
+ # Only allow relationships that exist in the schema
+ if node.type in node_labels:
+ self.conn.execute(
+ f"""
+ MATCH (c:Chunk {{id: $id}}),
+ (e:{node.type} {{id: $node_id}})
+ MERGE (c)-[m:MENTIONS]->(e)
+ SET m.triplet_source_id = $id
+ """,
+ parameters={
+ "id": document.source.metadata["id"],
+ "node_id": node.id,
+ },
+ )
+
+ # Add entity relationships
+ for rel in document.relationships:
+ self._create_entity_relationship_table(rel)
+ # Create relationship
+ source_label = rel.source.type
+ source_id = rel.source.id
+ target_label = rel.target.type
+ target_id = rel.target.id
+ self.conn.execute(
+ f"""
+ MATCH (e1:{source_label} {{id: $source_id}}),
+ (e2:{target_label} {{id: $target_id}})
+ MERGE (e1)-[:{rel.type}]->(e2)
+ """,
+ parameters={
+ "source_id": source_id,
+ "target_id": target_id,
+ },
+ )
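Because Kùzu requires relationship tables to exist before a `MERGE`, `add_graph_documents` takes the mandatory `allowed_relationships` list to derive them. A sketch of end-to-end usage; the entities are illustrative, and note the constructor's new `allow_dangerous_requests` opt-in:

```python
import kuzu
from langchain_core.documents import Document
from langchain_community.graphs.graph_document import (
    GraphDocument,
    Node,
    Relationship,
)
from langchain_community.graphs.kuzu_graph import KuzuGraph

db = kuzu.Database("demo_db")
graph = KuzuGraph(db, allow_dangerous_requests=True)  # opt-in now required

alice = Node(id="Alice", type="Person")
acme = Node(id="Acme", type="Company")
doc = GraphDocument(
    nodes=[alice, acme],
    relationships=[Relationship(source=alice, target=acme, type="WORKS_AT")],
    source=Document(page_content="Alice works at Acme."),
)

graph.add_graph_documents(
    [doc],
    allowed_relationships=[("Person", "WORKS_AT", "Company")],
    include_source=True,  # also stores the Chunk node and MENTIONS edges
)
```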
diff --git a/libs/community/langchain_community/tools/gmail/utils.py b/libs/community/langchain_community/tools/gmail/utils.py
index e8bb4d468b3f3..01a6b7744868a 100644
--- a/libs/community/langchain_community/tools/gmail/utils.py
+++ b/libs/community/langchain_community/tools/gmail/utils.py
@@ -89,7 +89,7 @@ def get_gmail_credentials(
flow = InstalledAppFlow.from_client_secrets_file(
client_secrets_file, scopes
)
- creds = flow.run_local_server(port=0)
+ creds = flow.run_local_server(port=0, open_browser=False)
# Save the credentials for the next run
with open(token_file, "w") as token:
token.write(creds.to_json())
diff --git a/libs/community/langchain_community/vectorstores/azuresearch.py b/libs/community/langchain_community/vectorstores/azuresearch.py
index 2a715b846f816..6d19574e8ce30 100644
--- a/libs/community/langchain_community/vectorstores/azuresearch.py
+++ b/libs/community/langchain_community/vectorstores/azuresearch.py
@@ -1545,10 +1545,9 @@ def as_retriever(self, **kwargs: Any) -> AzureSearchVectorStoreRetriever: # typ
"""Return AzureSearchVectorStoreRetriever initialized from this VectorStore.
Args:
- search_type (Optional[str]): Defines the type of search that
- the Retriever should perform.
- Can be "similarity" (default), "hybrid", or
- "semantic_hybrid".
+ search_type (Optional[str]): Overrides the type of search that
+ the Retriever should perform. Defaults to `self.search_type`.
+ Can be "similarity", "hybrid", or "semantic_hybrid".
search_kwargs (Optional[Dict]): Keyword arguments to pass to the
search function. Can include things like:
score_threshold: Minimum relevance threshold
@@ -1561,6 +1560,9 @@ def as_retriever(self, **kwargs: Any) -> AzureSearchVectorStoreRetriever: # typ
Returns:
AzureSearchVectorStoreRetriever: Retriever class for VectorStore.
"""
+ search_type = kwargs.get("search_type", self.search_type)
+ kwargs["search_type"] = search_type
+
tags = kwargs.pop("tags", None) or []
tags.extend(self._get_retriever_tags())
return AzureSearchVectorStoreRetriever(vectorstore=self, **kwargs, tags=tags)
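The fix above makes `as_retriever()` default to the store's configured `search_type` rather than always reverting to "similarity". A sketch with fake embeddings and placeholder service details:

```python
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores.azuresearch import AzureSearch

vector_store = AzureSearch(
    azure_search_endpoint="https://<service>.search.windows.net",  # placeholder
    azure_search_key="<key>",  # placeholder
    index_name="demo",
    embedding_function=FakeEmbeddings(size=1536).embed_query,
    search_type="hybrid",
)

retriever = vector_store.as_retriever()
assert retriever.search_type == "hybrid"  # previously fell back to "similarity"
```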
diff --git a/libs/community/tests/unit_tests/document_loaders/test_confluence.py b/libs/community/tests/unit_tests/document_loaders/test_confluence.py
index feecb1588b571..abb47326beef7 100644
--- a/libs/community/tests/unit_tests/document_loaders/test_confluence.py
+++ b/libs/community/tests/unit_tests/document_loaders/test_confluence.py
@@ -195,6 +195,36 @@ def test_confluence_loader_when_content_format_and_keep_markdown_format_enabled(
assert mock_confluence.cql.call_count == 0
assert mock_confluence.get_page_child_by_type.call_count == 0
+ @pytest.mark.requires("markdownify")
+ def test_confluence_loader_when_include_labels_set_to_true(
+ self, mock_confluence: MagicMock
+ ) -> None:
+ # one response with two pages
+ mock_confluence.get_all_pages_from_space.return_value = [
+ self._get_mock_page("123", include_labels=True),
+ self._get_mock_page("456", include_labels=False),
+ ]
+ mock_confluence.get_all_restrictions_for_content.side_effect = [
+ self._get_mock_page_restrictions("123"),
+ self._get_mock_page_restrictions("456"),
+ ]
+
+ confluence_loader = self._get_mock_confluence_loader(
+ mock_confluence,
+ space_key=self.MOCK_SPACE_KEY,
+ include_labels=True,
+ max_pages=2,
+ )
+
+ documents = confluence_loader.load()
+
+ assert mock_confluence.get_all_pages_from_space.call_count == 1
+
+ assert len(documents) == 2
+ assert all(isinstance(doc, Document) for doc in documents)
+ assert documents[0].metadata["labels"] == ["l1", "l2"]
+ assert documents[1].metadata["labels"] == []
+
def _get_mock_confluence_loader(
self, mock_confluence: MagicMock, **kwargs: Any
) -> ConfluenceLoader:
@@ -208,7 +238,10 @@ def _get_mock_confluence_loader(
return confluence_loader
def _get_mock_page(
- self, page_id: str, content_format: ContentFormat = ContentFormat.STORAGE
+ self,
+ page_id: str,
+ content_format: ContentFormat = ContentFormat.STORAGE,
+ include_labels: bool = False,
) -> Dict:
return {
"id": f"{page_id}",
@@ -216,6 +249,20 @@ def _get_mock_page(
"body": {
f"{content_format.name.lower()}": {"value": f"Content {page_id}
"}
},
+ **(
+ {
+ "metadata": {
+ "labels": {
+ "results": [
+ {"prefix": "global", "name": "l1", "id": "111"},
+ {"prefix": "global", "name": "l2", "id": "222"},
+ ]
+ }
+ }
+ if include_labels
+ else {},
+ }
+ ),
"status": "current",
"type": "page",
"_links": {
diff --git a/libs/core/langchain_core/messages/tool.py b/libs/core/langchain_core/messages/tool.py
index 653dd838f860e..873f872cef268 100644
--- a/libs/core/langchain_core/messages/tool.py
+++ b/libs/core/langchain_core/messages/tool.py
@@ -9,7 +9,16 @@
from langchain_core.utils._merge import merge_dicts, merge_obj
-class ToolMessage(BaseMessage):
+class ToolOutputMixin:
+ """Mixin for objects that tools can return directly.
+
+ If a custom BaseTool is invoked with a ToolCall and the output of custom code is
+ not an instance of ToolOutputMixin, the output will automatically be coerced to a
+ string and wrapped in a ToolMessage.
+ """
+
+
+class ToolMessage(BaseMessage, ToolOutputMixin):
"""Message for passing the result of executing a tool back to a model.
ToolMessages contain the result of a tool invocation. Typically, the result
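Per the new mixin's docstring, a tool can return any object inheriting `ToolOutputMixin` and it will be passed through untouched; everything else is stringified into a `ToolMessage`. A minimal sketch of a custom passthrough type:

```python
from dataclasses import dataclass, field

from langchain_core.messages.tool import ToolOutputMixin


@dataclass
class QueryResult(ToolOutputMixin):
    """Rich tool output that should reach the caller as-is."""

    rows: list = field(default_factory=list)


# A BaseTool invoked with a ToolCall that returns QueryResult(...) now
# hands the object back unchanged instead of coercing it to a string.
```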
diff --git a/libs/core/langchain_core/prompts/pipeline.py b/libs/core/langchain_core/prompts/pipeline.py
index e25a0a7f72461..f316ba3d12175 100644
--- a/libs/core/langchain_core/prompts/pipeline.py
+++ b/libs/core/langchain_core/prompts/pipeline.py
@@ -3,6 +3,7 @@
from pydantic import model_validator
+from langchain_core._api.deprecation import deprecated
from langchain_core.prompt_values import PromptValue
from langchain_core.prompts.base import BasePromptTemplate
from langchain_core.prompts.chat import BaseChatPromptTemplate
@@ -12,8 +13,28 @@ def _get_inputs(inputs: dict, input_variables: list[str]) -> dict:
return {k: inputs[k] for k in input_variables}
+@deprecated(
+ since="0.3.22",
+ removal="1.0",
+ message=(
+ "This class is deprecated. Please see the docstring below or at the link"
+ " for a replacement option: "
+ "https://python.langchain.com/api_reference/core/prompts/langchain_core.prompts.pipeline.PipelinePromptTemplate.html"
+ ),
+)
class PipelinePromptTemplate(BasePromptTemplate):
- """Prompt template for composing multiple prompt templates together.
+ """
+ This class is deprecated in favor of chaining individual prompts together in your
+ code. For example, using a for loop:
+
+ .. code-block:: python
+
+ my_input = {"key": "value"}
+ for name, prompt in pipeline_prompts:
+ my_input[name] = prompt.invoke(my_input).to_string()
+ my_output = final_prompt.invoke(my_input)
+
+ Prompt template for composing multiple prompt templates together.
This can be useful when you want to reuse parts of prompts.
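
Spelling out the replacement pattern from the docstring above with concrete (invented) prompts:

.. code-block:: python

    from langchain_core.prompts import PromptTemplate

    # Hypothetical sub-prompts that would previously have been composed
    # with PipelinePromptTemplate.
    intro_prompt = PromptTemplate.from_template("You are speaking to {person}.")
    final_prompt = PromptTemplate.from_template("{intro}\n\nAnswer: {question}")

    pipeline_prompts = [("intro", intro_prompt)]

    my_input = {"person": "Ada", "question": "What is 2 + 2?"}
    for name, prompt in pipeline_prompts:
        # Render each sub-prompt and expose its text to downstream prompts.
        my_input[name] = prompt.invoke(my_input).to_string()
    my_output = final_prompt.invoke(my_input)
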
diff --git a/libs/core/langchain_core/tools/base.py b/libs/core/langchain_core/tools/base.py
index 815607f3b4325..ff264edac3284 100644
--- a/libs/core/langchain_core/tools/base.py
+++ b/libs/core/langchain_core/tools/base.py
@@ -45,7 +45,7 @@
CallbackManager,
Callbacks,
)
-from langchain_core.messages.tool import ToolCall, ToolMessage
+from langchain_core.messages.tool import ToolCall, ToolMessage, ToolOutputMixin
from langchain_core.runnables import (
RunnableConfig,
RunnableSerializable,
@@ -494,7 +494,9 @@ async def ainvoke(
# --- Tool ---
- def _parse_input(self, tool_input: Union[str, dict]) -> Union[str, dict[str, Any]]:
+ def _parse_input(
+ self, tool_input: Union[str, dict], tool_call_id: Optional[str]
+ ) -> Union[str, dict[str, Any]]:
"""Convert tool input to a pydantic model.
Args:
@@ -512,9 +514,39 @@ def _parse_input(self, tool_input: Union[str, dict]) -> Union[str, dict[str, Any
else:
if input_args is not None:
if issubclass(input_args, BaseModel):
+ for k, v in get_all_basemodel_annotations(input_args).items():
+ if (
+ _is_injected_arg_type(v, injected_type=InjectedToolCallId)
+ and k not in tool_input
+ ):
+ if tool_call_id is None:
+ msg = (
+ "When tool includes an InjectedToolCallId "
+ "argument, tool must always be invoked with a full "
+ "model ToolCall of the form: {'args': {...}, "
+ "'name': '...', 'type': 'tool_call', "
+ "'tool_call_id': '...'}"
+ )
+ raise ValueError(msg)
+ tool_input[k] = tool_call_id
result = input_args.model_validate(tool_input)
result_dict = result.model_dump()
elif issubclass(input_args, BaseModelV1):
+ for k, v in get_all_basemodel_annotations(input_args).items():
+ if (
+ _is_injected_arg_type(v, injected_type=InjectedToolCallId)
+ and k not in tool_input
+ ):
+ if tool_call_id is None:
+ msg = (
+ "When tool includes an InjectedToolCallId "
+ "argument, tool must always be invoked with a full "
+ "model ToolCall of the form: {'args': {...}, "
+ "'name': '...', 'type': 'tool_call', "
+ "'tool_call_id': '...'}"
+ )
+ raise ValueError(msg)
+ tool_input[k] = tool_call_id
result = input_args.parse_obj(tool_input)
result_dict = result.dict()
else:
@@ -570,8 +602,10 @@ async def _arun(self, *args: Any, **kwargs: Any) -> Any:
kwargs["run_manager"] = kwargs["run_manager"].get_sync()
return await run_in_executor(None, self._run, *args, **kwargs)
- def _to_args_and_kwargs(self, tool_input: Union[str, dict]) -> tuple[tuple, dict]:
- tool_input = self._parse_input(tool_input)
+ def _to_args_and_kwargs(
+ self, tool_input: Union[str, dict], tool_call_id: Optional[str]
+ ) -> tuple[tuple, dict]:
+ tool_input = self._parse_input(tool_input, tool_call_id)
# For backwards compatibility, if run_input is a string,
# pass as a positional argument.
if isinstance(tool_input, str):
@@ -648,10 +682,9 @@ def run(
child_config = patch_config(config, callbacks=run_manager.get_child())
context = copy_context()
context.run(_set_config_context, child_config)
- tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input)
+ tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input, tool_call_id)
if signature(self._run).parameters.get("run_manager"):
tool_kwargs["run_manager"] = run_manager
-
if config_param := _get_runnable_config_param(self._run):
tool_kwargs[config_param] = config
response = context.run(self._run, *tool_args, **tool_kwargs)
@@ -755,7 +788,7 @@ async def arun(
artifact = None
error_to_raise: Optional[Union[Exception, KeyboardInterrupt]] = None
try:
- tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input)
+ tool_args, tool_kwargs = self._to_args_and_kwargs(tool_input, tool_call_id)
child_config = patch_config(config, callbacks=run_manager.get_child())
context = copy_context()
context.run(_set_config_context, child_config)
@@ -889,20 +922,23 @@ def _prep_run_args(
def _format_output(
- content: Any, artifact: Any, tool_call_id: Optional[str], name: str, status: str
-) -> Union[ToolMessage, Any]:
- if tool_call_id:
- if not _is_message_content_type(content):
- content = _stringify(content)
- return ToolMessage(
- content,
- artifact=artifact,
- tool_call_id=tool_call_id,
- name=name,
- status=status,
- )
- else:
+ content: Any,
+ artifact: Any,
+ tool_call_id: Optional[str],
+ name: str,
+ status: str,
+) -> Union[ToolOutputMixin, Any]:
+ if isinstance(content, ToolOutputMixin) or not tool_call_id:
return content
+ if not _is_message_content_type(content):
+ content = _stringify(content)
+ return ToolMessage(
+ content,
+ artifact=artifact,
+ tool_call_id=tool_call_id,
+ name=name,
+ status=status,
+ )
def _is_message_content_type(obj: Any) -> bool:
@@ -954,10 +990,31 @@ class InjectedToolArg:
"""Annotation for a Tool arg that is **not** meant to be generated by a model."""
-def _is_injected_arg_type(type_: type) -> bool:
+class InjectedToolCallId(InjectedToolArg):
+ r'''Annotation for injecting the tool_call_id.
+
+ Example:
+ .. code-block:: python
+
+ from typing_extensions import Annotated
+
+ from langchain_core.messages import ToolMessage
+ from langchain_core.tools import tool, InjectedToolCallId
+
+ @tool
+ def foo(x: int, tool_call_id: Annotated[str, InjectedToolCallId]) -> ToolMessage:
+ """Return x."""
+ return ToolMessage(str(x), artifact=x, name="foo", tool_call_id=tool_call_id)
+ ''' # noqa: E501
+
+
+def _is_injected_arg_type(
+ type_: type, injected_type: Optional[type[InjectedToolArg]] = None
+) -> bool:
+ injected_type = injected_type or InjectedToolArg
return any(
- isinstance(arg, InjectedToolArg)
- or (isinstance(arg, type) and issubclass(arg, InjectedToolArg))
+ isinstance(arg, injected_type)
+ or (isinstance(arg, type) and issubclass(arg, injected_type))
for arg in get_args(type_)[1:]
)
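
A usage sketch for the new annotation, mirroring the docstring example above; note the tool must be invoked with a full ToolCall dict so there is an ``id`` to inject.

.. code-block:: python

    from typing_extensions import Annotated

    from langchain_core.messages import ToolMessage
    from langchain_core.tools import tool
    from langchain_core.tools.base import InjectedToolCallId


    @tool
    def foo(x: int, tool_call_id: Annotated[str, InjectedToolCallId]) -> ToolMessage:
        """Return x."""
        return ToolMessage(str(x), name="foo", tool_call_id=tool_call_id)


    # The ToolCall's "id" is injected; the model never supplies the argument.
    msg = foo.invoke(
        {"type": "tool_call", "args": {"x": 5}, "name": "foo", "id": "call-1"}
    )
    assert msg.tool_call_id == "call-1"

Invoking ``foo.invoke({"x": 5})`` instead raises a ``ValueError``, since a bare args dict carries no tool call id.
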
diff --git a/libs/core/langchain_core/tools/simple.py b/libs/core/langchain_core/tools/simple.py
index 118c8b39f6db3..d9e38ba227c8b 100644
--- a/libs/core/langchain_core/tools/simple.py
+++ b/libs/core/langchain_core/tools/simple.py
@@ -62,9 +62,11 @@ def args(self) -> dict:
# assume it takes a single string input.
return {"tool_input": {"type": "string"}}
- def _to_args_and_kwargs(self, tool_input: Union[str, dict]) -> tuple[tuple, dict]:
+ def _to_args_and_kwargs(
+ self, tool_input: Union[str, dict], tool_call_id: Optional[str]
+ ) -> tuple[tuple, dict]:
"""Convert tool input to pydantic model."""
- args, kwargs = super()._to_args_and_kwargs(tool_input)
+ args, kwargs = super()._to_args_and_kwargs(tool_input, tool_call_id)
# For backwards compatibility. The tool must be run with a single input
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
diff --git a/libs/core/langchain_core/utils/function_calling.py b/libs/core/langchain_core/utils/function_calling.py
index 4779d26244203..e6b70c4ade1d7 100644
--- a/libs/core/langchain_core/utils/function_calling.py
+++ b/libs/core/langchain_core/utils/function_calling.py
@@ -646,9 +646,13 @@ def _parse_google_docstring(
for line in args_block.split("\n")[1:]:
if ":" in line:
arg, desc = line.split(":", maxsplit=1)
- arg_descriptions[arg.strip()] = desc.strip()
+ arg = arg.strip()
+ arg_name, _, _annotations = arg.partition(" ")
+ if _annotations.startswith("(") and _annotations.endswith(")"):
+ arg = arg_name
+ arg_descriptions[arg] = desc.strip()
elif arg:
- arg_descriptions[arg.strip()] += " " + line.strip()
+ arg_descriptions[arg] += " " + line.strip()
return description, arg_descriptions
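
The effect of this change on a hypothetical Google-style docstring line: a parenthesized type annotation after the argument name is now dropped, so the description is keyed under ``arg1`` rather than ``arg1 (int)``. A standalone re-run of the new parsing step:

.. code-block:: python

    line = "arg1 (int): foo"
    arg, desc = line.split(":", maxsplit=1)
    arg = arg.strip()                              # "arg1 (int)"
    arg_name, _, _annotations = arg.partition(" ")
    if _annotations.startswith("(") and _annotations.endswith(")"):
        arg = arg_name                             # annotation stripped
    assert (arg, desc.strip()) == ("arg1", "foo")
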
diff --git a/libs/core/pyproject.toml b/libs/core/pyproject.toml
index af153d8b66267..ebb41501b275f 100644
--- a/libs/core/pyproject.toml
+++ b/libs/core/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "langchain-core"
-version = "0.3.22"
+version = "0.3.23"
description = "Building applications with LLMs through composability"
authors = []
license = "MIT"
diff --git a/libs/core/tests/unit_tests/test_tools.py b/libs/core/tests/unit_tests/test_tools.py
index ce7ea4894bb5a..164ecc508e76e 100644
--- a/libs/core/tests/unit_tests/test_tools.py
+++ b/libs/core/tests/unit_tests/test_tools.py
@@ -31,6 +31,7 @@
CallbackManagerForToolRun,
)
from langchain_core.messages import ToolMessage
+from langchain_core.messages.tool import ToolOutputMixin
from langchain_core.runnables import (
Runnable,
RunnableConfig,
@@ -46,6 +47,7 @@
)
from langchain_core.tools.base import (
InjectedToolArg,
+ InjectedToolCallId,
SchemaAnnotationError,
_is_message_content_block,
_is_message_content_type,
@@ -856,6 +858,7 @@ class _RaiseNonValidationErrorTool(BaseTool):
def _parse_input(
self,
tool_input: Union[str, dict],
+ tool_call_id: Optional[str],
) -> Union[str, dict[str, Any]]:
raise NotImplementedError
@@ -920,6 +923,7 @@ class _RaiseNonValidationErrorTool(BaseTool):
def _parse_input(
self,
tool_input: Union[str, dict],
+ tool_call_id: Optional[str],
) -> Union[str, dict[str, Any]]:
raise NotImplementedError
@@ -2110,3 +2114,63 @@ def injected_tool(x: int, foo: Annotated[Foo, InjectedToolArg]) -> str:
return foo.value
assert injected_tool.invoke({"x": 5, "foo": Foo()}) == "bar" # type: ignore
+
+
+def test_tool_injected_tool_call_id() -> None:
+ @tool
+ def foo(x: int, tool_call_id: Annotated[str, InjectedToolCallId]) -> ToolMessage:
+ """foo"""
+ return ToolMessage(x, tool_call_id=tool_call_id) # type: ignore
+
+ assert foo.invoke(
+ {"type": "tool_call", "args": {"x": 0}, "name": "foo", "id": "bar"}
+ ) == ToolMessage(0, tool_call_id="bar") # type: ignore
+
+ with pytest.raises(ValueError):
+ assert foo.invoke({"x": 0})
+
+ @tool
+ def foo2(x: int, tool_call_id: Annotated[str, InjectedToolCallId()]) -> ToolMessage:
+ """foo"""
+ return ToolMessage(x, tool_call_id=tool_call_id) # type: ignore
+
+ assert foo2.invoke(
+ {"type": "tool_call", "args": {"x": 0}, "name": "foo", "id": "bar"}
+ ) == ToolMessage(0, tool_call_id="bar") # type: ignore
+
+
+def test_tool_uninjected_tool_call_id() -> None:
+ @tool
+ def foo(x: int, tool_call_id: str) -> ToolMessage:
+ """foo"""
+ return ToolMessage(x, tool_call_id=tool_call_id) # type: ignore
+
+ with pytest.raises(ValueError):
+ foo.invoke({"type": "tool_call", "args": {"x": 0}, "name": "foo", "id": "bar"})
+
+ assert foo.invoke(
+ {
+ "type": "tool_call",
+ "args": {"x": 0, "tool_call_id": "zap"},
+ "name": "foo",
+ "id": "bar",
+ }
+ ) == ToolMessage(0, tool_call_id="zap") # type: ignore
+
+
+def test_tool_return_output_mixin() -> None:
+ class Bar(ToolOutputMixin):
+ def __init__(self, x: int) -> None:
+ self.x = x
+
+ def __eq__(self, other: Any) -> bool:
+ return isinstance(other, self.__class__) and self.x == other.x
+
+ @tool
+ def foo(x: int) -> Bar:
+ """Foo."""
+ return Bar(x=x)
+
+ assert foo.invoke(
+ {"type": "tool_call", "args": {"x": 0}, "name": "foo", "id": "bar"}
+ ) == Bar(x=0)
diff --git a/libs/core/tests/unit_tests/utils/test_function_calling.py b/libs/core/tests/unit_tests/utils/test_function_calling.py
index ba4c50187f139..bf1a4f56337fe 100644
--- a/libs/core/tests/unit_tests/utils/test_function_calling.py
+++ b/libs/core/tests/unit_tests/utils/test_function_calling.py
@@ -71,6 +71,19 @@ def dummy_function(arg1: int, arg2: Literal["bar", "baz"]) -> None:
return dummy_function
+@pytest.fixture()
+def function_docstring_annotations() -> Callable:
+ def dummy_function(arg1: int, arg2: Literal["bar", "baz"]) -> None:
+ """dummy function
+
+ Args:
+ arg1 (int): foo
+ arg2: one of 'bar', 'baz'
+ """
+
+ return dummy_function
+
+
@pytest.fixture()
def runnable() -> Runnable:
class Args(ExtensionsTypedDict):
@@ -278,6 +291,7 @@ def dummy_function(cls, arg1: int, arg2: Literal["bar", "baz"]) -> None:
def test_convert_to_openai_function(
pydantic: type[BaseModel],
function: Callable,
+ function_docstring_annotations: Callable,
dummy_structured_tool: StructuredTool,
dummy_tool: BaseTool,
json_schema: dict,
@@ -311,6 +325,7 @@ def test_convert_to_openai_function(
for fn in (
pydantic,
function,
+ function_docstring_annotations,
dummy_structured_tool,
dummy_tool,
json_schema,
diff --git a/libs/packages.yml b/libs/packages.yml
index 7be568aa646d4..7d995ec5d632a 100644
--- a/libs/packages.yml
+++ b/libs/packages.yml
@@ -153,3 +153,6 @@ packages:
- name: langchain-neo4j
repo: langchain-ai/langchain-neo4j
path: libs/neo4j
+ - name: langchain-linkup
+ repo: LinkupPlatform/langchain-linkup
+ path: .
diff --git a/libs/partners/mistralai/langchain_mistralai/chat_models.py b/libs/partners/mistralai/langchain_mistralai/chat_models.py
index 9e68e23ae4542..be973f3b9ec78 100644
--- a/libs/partners/mistralai/langchain_mistralai/chat_models.py
+++ b/libs/partners/mistralai/langchain_mistralai/chat_models.py
@@ -595,7 +595,7 @@ def _stream(
for chunk in self.completion_with_retry(
messages=message_dicts, run_manager=run_manager, **params
):
- if len(chunk["choices"]) == 0:
+ if len(chunk.get("choices", [])) == 0:
continue
new_chunk = _convert_chunk_to_message_chunk(chunk, default_chunk_class)
# make future chunks same type as first chunk
@@ -621,7 +621,7 @@ async def _astream(
async for chunk in await acompletion_with_retry(
self, messages=message_dicts, run_manager=run_manager, **params
):
- if len(chunk["choices"]) == 0:
+ if len(chunk.get("choices", [])) == 0:
continue
new_chunk = _convert_chunk_to_message_chunk(chunk, default_chunk_class)
# make future chunks same type as first chunk
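
A minimal sketch (with invented chunk payloads) of why ``.get`` matters here: some streamed events may omit the ``choices`` key entirely, and direct indexing would raise a ``KeyError`` where the intent is simply to skip the chunk.

.. code-block:: python

    chunks = [
        {"choices": []},                      # empty choices: skip
        {"usage": {"total_tokens": 7}},       # no "choices" key at all: skip
        {"choices": [{"delta": {"content": "hi"}}]},
    ]

    for chunk in chunks:
        if len(chunk.get("choices", [])) == 0:
            continue  # chunk["choices"] would raise KeyError on the usage chunk
        print(chunk["choices"][0]["delta"]["content"])
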
diff --git a/libs/partners/openai/poetry.lock b/libs/partners/openai/poetry.lock
index a3de06c40eb1c..913cacca821f6 100644
--- a/libs/partners/openai/poetry.lock
+++ b/libs/partners/openai/poetry.lock
@@ -495,7 +495,7 @@ files = [
[[package]]
name = "langchain-core"
-version = "0.3.21"
+version = "0.3.22"
description = "Building applications with LLMs through composability"
optional = false
python-versions = ">=3.9,<4.0"
@@ -520,7 +520,7 @@ url = "../../core"
[[package]]
name = "langchain-tests"
-version = "0.3.4"
+version = "0.3.6"
description = "Standard tests for LangChain implementations"
optional = false
python-versions = ">=3.9,<4.0"
@@ -528,9 +528,15 @@ files = []
develop = true
[package.dependencies]
-httpx = "^0.27.0"
-langchain-core = "^0.3.19"
+httpx = ">=0.25.0,<1"
+langchain-core = "^0.3.22"
+numpy = [
+ {version = ">=1.24.0,<2.0.0", markers = "python_version < \"3.12\""},
+ {version = ">=1.26.2,<3", markers = "python_version >= \"3.12\""},
+]
pytest = ">=7,<9"
+pytest-asyncio = ">=0.20,<1"
+pytest-socket = ">=0.6.0,<1"
syrupy = "^4"
[package.source]
@@ -1639,4 +1645,4 @@ watchmedo = ["PyYAML (>=3.10)"]
[metadata]
lock-version = "2.0"
python-versions = ">=3.9,<4.0"
-content-hash = "ded25b72c77fad9a869f3308c1bba084b58f54eb13df2785f061bc340d6ec748"
+content-hash = "6fb8c9f98c76ba402d53234ac2ac78bcebafbe818e64cd849e0ae26cafcd5ba4"
diff --git a/libs/partners/openai/pyproject.toml b/libs/partners/openai/pyproject.toml
index 4fee814b9ede5..a85ab72b05f51 100644
--- a/libs/partners/openai/pyproject.toml
+++ b/libs/partners/openai/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "langchain-openai"
-version = "0.2.11"
+version = "0.2.12"
description = "An integration package connecting OpenAI and LangChain"
authors = []
readme = "README.md"
@@ -24,7 +24,7 @@ ignore_missing_imports = true
[tool.poetry.dependencies]
python = ">=3.9,<4.0"
langchain-core = "^0.3.21"
-openai = "^1.54.0"
+openai = "^1.55.3"
tiktoken = ">=0.7,<1"
[tool.ruff.lint]
diff --git a/libs/standard-tests/langchain_tests/integration_tests/vectorstores.py b/libs/standard-tests/langchain_tests/integration_tests/vectorstores.py
index c043023f3ac48..6b8d8d1d565d6 100644
--- a/libs/standard-tests/langchain_tests/integration_tests/vectorstores.py
+++ b/libs/standard-tests/langchain_tests/integration_tests/vectorstores.py
@@ -76,6 +76,21 @@ def vectorstore(self) -> Generator[VectorStore, None, None]: # type: ignore
store.delete_collection()
pass
+ Note that by default we enable both sync and async tests. To disable either,
+ override the ``has_sync`` or ``has_async`` properties to ``False`` in the
+ subclass. For example:
+
+ .. code-block:: python
+
+ class TestParrotVectorStore(VectorStoreIntegrationTests):
+ @pytest.fixture()
+ def vectorstore(self) -> Generator[VectorStore, None, None]: # type: ignore
+ ...
+
+ @property
+ def has_async(self) -> bool:
+ return False
+
.. note::
API references for individual test methods include troubleshooting tips.
""" # noqa: E501
@@ -88,6 +103,20 @@ def vectorstore(self) -> VectorStore:
The returned vectorstore should be EMPTY.
"""
+ @property
+ def has_sync(self) -> bool:
+ """
+ Configurable property to enable or disable sync tests.
+ """
+ return True
+
+ @property
+ def has_async(self) -> bool:
+ """
+ Configurable property to enable or disable async tests.
+ """
+ return True
+
@staticmethod
def get_embeddings() -> Embeddings:
"""A pre-defined embeddings model that should be used for this test.
@@ -110,6 +139,9 @@ def test_vectorstore_is_empty(self, vectorstore: VectorStore) -> None:
``VectorStoreIntegrationTests``) initializes an empty vector store in the
``vectorstore`` fixture.
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
assert vectorstore.similarity_search("foo", k=1) == []
def test_add_documents(self, vectorstore: VectorStore) -> None:
@@ -123,6 +155,9 @@ def test_add_documents(self, vectorstore: VectorStore) -> None:
2. Calling ``.similarity_search`` for the top ``k`` similar documents does not threshold by score.
3. We do not mutate the original document object when adding it to the vector store (e.g., by adding an ID).
""" # noqa: E501
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
original_documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -152,6 +187,9 @@ def test_vectorstore_still_empty(self, vectorstore: VectorStore) -> None:
``VectorStoreIntegrationTests``) correctly clears the vector store in the
``finally`` block.
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
assert vectorstore.similarity_search("foo", k=1) == []
def test_deleting_documents(self, vectorstore: VectorStore) -> None:
@@ -163,6 +201,9 @@ def test_deleting_documents(self, vectorstore: VectorStore) -> None:
passed in through ``ids``, and that ``delete`` correctly removes
documents.
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -181,6 +222,9 @@ def test_deleting_bulk_documents(self, vectorstore: VectorStore) -> None:
If this test fails, check that ``delete`` correctly removes multiple
documents when given a list of IDs.
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -200,6 +244,9 @@ def test_delete_missing_content(self, vectorstore: VectorStore) -> None:
If this test fails, check that ``delete`` does not raise an exception
when deleting IDs that do not exist.
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
vectorstore.delete(["1"])
vectorstore.delete(["1", "2", "3"])
@@ -214,6 +261,9 @@ def test_add_documents_with_ids_is_idempotent(
same IDs has the same effect as adding it once (i.e., it does not
duplicate the documents).
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -235,6 +285,9 @@ def test_add_documents_by_id_with_mutation(self, vectorstore: VectorStore) -> No
ID that already exists in the vector store, the content is updated
rather than duplicated.
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -283,6 +336,9 @@ def test_get_by_ids(self, vectorstore: VectorStore) -> None:
def test_get_by_ids(self, vectorstore: VectorStore) -> None:
super().test_get_by_ids(vectorstore)
"""
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -313,6 +369,9 @@ def test_get_by_ids_missing(self, vectorstore: VectorStore) -> None:
def test_get_by_ids_missing(self, vectorstore: VectorStore) -> None:
super().test_get_by_ids_missing(vectorstore)
""" # noqa: E501
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
# This should not raise an exception
documents = vectorstore.get_by_ids(["1", "2", "3"])
assert documents == []
@@ -339,6 +398,9 @@ def test_add_documents_documents(self, vectorstore: VectorStore) -> None:
def test_add_documents_documents(self, vectorstore: VectorStore) -> None:
super().test_add_documents_documents(vectorstore)
""" # noqa: E501
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -373,6 +435,9 @@ def test_add_documents_with_existing_ids(self, vectorstore: VectorStore) -> None
def test_add_documents_with_existing_ids(self, vectorstore: VectorStore) -> None:
super().test_add_documents_with_existing_ids(vectorstore)
""" # noqa: E501
+ if not self.has_sync:
+ pytest.skip("Sync tests not supported.")
+
documents = [
Document(id="foo", page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -393,6 +458,9 @@ async def test_vectorstore_is_empty_async(self, vectorstore: VectorStore) -> Non
``VectorStoreIntegrationTests``) initializes an empty vector store in the
``vectorstore`` fixture.
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
assert await vectorstore.asimilarity_search("foo", k=1) == []
async def test_add_documents_async(self, vectorstore: VectorStore) -> None:
@@ -406,6 +474,9 @@ async def test_add_documents_async(self, vectorstore: VectorStore) -> None:
2. Calling ``.asimilarity_search`` for the top ``k`` similar documents does not threshold by score.
3. We do not mutate the original document object when adding it to the vector store (e.g., by adding an ID).
""" # noqa: E501
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
original_documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -438,6 +509,9 @@ async def test_vectorstore_still_empty_async(
``VectorStoreIntegrationTests``) correctly clears the vector store in the
``finally`` block.
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
assert await vectorstore.asimilarity_search("foo", k=1) == []
async def test_deleting_documents_async(self, vectorstore: VectorStore) -> None:
@@ -449,6 +523,9 @@ async def test_deleting_documents_async(self, vectorstore: VectorStore) -> None:
passed in through ``ids``, and that ``delete`` correctly removes
documents.
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -469,6 +546,9 @@ async def test_deleting_bulk_documents_async(
If this test fails, check that ``adelete`` correctly removes multiple
documents when given a list of IDs.
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -488,6 +568,9 @@ async def test_delete_missing_content_async(self, vectorstore: VectorStore) -> N
If this test fails, check that ``adelete`` does not raise an exception
when deleting IDs that do not exist.
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
await vectorstore.adelete(["1"])
await vectorstore.adelete(["1", "2", "3"])
@@ -502,6 +585,9 @@ async def test_add_documents_with_ids_is_idempotent_async(
same IDs has the same effect as adding it once (i.e., it does not
duplicate the documents).
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -525,6 +611,9 @@ async def test_add_documents_by_id_with_mutation_async(
ID that already exists in the vector store, the content is updated
rather than duplicated.
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -573,6 +662,9 @@ async def test_get_by_ids_async(self, vectorstore: VectorStore) -> None:
async def test_get_by_ids(self, vectorstore: VectorStore) -> None:
await super().test_get_by_ids(vectorstore)
"""
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -603,6 +695,9 @@ async def test_get_by_ids_missing_async(self, vectorstore: VectorStore) -> None:
async def test_get_by_ids_missing(self, vectorstore: VectorStore) -> None:
await super().test_get_by_ids_missing(vectorstore)
""" # noqa: E501
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
# This should not raise an exception
assert await vectorstore.aget_by_ids(["1", "2", "3"]) == []
@@ -630,6 +725,9 @@ async def test_add_documents_documents_async(
async def test_add_documents_documents(self, vectorstore: VectorStore) -> None:
await super().test_add_documents_documents(vectorstore)
""" # noqa: E501
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
@@ -666,6 +764,9 @@ async def test_add_documents_with_existing_ids_async(
async def test_add_documents_with_existing_ids(self, vectorstore: VectorStore) -> None:
await super().test_add_documents_with_existing_ids(vectorstore)
""" # noqa: E501
+ if not self.has_async:
+ pytest.skip("Async tests not supported.")
+
documents = [
Document(id="foo", page_content="foo", metadata={"id": 1}),
Document(page_content="bar", metadata={"id": 2}),
diff --git a/libs/standard-tests/poetry.lock b/libs/standard-tests/poetry.lock
index 8f814f087b377..316059246f762 100644
--- a/libs/standard-tests/poetry.lock
+++ b/libs/standard-tests/poetry.lock
@@ -466,66 +466,66 @@ files = [
[[package]]
name = "numpy"
-version = "2.1.3"
+version = "2.2.0"
description = "Fundamental package for array computing in Python"
optional = false
python-versions = ">=3.10"
files = [
- {file = "numpy-2.1.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:c894b4305373b9c5576d7a12b473702afdf48ce5369c074ba304cc5ad8730dff"},
- {file = "numpy-2.1.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:b47fbb433d3260adcd51eb54f92a2ffbc90a4595f8970ee00e064c644ac788f5"},
- {file = "numpy-2.1.3-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:825656d0743699c529c5943554d223c021ff0494ff1442152ce887ef4f7561a1"},
- {file = "numpy-2.1.3-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:6a4825252fcc430a182ac4dee5a505053d262c807f8a924603d411f6718b88fd"},
- {file = "numpy-2.1.3-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e711e02f49e176a01d0349d82cb5f05ba4db7d5e7e0defd026328e5cfb3226d3"},
- {file = "numpy-2.1.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:78574ac2d1a4a02421f25da9559850d59457bac82f2b8d7a44fe83a64f770098"},
- {file = "numpy-2.1.3-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:c7662f0e3673fe4e832fe07b65c50342ea27d989f92c80355658c7f888fcc83c"},
- {file = "numpy-2.1.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:fa2d1337dc61c8dc417fbccf20f6d1e139896a30721b7f1e832b2bb6ef4eb6c4"},
- {file = "numpy-2.1.3-cp310-cp310-win32.whl", hash = "sha256:72dcc4a35a8515d83e76b58fdf8113a5c969ccd505c8a946759b24e3182d1f23"},
- {file = "numpy-2.1.3-cp310-cp310-win_amd64.whl", hash = "sha256:ecc76a9ba2911d8d37ac01de72834d8849e55473457558e12995f4cd53e778e0"},
- {file = "numpy-2.1.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:4d1167c53b93f1f5d8a139a742b3c6f4d429b54e74e6b57d0eff40045187b15d"},
- {file = "numpy-2.1.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:c80e4a09b3d95b4e1cac08643f1152fa71a0a821a2d4277334c88d54b2219a41"},
- {file = "numpy-2.1.3-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:576a1c1d25e9e02ed7fa5477f30a127fe56debd53b8d2c89d5578f9857d03ca9"},
- {file = "numpy-2.1.3-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:973faafebaae4c0aaa1a1ca1ce02434554d67e628b8d805e61f874b84e136b09"},
- {file = "numpy-2.1.3-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:762479be47a4863e261a840e8e01608d124ee1361e48b96916f38b119cfda04a"},
- {file = "numpy-2.1.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bc6f24b3d1ecc1eebfbf5d6051faa49af40b03be1aaa781ebdadcbc090b4539b"},
- {file = "numpy-2.1.3-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:17ee83a1f4fef3c94d16dc1802b998668b5419362c8a4f4e8a491de1b41cc3ee"},
- {file = "numpy-2.1.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:15cb89f39fa6d0bdfb600ea24b250e5f1a3df23f901f51c8debaa6a5d122b2f0"},
- {file = "numpy-2.1.3-cp311-cp311-win32.whl", hash = "sha256:d9beb777a78c331580705326d2367488d5bc473b49a9bc3036c154832520aca9"},
- {file = "numpy-2.1.3-cp311-cp311-win_amd64.whl", hash = "sha256:d89dd2b6da69c4fff5e39c28a382199ddedc3a5be5390115608345dec660b9e2"},
- {file = "numpy-2.1.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:f55ba01150f52b1027829b50d70ef1dafd9821ea82905b63936668403c3b471e"},
- {file = "numpy-2.1.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:13138eadd4f4da03074851a698ffa7e405f41a0845a6b1ad135b81596e4e9958"},
- {file = "numpy-2.1.3-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:a6b46587b14b888e95e4a24d7b13ae91fa22386c199ee7b418f449032b2fa3b8"},
- {file = "numpy-2.1.3-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:0fa14563cc46422e99daef53d725d0c326e99e468a9320a240affffe87852564"},
- {file = "numpy-2.1.3-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8637dcd2caa676e475503d1f8fdb327bc495554e10838019651b76d17b98e512"},
- {file = "numpy-2.1.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2312b2aa89e1f43ecea6da6ea9a810d06aae08321609d8dc0d0eda6d946a541b"},
- {file = "numpy-2.1.3-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a38c19106902bb19351b83802531fea19dee18e5b37b36454f27f11ff956f7fc"},
- {file = "numpy-2.1.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:02135ade8b8a84011cbb67dc44e07c58f28575cf9ecf8ab304e51c05528c19f0"},
- {file = "numpy-2.1.3-cp312-cp312-win32.whl", hash = "sha256:e6988e90fcf617da2b5c78902fe8e668361b43b4fe26dbf2d7b0f8034d4cafb9"},
- {file = "numpy-2.1.3-cp312-cp312-win_amd64.whl", hash = "sha256:0d30c543f02e84e92c4b1f415b7c6b5326cbe45ee7882b6b77db7195fb971e3a"},
- {file = "numpy-2.1.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:96fe52fcdb9345b7cd82ecd34547fca4321f7656d500eca497eb7ea5a926692f"},
- {file = "numpy-2.1.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f653490b33e9c3a4c1c01d41bc2aef08f9475af51146e4a7710c450cf9761598"},
- {file = "numpy-2.1.3-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:dc258a761a16daa791081d026f0ed4399b582712e6fc887a95af09df10c5ca57"},
- {file = "numpy-2.1.3-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:016d0f6f5e77b0f0d45d77387ffa4bb89816b57c835580c3ce8e099ef830befe"},
- {file = "numpy-2.1.3-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c181ba05ce8299c7aa3125c27b9c2167bca4a4445b7ce73d5febc411ca692e43"},
- {file = "numpy-2.1.3-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5641516794ca9e5f8a4d17bb45446998c6554704d888f86df9b200e66bdcce56"},
- {file = "numpy-2.1.3-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:ea4dedd6e394a9c180b33c2c872b92f7ce0f8e7ad93e9585312b0c5a04777a4a"},
- {file = "numpy-2.1.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:b0df3635b9c8ef48bd3be5f862cf71b0a4716fa0e702155c45067c6b711ddcef"},
- {file = "numpy-2.1.3-cp313-cp313-win32.whl", hash = "sha256:50ca6aba6e163363f132b5c101ba078b8cbd3fa92c7865fd7d4d62d9779ac29f"},
- {file = "numpy-2.1.3-cp313-cp313-win_amd64.whl", hash = "sha256:747641635d3d44bcb380d950679462fae44f54b131be347d5ec2bce47d3df9ed"},
- {file = "numpy-2.1.3-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:996bb9399059c5b82f76b53ff8bb686069c05acc94656bb259b1d63d04a9506f"},
- {file = "numpy-2.1.3-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:45966d859916ad02b779706bb43b954281db43e185015df6eb3323120188f9e4"},
- {file = "numpy-2.1.3-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:baed7e8d7481bfe0874b566850cb0b85243e982388b7b23348c6db2ee2b2ae8e"},
- {file = "numpy-2.1.3-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:a9f7f672a3388133335589cfca93ed468509cb7b93ba3105fce780d04a6576a0"},
- {file = "numpy-2.1.3-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d7aac50327da5d208db2eec22eb11e491e3fe13d22653dce51b0f4109101b408"},
- {file = "numpy-2.1.3-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4394bc0dbd074b7f9b52024832d16e019decebf86caf909d94f6b3f77a8ee3b6"},
- {file = "numpy-2.1.3-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:50d18c4358a0a8a53f12a8ba9d772ab2d460321e6a93d6064fc22443d189853f"},
- {file = "numpy-2.1.3-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:14e253bd43fc6b37af4921b10f6add6925878a42a0c5fe83daee390bca80bc17"},
- {file = "numpy-2.1.3-cp313-cp313t-win32.whl", hash = "sha256:08788d27a5fd867a663f6fc753fd7c3ad7e92747efc73c53bca2f19f8bc06f48"},
- {file = "numpy-2.1.3-cp313-cp313t-win_amd64.whl", hash = "sha256:2564fbdf2b99b3f815f2107c1bbc93e2de8ee655a69c261363a1172a79a257d4"},
- {file = "numpy-2.1.3-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:4f2015dfe437dfebbfce7c85c7b53d81ba49e71ba7eadbf1df40c915af75979f"},
- {file = "numpy-2.1.3-pp310-pypy310_pp73-macosx_14_0_x86_64.whl", hash = "sha256:3522b0dfe983a575e6a9ab3a4a4dfe156c3e428468ff08ce582b9bb6bd1d71d4"},
- {file = "numpy-2.1.3-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c006b607a865b07cd981ccb218a04fc86b600411d83d6fc261357f1c0966755d"},
- {file = "numpy-2.1.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:e14e26956e6f1696070788252dcdff11b4aca4c3e8bd166e0df1bb8f315a67cb"},
- {file = "numpy-2.1.3.tar.gz", hash = "sha256:aa08e04e08aaf974d4458def539dece0d28146d866a39da5639596f4921fd761"},
+ {file = "numpy-2.2.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:1e25507d85da11ff5066269d0bd25d06e0a0f2e908415534f3e603d2a78e4ffa"},
+ {file = "numpy-2.2.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a62eb442011776e4036af5c8b1a00b706c5bc02dc15eb5344b0c750428c94219"},
+ {file = "numpy-2.2.0-cp310-cp310-macosx_14_0_arm64.whl", hash = "sha256:b606b1aaf802e6468c2608c65ff7ece53eae1a6874b3765f69b8ceb20c5fa78e"},
+ {file = "numpy-2.2.0-cp310-cp310-macosx_14_0_x86_64.whl", hash = "sha256:36b2b43146f646642b425dd2027730f99bac962618ec2052932157e213a040e9"},
+ {file = "numpy-2.2.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7fe8f3583e0607ad4e43a954e35c1748b553bfe9fdac8635c02058023277d1b3"},
+ {file = "numpy-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:122fd2fcfafdefc889c64ad99c228d5a1f9692c3a83f56c292618a59aa60ae83"},
+ {file = "numpy-2.2.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:3f2f5cddeaa4424a0a118924b988746db6ffa8565e5829b1841a8a3bd73eb59a"},
+ {file = "numpy-2.2.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:7fe4bb0695fe986a9e4deec3b6857003b4cfe5c5e4aac0b95f6a658c14635e31"},
+ {file = "numpy-2.2.0-cp310-cp310-win32.whl", hash = "sha256:b30042fe92dbd79f1ba7f6898fada10bdaad1847c44f2dff9a16147e00a93661"},
+ {file = "numpy-2.2.0-cp310-cp310-win_amd64.whl", hash = "sha256:54dc1d6d66f8d37843ed281773c7174f03bf7ad826523f73435deb88ba60d2d4"},
+ {file = "numpy-2.2.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:9874bc2ff574c40ab7a5cbb7464bf9b045d617e36754a7bc93f933d52bd9ffc6"},
+ {file = "numpy-2.2.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:0da8495970f6b101ddd0c38ace92edea30e7e12b9a926b57f5fabb1ecc25bb90"},
+ {file = "numpy-2.2.0-cp311-cp311-macosx_14_0_arm64.whl", hash = "sha256:0557eebc699c1c34cccdd8c3778c9294e8196df27d713706895edc6f57d29608"},
+ {file = "numpy-2.2.0-cp311-cp311-macosx_14_0_x86_64.whl", hash = "sha256:3579eaeb5e07f3ded59298ce22b65f877a86ba8e9fe701f5576c99bb17c283da"},
+ {file = "numpy-2.2.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40deb10198bbaa531509aad0cd2f9fadb26c8b94070831e2208e7df543562b74"},
+ {file = "numpy-2.2.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c2aed8fcf8abc3020d6a9ccb31dbc9e7d7819c56a348cc88fd44be269b37427e"},
+ {file = "numpy-2.2.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a222d764352c773aa5ebde02dd84dba3279c81c6db2e482d62a3fa54e5ece69b"},
+ {file = "numpy-2.2.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:4e58666988605e251d42c2818c7d3d8991555381be26399303053b58a5bbf30d"},
+ {file = "numpy-2.2.0-cp311-cp311-win32.whl", hash = "sha256:4723a50e1523e1de4fccd1b9a6dcea750c2102461e9a02b2ac55ffeae09a4410"},
+ {file = "numpy-2.2.0-cp311-cp311-win_amd64.whl", hash = "sha256:16757cf28621e43e252c560d25b15f18a2f11da94fea344bf26c599b9cf54b73"},
+ {file = "numpy-2.2.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:cff210198bb4cae3f3c100444c5eaa573a823f05c253e7188e1362a5555235b3"},
+ {file = "numpy-2.2.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:58b92a5828bd4d9aa0952492b7de803135038de47343b2aa3cc23f3b71a3dc4e"},
+ {file = "numpy-2.2.0-cp312-cp312-macosx_14_0_arm64.whl", hash = "sha256:ebe5e59545401fbb1b24da76f006ab19734ae71e703cdb4a8b347e84a0cece67"},
+ {file = "numpy-2.2.0-cp312-cp312-macosx_14_0_x86_64.whl", hash = "sha256:e2b8cd48a9942ed3f85b95ca4105c45758438c7ed28fff1e4ce3e57c3b589d8e"},
+ {file = "numpy-2.2.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:57fcc997ffc0bef234b8875a54d4058afa92b0b0c4223fc1f62f24b3b5e86038"},
+ {file = "numpy-2.2.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85ad7d11b309bd132d74397fcf2920933c9d1dc865487128f5c03d580f2c3d03"},
+ {file = "numpy-2.2.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:cb24cca1968b21355cc6f3da1a20cd1cebd8a023e3c5b09b432444617949085a"},
+ {file = "numpy-2.2.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:0798b138c291d792f8ea40fe3768610f3c7dd2574389e37c3f26573757c8f7ef"},
+ {file = "numpy-2.2.0-cp312-cp312-win32.whl", hash = "sha256:afe8fb968743d40435c3827632fd36c5fbde633b0423da7692e426529b1759b1"},
+ {file = "numpy-2.2.0-cp312-cp312-win_amd64.whl", hash = "sha256:3a4199f519e57d517ebd48cb76b36c82da0360781c6a0353e64c0cac30ecaad3"},
+ {file = "numpy-2.2.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f8c8b141ef9699ae777c6278b52c706b653bf15d135d302754f6b2e90eb30367"},
+ {file = "numpy-2.2.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0f0986e917aca18f7a567b812ef7ca9391288e2acb7a4308aa9d265bd724bdae"},
+ {file = "numpy-2.2.0-cp313-cp313-macosx_14_0_arm64.whl", hash = "sha256:1c92113619f7b272838b8d6702a7f8ebe5edea0df48166c47929611d0b4dea69"},
+ {file = "numpy-2.2.0-cp313-cp313-macosx_14_0_x86_64.whl", hash = "sha256:5a145e956b374e72ad1dff82779177d4a3c62bc8248f41b80cb5122e68f22d13"},
+ {file = "numpy-2.2.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:18142b497d70a34b01642b9feabb70156311b326fdddd875a9981f34a369b671"},
+ {file = "numpy-2.2.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a7d41d1612c1a82b64697e894b75db6758d4f21c3ec069d841e60ebe54b5b571"},
+ {file = "numpy-2.2.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:a98f6f20465e7618c83252c02041517bd2f7ea29be5378f09667a8f654a5918d"},
+ {file = "numpy-2.2.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:e09d40edfdb4e260cb1567d8ae770ccf3b8b7e9f0d9b5c2a9992696b30ce2742"},
+ {file = "numpy-2.2.0-cp313-cp313-win32.whl", hash = "sha256:3905a5fffcc23e597ee4d9fb3fcd209bd658c352657548db7316e810ca80458e"},
+ {file = "numpy-2.2.0-cp313-cp313-win_amd64.whl", hash = "sha256:a184288538e6ad699cbe6b24859206e38ce5fba28f3bcfa51c90d0502c1582b2"},
+ {file = "numpy-2.2.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:7832f9e8eb00be32f15fdfb9a981d6955ea9adc8574c521d48710171b6c55e95"},
+ {file = "numpy-2.2.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:f0dd071b95bbca244f4cb7f70b77d2ff3aaaba7fa16dc41f58d14854a6204e6c"},
+ {file = "numpy-2.2.0-cp313-cp313t-macosx_14_0_arm64.whl", hash = "sha256:b0b227dcff8cdc3efbce66d4e50891f04d0a387cce282fe1e66199146a6a8fca"},
+ {file = "numpy-2.2.0-cp313-cp313t-macosx_14_0_x86_64.whl", hash = "sha256:6ab153263a7c5ccaf6dfe7e53447b74f77789f28ecb278c3b5d49db7ece10d6d"},
+ {file = "numpy-2.2.0-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e500aba968a48e9019e42c0c199b7ec0696a97fa69037bea163b55398e390529"},
+ {file = "numpy-2.2.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:440cfb3db4c5029775803794f8638fbdbf71ec702caf32735f53b008e1eaece3"},
+ {file = "numpy-2.2.0-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:a55dc7a7f0b6198b07ec0cd445fbb98b05234e8b00c5ac4874a63372ba98d4ab"},
+ {file = "numpy-2.2.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:4bddbaa30d78c86329b26bd6aaaea06b1e47444da99eddac7bf1e2fab717bd72"},
+ {file = "numpy-2.2.0-cp313-cp313t-win32.whl", hash = "sha256:30bf971c12e4365153afb31fc73f441d4da157153f3400b82db32d04de1e4066"},
+ {file = "numpy-2.2.0-cp313-cp313t-win_amd64.whl", hash = "sha256:d35717333b39d1b6bb8433fa758a55f1081543de527171543a2b710551d40881"},
+ {file = "numpy-2.2.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:e12c6c1ce84628c52d6367863773f7c8c8241be554e8b79686e91a43f1733773"},
+ {file = "numpy-2.2.0-pp310-pypy310_pp73-macosx_14_0_x86_64.whl", hash = "sha256:b6207dc8fb3c8cb5668e885cef9ec7f70189bec4e276f0ff70d5aa078d32c88e"},
+ {file = "numpy-2.2.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a50aeff71d0f97b6450d33940c7181b08be1441c6c193e678211bff11aa725e7"},
+ {file = "numpy-2.2.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:df12a1f99b99f569a7c2ae59aa2d31724e8d835fc7f33e14f4792e3071d11221"},
+ {file = "numpy-2.2.0.tar.gz", hash = "sha256:140dd80ff8981a583a60980be1a655068f8adebf7a45a06a6858c873fcdcd4a0"},
]
[[package]]
diff --git a/libs/standard-tests/pyproject.toml b/libs/standard-tests/pyproject.toml
index ebc8d2a9879c2..2a44bc3b7bf83 100644
--- a/libs/standard-tests/pyproject.toml
+++ b/libs/standard-tests/pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "langchain-tests"
-version = "0.3.6"
+version = "0.3.7"
description = "Standard tests for LangChain implementations"
authors = ["Erick Friis "]
readme = "README.md"