
ChatBedrockConverse - https://langchain-ai.github.io/langgraphjs/#example #6052

Closed
5 tasks done
MarcoLooy opened this issue Jul 12, 2024 · 3 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@MarcoLooy

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import {HumanMessage} from "@langchain/core/messages";
import {DynamicStructuredTool} from "@langchain/core/tools";
import {z} from "zod";
// import {ChatAnthropic} from "@langchain/anthropic";
import {ChatBedrockConverse} from "@langchain/aws";
import {END, START, StateGraph, MemorySaver} from "@langchain/langgraph";
import {ToolNode} from "@langchain/langgraph/prebuilt";

// Define the graph state
const graphState = {
  messages: {
    value: (x, y) => x.concat(y),
    default: () => [],
  },
};

// Define the tools for the agent to use
const searchTool = new DynamicStructuredTool({
  name: "weather",
  description: "Get the weather information for a city.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
  func: async ({query}) => {
    // This is a placeholder for the actual implementation
    if (
      query.toLowerCase().includes("sf") ||
      query.toLowerCase().includes("san francisco")
    ) {
      return "It's 60 degrees and foggy.";
    }
    return "It's 90 degrees and sunny.";
  },
});

const tools = [searchTool];
const toolNode = new ToolNode(tools);

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-5-sonnet-20240620-v1:0",
  region: "us-east-1",
  temperature: 0,
}).bindTools(tools);

// Define the function that determines whether to continue or not
function shouldContinue(state) {
  const messages = state.messages;
  const lastMessage = messages[messages.length - 1];
  // If the LLM makes a tool call, then we route to the "tools" node
  if (lastMessage.tool_calls?.length) {
    return "tools";
  }
  // Otherwise, we stop (reply to the user)
  return END;
}

// Define the function that calls the model
async function callModel(state) {
  const messages = state.messages;
  const response = await model.invoke(messages);
  // We return a list, because this will get added to the existing list
  return {messages: [response]};
}

// Define a new graph
const workflow = new StateGraph({channels: graphState})
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");

// Initialize memory to persist state between graph runs
const checkpointer = new MemorySaver();

// Finally, we compile it!
// This compiles it into a LangChain Runnable.
// Note that we're (optionally) passing the memory when compiling the graph
const app = workflow.compile({checkpointer});

// Use the Runnable
(async () => {
  const finalState = await app.invoke(
    {messages: [new HumanMessage("what is the weather in sf")]},
    {configurable: {thread_id: "42"}}
  );
  console.log(finalState.messages[finalState.messages.length - 1].content);
})();

Error Message and Stack Trace (if applicable)

/home/ubuntu/chat-bedrock-converse/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/index.js:1082
const exception = new ValidationException({
^

ValidationException: The model returned the following errors: Your API request included an assistant message in the final position, which would pre-fill the assistant response. When using tools, pre-filling the assistant response is not supported.
at de_ValidationExceptionRes (/home/ubuntu/chat-bedrock-converse/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/index.js:1082:21)
at de_CommandError (/home/ubuntu/chat-bedrock-converse/node_modules/@aws-sdk/client-bedrock-runtime/dist-cjs/index.js:937:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async /home/ubuntu/chat-bedrock-converse/node_modules/@smithy/middleware-serde/dist-cjs/index.js:35:20
at async /home/ubuntu/chat-bedrock-converse/node_modules/@smithy/core/dist-cjs/index.js:165:18
at async /home/ubuntu/chat-bedrock-converse/node_modules/@smithy/middleware-retry/dist-cjs/index.js:320:38
at async /home/ubuntu/chat-bedrock-converse/node_modules/@aws-sdk/middleware-logger/dist-cjs/index.js:34:22
at async ChatBedrockConverse._generateNonStreaming (file:///home/ubuntu/chat-bedrock-convers/node_modules/@langchain/aws/dist/chat_models.js:219:26)
at async Promise.allSettled (index 0)
at async ChatBedrockConverse._generateUncached (file:///home/ubuntu/chat-bedrock-converse/node_modules/@langchain/core/dist/language_models/chat_models.js:173:29) {
'$fault': 'client',
'$metadata': {
httpStatusCode: 400,
requestId: '687b34bb-1b95-44d8-b2d2-5376e1058bc3',
extendedRequestId: undefined,
cfId: undefined,
attempts: 1,
totalRetryDelay: 0
}
}

Node.js v18.18.0

Description

The @langchain/aws ChatBedrockConverse implementation does not work with the standard LangGraph.js example:

https://langchain-ai.github.io/langgraphjs/#example

System Info

Node.js v18.18.0

"dependencies": {
  "@aws-sdk/credential-provider-node": "^3.609.0",
  "@langchain/aws": "^0.0.1",
  "@langchain/core": "^0.2.14",
  "@langchain/langgraph": "^0.0.26",
  "zod": "^3.23.8"
}

@MarcoLooy MarcoLooy changed the title ChatBedrockConvers - https://langchain-ai.github.io/langgraphjs/#example ChatBedrockConverse - https://langchain-ai.github.io/langgraphjs/#example Jul 12, 2024
@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jul 12, 2024

dosubot bot commented Jul 12, 2024

Hey there, @MarcoLooy! 👋 I'm here to help you out and keep you company while we wait for a human maintainer.

To resolve the issue with the ChatBedrockConverse implementation from the @langchain/aws package, you need to ensure that the final message in the API request is not an assistant message. The convertToConverseMessages function in the @langchain/aws package is responsible for converting messages to the format expected by the Bedrock API.

Here's a snippet of the convertToConverseMessages function that handles the conversion:

export function convertToConverseMessages(messages: BaseMessage[]): {
  converseMessages: BedrockMessage[];
  converseSystem: BedrockSystemContentBlock[];
} {
  const converseSystem: BedrockSystemContentBlock[] = messages
    .filter((msg) => msg._getType() === "system")
    .map((msg) => {
      const text = msg.content;
      if (typeof text !== "string") {
        throw new Error("System message content must be a string.");
      }
      return { text };
    });
  const converseMessages: BedrockMessage[] = messages
    .filter((msg) => !["system", "tool", "function"].includes(msg._getType()))
    .map((msg) => {
      if (msg._getType() === "ai") {
        const castMsg = msg as AIMessage;
        if (typeof castMsg.content === "string") {
          return {
            role: "assistant",
            content: [
              {
                text: castMsg.content,
              },
            ],
          };
        } else {
          if (castMsg.tool_calls && castMsg.tool_calls.length) {
            return {
              role: "assistant",
              content: castMsg.tool_calls.map((tc) => ({
                toolUse: {
                  toolUseId: tc.id,
                  name: tc.name,
                  input: tc.args,
                },
              })),
            };
          } else {
            const contentBlocks: ContentBlock[] = castMsg.content.map(
              (block) => {
                if (block.type === "text") {
                  return {
                    text: block.text,
                  };
                } else {
                  throw new Error(
                    `Unsupported content block type: ${block.type}`
                  );
                }
              }
            );
            return {
              role: "assistant",
              content: contentBlocks,
            };
          }
        }
      } else if (msg._getType() === "human" || msg._getType() === "generic") {
        if (typeof msg.content === "string") {
          return {
            role: "user",
            content: [
              {
                text: msg.content,
              },
            ],
          };
        } else {
          const contentBlocks: ContentBlock[] = msg.content.flatMap((block) => {
            if (block.type === "image_url") {
              const base64: string =
                typeof block.image_url === "string"
                  ? block.image_url
                  : block.image_url.url;
              return extractImageInfo(base64);
            } else if (block.type === "text") {
              return {
                text: block.text,
              };
            } else {
              throw new Error(`Unsupported content block type: ${block.type}`);
            }
          });
          return {
            role: "user",
            content: contentBlocks,
          };
        }
      } else if (msg._getType() === "tool") {
        const castMsg = msg as ToolMessage;
        if (typeof castMsg.content === "string") {
          return {
            role: undefined,
            content: [
              {
                toolResult: {
                  toolUseId: castMsg.tool_call_id,
                  content: [
                    {
                      text: castMsg.content,
                    },
                  ],
                },
              },
            ],
          };
        } else {
          return {
            role: undefined,
            content: [
              {
                toolResult: {
                  toolUseId: castMsg.tool_call_id,
                  content: [
                    {
                      json: castMsg.content,
                    },
                  ],
                },
              },
            ],
          };
        }
      } else {
        throw new Error(`Unsupported message type: ${msg._getType()}`);
      }
    });
  return { converseMessages, converseSystem };
}

To fix the issue, ensure that the final message in the converseMessages array is not an assistant message. You may need to adjust the order of messages or filter out the assistant message if it appears at the end.
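As a plain-JavaScript illustration of that workaround (the helper name stripTrailingAssistantMessage is hypothetical, not part of @langchain/aws; the actual fix landed in the library itself), one could guard the outgoing message list before the request is sent:

```javascript
// Hypothetical helper: drop a trailing assistant message before calling
// Bedrock, since the Converse API rejects a pre-filled assistant response
// when tools are configured.
function stripTrailingAssistantMessage(messages) {
  const last = messages[messages.length - 1];
  if (last && last.role === "assistant") {
    // Return a copy without the final assistant turn
    return messages.slice(0, -1);
  }
  return messages;
}

// Example: a conversation shape that would trigger the ValidationException
const conversation = [
  { role: "user", content: "what is the weather in sf" },
  { role: "assistant", content: "Let me check." },
];

console.log(stripTrailingAssistantMessage(conversation).length); // → 1
```

This is only a sketch of the constraint the error message describes; in practice the message conversion inside the integration package is the right place for such a guard, which is what the library fix addressed.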

Additionally, you can refer to the example provided in the integration_bedrock_tools_converse.ts file to understand how to properly invoke the ChatBedrockConverse model with tools:

import { ChatBedrockConverse } from "@langchain/aws";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatBedrockConverse({
  model: "anthropic.claude-3-haiku-20240307-v1:0",
  region: "us-east-1",
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
});

const weatherTool = tool(
  ({ city, state }) => `The weather in ${city}, ${state} is 72°F and sunny`,
  {
    name: "weather_tool",
    description: "Get the weather for a city",
    schema: z.object({
      city: z.string().describe("The city to get the weather for"),
      state: z.string().describe("The state to get the weather for").optional(),
    }),
  }
);

const modelWithTools = model.bindTools([weatherTool]);

const res = await modelWithTools.invoke("What's the weather in New York?");
console.log(res);

/*
AIMessage {
  content: [
    {
      type: 'text',
      text: "Okay, let's get the weather for New York City."
    }
  ],
  response_metadata: { ... },
  id: '49a97da0-e971-4d7f-9f04-2495e068c15e',
  tool_calls: [
    {
      id: 'tooluse_O6Q1Ghm7SmKA9mn2ZKmBzg',
      name: 'weather_tool',
      args: {
        'city': 'New York',
      },
    },
  ],
  usage_metadata: { input_tokens: 289, output_tokens: 68, total_tokens: 357 }
}
*/

This example demonstrates how to invoke the model with tools and handle the response, ensuring that the final message is appropriately formatted.


@bracesproul
Member

Fixed by #6043

This code will be released sometime today, then you'll have access to the fix. Thank you for your patience with this.

@MarcoLooy
Author

MarcoLooy commented Jul 17, 2024

@bracesproul super cool. Thanks a lot for fixing the issue, very much appreciated!

Just updated the "@langchain/aws" library and it now works perfectly.
