diff --git a/docs/core_docs/docs/integrations/chat/openai.ipynb b/docs/core_docs/docs/integrations/chat/openai.ipynb
index 9b832eba98ad..eb829b093865 100644
--- a/docs/core_docs/docs/integrations/chat/openai.ipynb
+++ b/docs/core_docs/docs/integrations/chat/openai.ipynb
@@ -305,6 +305,36 @@
     "await llmWithCustomHeaders.invoke(\"Hi there!\");"
    ]
   },
+  {
+   "cell_type": "markdown",
+   "id": "7af61d1d",
+   "metadata": {},
+   "source": [
+    "## Disabling streaming usage metadata\n",
+    "\n",
+    "Some proxies or third-party providers present largely the same API interface as OpenAI, but don't support the more recently added `stream_options` parameter to return streaming usage. You can use `ChatOpenAI` to access these providers by disabling streaming usage like this:"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "id": "0ff40bd7",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "import { ChatOpenAI } from \"@langchain/openai\";\n",
+    "\n",
+    "const llmWithoutStreamUsage = new ChatOpenAI({\n",
+    "  temperature: 0.9,\n",
+    "  streamUsage: false,\n",
+    "  configuration: {\n",
+    "    baseURL: \"https://proxy.com\",\n",
+    "  },\n",
+    "});\n",
+    "\n",
+    "await llmWithoutStreamUsage.invoke(\"Hi there!\");"
+   ]
+  },
   {
    "cell_type": "markdown",
    "id": "013b6300",