parallelization issue with telemetry context manager #171

Closed

dumbPy opened this issue Apr 20, 2024 · 1 comment

@dumbPy commented Apr 20, 2024

The telemetry context managers recently introduced in #31 by @holtskinner cause issues with parallelization.

Given that generation is an I/O-bound task, concurrency is crucial and cannot be overlooked.

If multiple models are awaited in parallel, they all seem to append to the same list, causing an exception on exit.

Here's a reproducible example:

import asyncio

from langchain_core.prompts import ChatPromptTemplate
from langchain_google_vertexai import ChatVertexAI

model1 = ChatVertexAI(model_name="chat-bison@002")
model2 = ChatVertexAI(model_name="gemini-1.0-pro")

chain1 = ChatPromptTemplate.from_messages(["Why is the sky blue?"]) | model1
chain2 = ChatPromptTemplate.from_messages(["Why is the sky blue?"]) | model2


async def main():
    # Await both chains concurrently. Each ainvoke enters
    # telemetry.tool_context_manager, which mutates shared state.
    # (asyncio.TaskGroup requires Python 3.11+.)
    async with asyncio.TaskGroup() as tg:
        tg.create_task(chain1.ainvoke({}))
        tg.create_task(chain2.ainvoke({}))


asyncio.run(main())

Exception:

 | Traceback (most recent call last):
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2536, in ainvoke
    |     input = await step.ainvoke(
    |             ^^^^^^^^^^^^^^^^^^^
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 179, in ainvoke
    |     llm_result = await self.agenerate_prompt(
    |                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 570, in agenerate_prompt
    |     return await self.agenerate(
    |            ^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 530, in agenerate
    |     raise exceptions[0]
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 715, in _agenerate_with_cache
    |     result = await self._agenerate(
    |              ^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 635, in _agenerate
    |     with telemetry.tool_context_manager(self._user_agent):
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/contextlib.py", line 144, in __exit__
    |     next(self.gen)
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/google/cloud/aiplatform/telemetry.py", line 48, in tool_context_manager
    |     _pop_tool_name(tool_name)
    |   File "/Users/sufiyan/micromamba/envs/nlp-cloud-server/lib/python3.11/site-packages/google/cloud/aiplatform/telemetry.py", line 57, in _pop_tool_name
    |     raise RuntimeError(
    | RuntimeError: Tool context error detected. This can occur due to parallelization.
    +------------------------------------
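
The traceback suggests telemetry.py keeps a single module-level stack of tool names: the context manager appends the user agent on entry and pops it on exit, so two interleaved tasks can each push before either pops, and the first task to exit finds the wrong name on top. A simplified sketch of that failure mode (names are illustrative, not the exact library source):

import asyncio
import contextlib

# Stand-in for the shared, module-level stack the traceback suggests
# telemetry.py keeps (the real variable name may differ).
_tool_names = []

@contextlib.contextmanager
def tool_context_manager(tool_name):
    _tool_names.append(tool_name)
    try:
        yield
    finally:
        # If another task pushed its own name in the meantime, the top of
        # the stack no longer matches and the pop is rejected.
        if not _tool_names or _tool_names[-1] != tool_name:
            raise RuntimeError(
                "Tool context error detected. This can occur due to parallelization."
            )
        _tool_names.pop()

async def call(tool_name):
    with tool_context_manager(tool_name):
        await asyncio.sleep(0)  # yield control so the two tasks interleave

async def main():
    # Both tasks append before either pops: the stack is ["a", "b"] when
    # task "a" exits, so its pop is rejected with the RuntimeError above.
    await asyncio.gather(call("a"), call("b"))

asyncio.run(main())
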
@lkuligin (Collaborator)

Can you try the version from main, please? It works without an error on my side.
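
For reference, the standard way to make such a stack safe under asyncio is contextvars.ContextVar, which gives each task its own view of the value; a minimal sketch of that approach (illustrative only, not necessarily the actual change on main):

import asyncio
import contextlib
import contextvars

# Each asyncio task gets its own view of this variable, so concurrent
# context managers can no longer corrupt one another's stack.
_tool_names = contextvars.ContextVar("tool_names", default=())

@contextlib.contextmanager
def tool_context_manager(tool_name):
    token = _tool_names.set(_tool_names.get() + (tool_name,))
    try:
        yield
    finally:
        _tool_names.reset(token)  # restore exactly this task's previous stack

async def call(tool_name):
    with tool_context_manager(tool_name):
        await asyncio.sleep(0)

async def main():
    # The same interleaving that broke the shared-list version now passes.
    await asyncio.gather(call("a"), call("b"))

asyncio.run(main())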
