
Log exceptions in /generate to a file #431

Merged: @dlqqq merged 8 commits into jupyterlab:main from generate-log-exception on Nov 6, 2023

Conversation

@dlqqq (Member) commented on Nov 4, 2023

Description

  • Performs two method renames:
    • BaseChatHandler.process_message() => BaseChatHandler.on_message()
    • BaseChatHandler._process_message() => BaseChatHandler.process_message()
    • While the old naming was technically correct in its use of public and private methods, I found it needlessly obscure, so I renamed the methods and added doc comments to clarify the difference between the two.
  • Adds a new method on BaseChatHandler for chat handlers to override: self.handle_exc(). This lets chat handlers define their own way of handling exceptions; if left undefined, a default implementation provided by BaseChatHandler is used.
  • Logs exceptions raised by GenerateChatHandler to a timestamped file under ./jupyter-ai-logs/. (See the sketch after this list.)
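The pieces above fit together roughly as follows. This is a minimal sketch, not the actual implementation: the real BaseChatHandler lives in packages/jupyter-ai/jupyter_ai/chat_handlers/base.py (see the traceback below), the signatures are simplified, and the log-file name format here is an assumption apart from the period-for-colon convention noted in the commit list.

```python
# Minimal sketch of the flow described above; signatures simplified.
import os
import time
import traceback


class BaseChatHandler:
    async def on_message(self, message):
        """Public entry point: delegate to process_message(), routing any
        exception to the handler's handle_exc() hook; if that hook itself
        raises, fall back to _default_handle_exc()."""
        try:
            await self.process_message(message)
        except Exception as e:
            try:
                await self.handle_exc(e, message)
            except Exception as fallback_exc:
                await self._default_handle_exc(fallback_exc, message)

    async def process_message(self, message):
        """Each chat handler overrides this with its own behavior."""
        raise NotImplementedError

    async def handle_exc(self, e, message):
        """Overridable exception hook; defaults to the base implementation."""
        await self._default_handle_exc(e, message)

    async def _default_handle_exc(self, e, message):
        """Last-resort handler (sketch): print the traceback."""
        traceback.print_exception(type(e), e, e.__traceback__)


class GenerateChatHandler(BaseChatHandler):
    async def handle_exc(self, e, message):
        """Write the traceback to a timestamped file under
        ./jupyter-ai-logs/, using periods rather than colons in the
        timestamp (see the commit list below)."""
        os.makedirs("jupyter-ai-logs", exist_ok=True)
        timestamp = time.strftime("%Y-%m-%d-%H.%M.%S")
        log_path = os.path.join("jupyter-ai-logs", f"generate-{timestamp}.log")
        with open(log_path, "w") as log_file:
            traceback.print_exception(
                type(e), e, e.__traceback__, file=log_file
            )
```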

Demo

(Two screenshots from 2023-11-03 demonstrating the feature.)

@dlqqq added the enhancement (New feature or request) label on Nov 4, 2023
@ellisonbg (Contributor) commented:
I think this is a good approach as a fallback, but given that a common failure mode for /generate is that a model doesn't produce valid JSON, we should catch that error and handle it in a more user-centered manner. Otherwise, users are going to look at that log file and open a GitHub issue in this repo, when really we should be informing them that the model they were using wasn't able to generate valid JSON for the prompt and that they should try a different model.
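One possible shape for this, building on the sketch in the description (illustrative only; the actual fix is tracked in #435, per the reply below). OutputParserException is what LangChain raises on an unparseable completion, as the traceback in the next comment shows; self.reply() is a hypothetical shorthand for sending a chat response.

```python
# Illustrative sketch only; the real change is tracked in #435.
from langchain.schema.output_parser import OutputParserException


class GenerateChatHandler(BaseChatHandler):  # amends the earlier sketch
    async def handle_exc(self, e, message):
        if isinstance(e, OutputParserException):
            # self.reply() is hypothetical shorthand for sending a chat
            # response back to the user.
            self.reply(
                "The model you are using was unable to generate valid JSON "
                "for this prompt. Please try a different model.",
                message,
            )
            return
        # Any other exception keeps this PR's behavior: write the traceback
        # to the timestamped log file, as sketched in the description above.
        ...
```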

@dlqqq (Member, Author) commented on Nov 6, 2023

@ellisonbg I am tracking your suggestion as a separate issue in #435. Making this change will take more time, as we need to reliably detect when an LLM fails to generate valid JSON and distinguish that case from other exceptions. Currently, I am seeing a chained exception emitted from LangChain when I use some less capable language models:

```
Traceback (most recent call last):
  File "/Users/dlq/miniconda3/envs/jai/lib/python3.11/site-packages/langchain/output_parsers/pydantic.py", line 27, in parse
    json_object = json.loads(json_str, strict=False)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dlq/miniconda3/envs/jai/lib/python3.11/json/__init__.py", line 359, in loads
    return cls(**kw).decode(s)
           ^^^^^^^^^^^^^^^^^^^
  File "/Users/dlq/miniconda3/envs/jai/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dlq/miniconda3/envs/jai/lib/python3.11/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting ':' delimiter: line 20 column 28 (char 436)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Volumes/workplace/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/base.py", line 43, in on_message
    await self.process_message(message)
  File "/Volumes/workplace/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/generate.py", line 262, in process_message
    final_path = await self._generate_notebook(prompt=message.body)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/workplace/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/generate.py", line 238, in _generate_notebook
    outline = await generate_outline(prompt, llm=self.llm, verbose=True)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/workplace/jupyter-ai/packages/jupyter-ai/jupyter_ai/chat_handlers/generate.py", line 56, in generate_outline
    outline = parser.parse(outline)
              ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dlq/miniconda3/envs/jai/lib/python3.11/site-packages/langchain/output_parsers/pydantic.py", line 33, in parse
    raise OutputParserException(msg, llm_output=text)
langchain.schema.output_parser.OutputParserException: Failed to parse Outline from completion
```

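Given this chained traceback, one way to tell the JSON failure apart from other exceptions (a sketch; not necessarily how #435 was ultimately resolved) is to walk the exception chain for the underlying json.JSONDecodeError:

```python
import json


def is_json_parse_failure(exc: BaseException) -> bool:
    """Walk the exception chain (__cause__, then implicit __context__)
    looking for the json.JSONDecodeError seen in the traceback above."""
    seen: set[int] = set()
    current: BaseException | None = exc
    while current is not None and id(current) not in seen:
        if isinstance(current, json.JSONDecodeError):
            return True
        seen.add(id(current))
        current = current.__cause__ or current.__context__
    return False
```

Because the OutputParserException is raised while handling the JSONDecodeError (implicit chaining, per the "During handling of the above exception" line), the decode error appears on __context__ rather than __cause__, which is why the walk checks both.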
@dlqqq merged commit 831c42b into jupyterlab:main on Nov 6, 2023
4 checks passed
@dlqqq deleted the generate-log-exception branch on November 6, 2023 22:01
dbelgrod pushed a commit to dbelgrod/jupyter-ai that referenced this pull request Jun 10, 2024
* rename process_message() => on_message()

* remove unused import

* add handle_exc() method in BaseChatHandler

* add _default_handle_exc() to handle excs from handle_exc()

* log exceptions from /generate to a file

* pre-commit

* improve call to action in GenerateCH.handle_exc()

* prefer period over colon in timestamped filenames

Co-authored-by: Jason Weill <[email protected]>

Marchlak pushed a commit to Marchlak/jupyter-ai that referenced this pull request Oct 28, 2024 (same commit message as above)
Labels: enhancement (New feature or request)
Linked issue: Save log locally when /generate command fails
3 participants