update tools to take tools file in slash command
srdas committed Sep 16, 2024
1 parent 3159ddd commit 95a13d4
Showing 3 changed files with 52 additions and 20 deletions.
Binary file modified docs/source/_static/tools_correct_answer.png
16 changes: 14 additions & 2 deletions docs/source/users/tools.md
@@ -2,7 +2,19 @@

In many situations LLMs appear to handle complex mathematical formulas quite well, yet they often return incorrect answers. Even for textual responses, using custom functions can constrain responses to formats and content that are more accurate and acceptable.

Jupyter AI includes a slash command `/tools` that directs the LLM to use functions from a tools library that you provide. This is a single file, for example `mytools.py`, which must be stored under `.jupyter/jupyter-ai/tools/` in your home directory.

The usage of this slash command is as follows:

`/tools -t <tools_file_name> <query>`

For example, we may try:

`/tools -t mytools.py What is the sum of 1 and 2?`

Note that since the file has to be placed in `.jupyter/jupyter-ai/tools/`, only the file name is needed in the command, not its full path.

We provide an example of the tools file here, containing just three functions. Make sure to add the `@tool` decorator to each function and to import all packages that are not already installed within each function. The functions below are common financial formulas that are widely in use and you may expect that an LLM would be trained on these. While this is accurate, we will see that the LLM is unable to accurately execute the math in these formulas.

```
@tool
@@ -56,7 +68,7 @@ def calculate_monthly_payment(principal, annual_interest_rate, loan_term_years):
return monthly_payment
```
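The full tools file is collapsed in the diff above, but a minimal sketch of one such function can illustrate the shape. The function body and default values below are assumptions for illustration, not the exact code from the docs, and the `tool` decorator is a no-op stand-in so the sketch runs without LangChain installed (a real tools file would use `@tool` from LangChain):

```python
# Stand-in for LangChain's @tool decorator, so this sketch is self-contained;
# a real tools file would import the actual decorator instead.
def tool(fn):
    return fn

@tool
def calculate_monthly_payment(principal, annual_interest_rate=0.06, loan_term_years=30):
    """Calculate the monthly payment on a fixed-rate loan.
    The annual interest rate defaults to 6% if the user does not provide one."""
    r = annual_interest_rate / 12          # monthly interest rate
    n = loan_term_years * 12               # total number of monthly payments
    # Standard amortization formula: M = P * r / (1 - (1 + r)^-n)
    monthly_payment = principal * r / (1 - (1 + r) ** (-n))
    return monthly_payment

print(round(calculate_monthly_payment(100_000), 2))  # → 599.55
```

The docstring and the stated default are exactly the kind of guiding text the LLM uses to pick a tool and fill in missing arguments.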

Each function contains the `@tool` decorator and the required imports. Note also the docstring that describes what each tool does; this helps direct the LLM to the relevant tool. Providing sufficient guidance in each function is helpful, in the form of docstrings, variable annotations, and explicit argument comments, examples of which are shown in the code above. For example, default values in comments will be used by the LLM if the user forgets to provide them (see the explicit mention of a 6% interest rate in the `calculate_monthly_payment` function above).

When the `/tools` command is used, Jupyter AI will bind the custom tools file to the LLM currently in use and build a `LangGraph` (https://langchain-ai.github.io/langgraph/). It will use this graph to respond to the query and use the appropriate tools, if available.

56 changes: 38 additions & 18 deletions packages/jupyter-ai/jupyter_ai/chat_handlers/tools.py
@@ -1,6 +1,8 @@
import argparse
import ast
import math
from pathlib import Path

# LangGraph imports for using tools
import os
@@ -58,10 +60,19 @@ def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)

self.parser.prog = "/tools"
self.parser.add_argument(
"-t",
"--tools",
action="store",
default=None,
type=str,
help="Use tools in the given file name",
)
self.parser.add_argument("query", nargs=argparse.REMAINDER)
self.tools_file_path = None
# os.path.join(
# Path.home(), ".jupyter/jupyter-ai/tools", "mytools.py"
# ) # Maybe pass as parameter?
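Standalone, the argument parsing set up above can be exercised with a plain `argparse` parser (a hypothetical snippet outside the handler class, assuming the handler's parser behaves like a standard `argparse.ArgumentParser`):

```python
import argparse

# Mirror of the /tools parser: an optional -t/--tools file name, then the
# remainder of the message captured as the query.
parser = argparse.ArgumentParser(prog="/tools")
parser.add_argument("-t", "--tools", action="store", default=None, type=str,
                    help="Use tools in the given file name")
parser.add_argument("query", nargs=argparse.REMAINDER)

args = parser.parse_args(
    ["-t", "mytools.py", "What", "is", "the", "sum", "of", "1", "and", "2?"]
)
print(args.tools)            # → mytools.py
print(" ".join(args.query))  # → What is the sum of 1 and 2?
```

Note that with `argparse.REMAINDER`, the `-t` flag must come before the query; anything after the first positional token is swallowed into `query`.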

# https://python.langchain.com/v0.2/docs/integrations/platforms/
def setChatProvider(self, provider): # For selecting the model to bind tools with
@@ -110,31 +121,34 @@ def create_llm_chain(
/tools <query>
"""

def conditional_continue(self, state: MessagesState) -> Literal["tools", "__end__"]:
messages = state["messages"]
last_message = messages[-1]
if last_message.tool_calls:
return "tools"
return "__end__"

def get_tool_names(self, tools_file_path):
"""
Read a file and extract the function names following the @tool decorator.
Args:
tools_file_path (str): The path to the file.
Returns:
list: A list of function names.
"""
try:
with open(tools_file_path) as file:
content = file.read()
tree = ast.parse(content)
tools = []
for node in ast.walk(tree):
if isinstance(node, ast.FunctionDef):
for decorator in node.decorator_list:
if isinstance(decorator, ast.Name) and decorator.id == "tool":
tools.append(node.name)
return tools
except FileNotFoundError:
self.reply(f"Tools file not found at {tools_file_path}.")
return []  # avoid returning None to the caller
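The `ast`-based extraction in `get_tool_names` can be exercised on its own. A minimal sketch (the sample source string is an assumption for illustration):

```python
import ast

# Sample tools-file source; in practice this would be read from
# ~/.jupyter/jupyter-ai/tools/<tools_file_name>.
content = '''
@tool
def add(a, b):
    return a + b

def helper():  # no @tool decorator, so it is skipped
    pass
'''

# Walk the syntax tree and keep only function definitions
# decorated with a bare name `tool`.
tree = ast.parse(content)
tools = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef)
    and any(isinstance(d, ast.Name) and d.id == "tool" for d in node.decorator_list)
]
print(tools)  # → ['add']
```

Because the check matches `ast.Name` nodes only, a decorator written as `@tools.tool` (an `ast.Attribute`) would not be picked up, which is why the docs ask for a bare `@tool`.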

def toolChat(self, query):
print("TOOL CHAT", query)
@@ -169,7 +183,7 @@ def call_tool(state: MessagesState):
exec(file.read())

# Get tool names and create node with tools
tool_names = self.get_tool_names(file_path)
tools = [eval(j) for j in tool_names]
tool_node = ToolNode(tools)

@@ -195,7 +209,7 @@ def call_tool(state: MessagesState):
# Add edges to the graph
agentic_workflow.add_edge("__start__", "agent")
agentic_workflow.add_conditional_edges(
"agent", self.conditional_continue
)
agentic_workflow.add_edge("tools", "agent")
# Compile graph
@@ -210,6 +224,12 @@ async def process_message(self, message: HumanChatMessage):
args = self.parse_args(message)
if args is None:
return

if args.tools:
self.tools_file_path = os.path.join(
Path.home(), ".jupyter/jupyter-ai/tools", args.tools
)
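Resolving the `-t` argument to a full path, as in the snippet above, can be sketched standalone (mirroring the handler's logic; `"mytools.py"` stands in for `args.tools`):

```python
import os
from pathlib import Path

# The file name from -t is joined onto the fixed tools directory
# under the user's home, so only the bare name is ever needed.
tools_file_path = os.path.join(Path.home(), ".jupyter/jupyter-ai/tools", "mytools.py")
print(tools_file_path)  # e.g. /home/user/.jupyter/jupyter-ai/tools/mytools.py
```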

query = " ".join(args.query)
if not query:
self.reply(f"{self.parser.format_usage()}", message)
@@ -226,7 +246,7 @@ async def process_message(self, message: HumanChatMessage):
response = """Sorry, tool usage failed.
Either (i) this LLM does not accept tools, (ii) there is an error in
the custom tools file, (iii) you may also want to check the
location and name of the tools file, or (iv) you may need to install the
`langchain_<provider_name>` package. (v) Finally, check that you have
authorized access to the LLM."""
self.reply(response, message)
