update tools.py and tools.md
srdas committed Sep 17, 2024
1 parent 3bee6f1 commit 06e924e
Showing 3 changed files with 108 additions and 55 deletions.
Binary file added docs/source/_static/tools_simple_example.png
56 changes: 49 additions & 7 deletions docs/source/users/tools.md
@@ -2,19 +2,43 @@

In many situations LLMs appear to handle complex mathematical formulas quite well, yet they often return incorrect answers. Even for textual responses, using custom functions can constrain responses to formats and content that are more accurate and acceptable.

Jupyter AI includes a slash command `/tools` that directs the LLM to use functions from a tools library that you provide. This is a single file titled `mytools.py` which will be stored under `.jupyter/jupyter-ai/tools/`.
Jupyter AI includes a slash command `/tools` that directs the LLM to use functions from a tools library that you provide. You can have multiple tool files stored in the subdirectory `.jupyter/jupyter-ai/tools/`.

The usage of this slash command is as follows:
You can use a single tool file from this subdirectory. In this case, the usage of the slash command is as follows:

`/tools -t <tools_file_name> <query>`
```
/tools -t <tools_file_name> <query>
```

For example, we may try using a tools file called `arithmetic.py`. Note that since the file has to be placed in `.jupyter/jupyter-ai/tools/`, only the file name is needed in the command.

```
/tools -t arithmetic.py What is the sum of 1 and 2?
```

The contents of the example file `arithmetic.py` are very simple:

```
@tool
def multiply(first_number: float, second_number: float):
    """Multiplies two numbers together."""
    return first_number * second_number

@tool
def add(first_number: float, second_number: float):
    """Adds two numbers together."""
    return first_number + second_number
```

For example, we may try:
The result is shown below:

`/tools -t mytools.py What is the sum of 1 and 2?`
<img src="../_static/tools_simple_example.png"
width="75%"
alt='Use of the arithmetic tools file.'
class="screenshot" />

Note that since the file has to be placed in `.jupyter/jupyter-ai/tools/`, only the file name is needed in the command.

We provide an example of the tools file here, containing just three functions. Make sure to add the `@tool` decorator to each function and to import all packages that are not already installed within each function. The functions below are common financial formulas that are widely in use and you may expect that an LLM would be trained on these. While this is accurate, we will see that the LLM is unable to accurately execute the math in these formulas.
We provide another example of a tools file here, called `finance.py`, containing just three functions. Make sure to add the `@tool` decorator to each function and to import, within each function, any packages that are not already imported. The functions below are common financial formulas that are widely used, and you may expect that an LLM would have been trained on them. While this is true, we will see that the LLM is unable to accurately execute the math in these formulas.

```
@tool
@@ -87,3 +111,21 @@ Next, use the `/tools` command with the same query to get the correct answer:
class="screenshot" />

You can try the other tools in this example or build your own custom tools file to experiment with this feature.

If you do not specify a tool file, all the tool files are used together, and the command is simply:

```
/tools <query>
```

To list all the tool file names:

```
/tools -l
```

and the response is:

```
The available tools files are: ['arithmetic.py', 'finance.py']
```
107 changes: 59 additions & 48 deletions packages/jupyter-ai/jupyter_ai/chat_handlers/tools.py
@@ -65,34 +65,19 @@ def __init__(self, *args, **kwargs):
action="store",
default=None,
type=str,
help="Use tools in the given file name",
help="Uses tools in the given file name",
)
self.parser.add_argument(
"-l",
"--list",
action="store_true",
help="Lists available files in tools directory.",
)

self.parser.add_argument("query", nargs=argparse.REMAINDER)
self.tools_file_path = None

# https://python.langchain.com/v0.2/docs/integrations/platforms/
def setChatProvider(self, provider): # For selecting the model to bind tools with
try:
if "bedrock" in provider.name.lower():
chat_provider = "ChatBedrock"
elif "ollama" in provider.name.lower():
chat_provider = "ChatOllama"
elif "anthropic" in provider.name.lower():
chat_provider = "ChatAnthropic"
elif "azure" in provider.name.lower():
chat_provider = "AzureChatOpenAI"
elif "openai" in provider.name.lower():
chat_provider = "ChatOpenAI"
elif "cohere" in provider.name.lower():
chat_provider = "ChatCohere"
elif "google" in provider.name.lower():
chat_provider = "ChatGoogleGenerativeAI"
return chat_provider
except Exception as e:
self.log.error(e)
response = """The related chat provider is not supported."""
self.reply(response)

def create_llm_chain(
self, provider: Type[BaseProvider], provider_params: Dict[str, str]
):
@@ -102,7 +87,7 @@ def create_llm_chain(
}
llm = provider(**unified_parameters)
self.llm = llm
self.chat_provider = self.setChatProvider(provider)
# self.chat_provider = self.setChatProvider(provider)
memory = ConversationBufferWindowMemory(
memory_key="chat_history", return_messages=True, k=2
)
@@ -123,8 +108,22 @@ def conditional_continue(self, state: MessagesState) -> Literal["tools", "__end_
if last_message.tool_calls:
return "tools"
return "__end__"

# Get required tool files from ``.jupyter/jupyter-ai/tools/``
def getToolFiles(self, fpath):
if os.path.isfile(fpath):
file_paths = [fpath]
elif os.path.isdir(fpath):
file_paths = []
for filename in os.listdir(fpath):
file_paths.append(os.path.join(fpath, filename))
else:
self.reply("No tools found.")
return
return file_paths
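The file-resolution step above can be sketched in isolation (a stdlib-only sketch; the function name is illustrative and the "No tools found." reply is reduced to a comment):

```python
import os
import tempfile

def get_tool_files(fpath: str) -> list[str]:
    """Return tool file paths: the file itself, or every file in a directory."""
    if os.path.isfile(fpath):
        return [fpath]
    if os.path.isdir(fpath):
        return [os.path.join(fpath, name) for name in sorted(os.listdir(fpath))]
    return []  # the handler replies "No tools found." in this case

# Demonstrate with a throwaway directory holding two empty tool files.
with tempfile.TemporaryDirectory() as tools_dir:
    for name in ("arithmetic.py", "finance.py"):
        open(os.path.join(tools_dir, name), "w").close()
    found = [os.path.basename(p) for p in get_tool_files(tools_dir)]
    print(found)  # → ['arithmetic.py', 'finance.py']
```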


def get_tool_names(self, tools_file_path):
def getToolNames(self, tools_file_path):
"""
Read a file and extract the function names following the @tool decorator.
Args:
@@ -145,8 +144,8 @@ def get_tool_names(self, tools_file_path):
and decorator.id == "tool"
):
tools.append(node.name)
return tools
except FileNotFoundError as e:
return tools # this is a list
except FileNotFoundError as e: # to do
self.reply(f"Tools file not found at {tools_file_path}.")
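The decorator scan in `getToolNames` can be sketched in isolation with the standard `ast` module (a minimal sketch of the same technique, operating on source text instead of a file path):

```python
import ast

def get_tool_names(source: str) -> list[str]:
    """Extract names of functions decorated with a bare @tool decorator."""
    names = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for decorator in node.decorator_list:
                if isinstance(decorator, ast.Name) and decorator.id == "tool":
                    names.append(node.name)
    return names

sample = '''
@tool
def add(first_number: float, second_number: float):
    return first_number + second_number

def helper():
    pass
'''
print(get_tool_names(sample))  # → ['add']
```

Because this inspects the syntax tree rather than executing the file, it finds `@tool` functions without importing anything.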

def toolChat(self, query):
Expand All @@ -157,6 +156,7 @@ def toolChat(self, query):
response = chunk["messages"][-1].pretty_print()
return response


##### MAIN FUNCTION #####
def useLLMwithTools(self, query):
"""
@@ -171,35 +171,38 @@
Every time a query is submitted the langgraph is rebuilt in case the tools file has been changed.
"""

# Calls the requisite tool in the LangGraph
def call_tool(state: MessagesState):
messages = state["messages"]
response = self.model_with_tools.invoke(messages)
return {"messages": [response]}

# Get all tool objects from the tool files
def getTools(file_paths):
            if len(file_paths) > 0:
tool_names = []
for file_path in file_paths:
with open(file_path) as file:
exec(file.read())
tool_names = tool_names + self.getToolNames(file_path)
tools = [eval(j) for j in tool_names]
return tools

# Read in the tools file, WARNING - THIS USES EXEC()
file_path = self.tools_file_path
with open(file_path) as file:
exec(file.read())

# Get tool names and create node with tools
tool_names = self.get_tool_names(file_path)
tools = [eval(j) for j in tool_names]
# Get tool file(s), then tools within tool files, and create tool node from tools
file_paths = self.getToolFiles(self.tools_file_path)
tools = getTools(file_paths)
tool_node = ToolNode(tools)

# Bind tools to LLM
# print("SELF.LLM_CLASS", self.llm.__class__.id, "MODEL", self.llm.model_id, "CHAT PROVIDER", eval(self.chat_provider.__class__), "SELF.LLM.CHAT_MODELS", self.llm.chat_models)
# self.model_with_tools = self.llm.__class__(
# model_id=self.llm.model_id
# ).bind_tools(tools)
if self.chat_provider == "ChatBedrock":
self.model_with_tools = eval(self.chat_provider)(
model_id=self.llm.model_id, # model_kwargs={"temperature": 0}
).bind_tools(tools)
else:
self.model_with_tools = eval(self.chat_provider)(
model=self.llm.model_id, # temperature=0
# Check if the LLM class takes tools else advise user accordingly.
# Can be extended to include temperature parameter
try:
self.model_with_tools = self.llm.__class__(
model_id=self.llm.model_id
).bind_tools(tools)

except Exception as e:
self.reply(f"Not a chat model, cannot be used with tools. {e}")

# Initialize graph
agentic_workflow = StateGraph(MessagesState)
# Define the agent and tool nodes we will cycle between
@@ -222,10 +225,18 @@ async def process_message(self, message: HumanChatMessage):
if args is None:
return

if args.tools:
if args.list:
tool_files = os.listdir(os.path.join(Path.home(), ".jupyter/jupyter-ai/tools"))
self.reply(f"The available tools files are: {tool_files}")
return
elif args.tools:
self.tools_file_path = os.path.join(
Path.home(), ".jupyter/jupyter-ai/tools", args.tools
)
else:
self.tools_file_path = os.path.join(
Path.home(), ".jupyter/jupyter-ai/tools"
)

query = " ".join(args.query)
if not query:
