From b590659bf12939db1c58869ee127b96696c2409e Mon Sep 17 00:00:00 2001 From: dloman118 <99347459+dloman118@users.noreply.github.com> Date: Tue, 20 Aug 2024 08:13:50 -0400 Subject: [PATCH 1/2] Groq <> Toolhouse --- toolhouse/Groq <> Toolhouse.ipynb | 356 ++++++++++++++++++++++++++++++ 1 file changed, 356 insertions(+) create mode 100644 toolhouse/Groq <> Toolhouse.ipynb diff --git a/toolhouse/Groq <> Toolhouse.ipynb b/toolhouse/Groq <> Toolhouse.ipynb new file mode 100644 index 0000000..6c75b4e --- /dev/null +++ b/toolhouse/Groq <> Toolhouse.ipynb @@ -0,0 +1,356 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "id": "73b0444b", + "metadata": {}, + "source": [ + "# Using Toolhouse for Tool Use with Groq API" + ] + }, + { + "cell_type": "markdown", + "id": "626a645c", + "metadata": {}, + "source": [ + "[Toolhouse](https://app.toolhouse.ai/) is the first marketplace of AI tools, featuring tools like Code Interpreter, Web search and an Email tool. Rat . In this short demo we'll show how to use Toolhouse with Groq API, in particular Groq's finetuned for tool use `llama3-groq-70b-8192-tool-use-preview` model, for effective and fast tool calling." 
+ ] + }, + { + "cell_type": "markdown", + "id": "002c27a7", + "metadata": {}, + "source": [ + "### Setup" + ] + }, + { + "cell_type": "code", + "execution_count": 1, + "id": "d237c86c", + "metadata": {}, + "outputs": [], + "source": [ + "# Import packages\n", + "from toolhouse import Toolhouse\n", + "from groq import Groq" + ] + }, + { + "cell_type": "markdown", + "id": "458ace84", + "metadata": {}, + "source": [ + "To use Groq and Toolhouse, set GROQ_API_KEY and TOOLHOUSE_API_KEY as environment variables:" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "id": "bf32deff", + "metadata": {}, + "outputs": [], + "source": [ + "client = Groq()\n", + "th = Toolhouse()" + ] + }, + { + "cell_type": "markdown", + "id": "32220c3f", + "metadata": {}, + "source": [ + "We will use the Groq custom finetuned [Llama3 70B Tool Use model](https://wow.groq.com/introducing-llama-3-groq-tool-use-models/). This model has been specifically finetuned off of Meta Llama3 70B for tool use and function calling, and achieves state-of-the-art performance on the Berkeley Function Calling Leaderboard:" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "id": "8771ddd7", + "metadata": {}, + "outputs": [], + "source": [ + "MODEL = \"llama3-groq-70b-8192-tool-use-preview\"" + ] + }, + { + "cell_type": "markdown", + "id": "89ad14fd", + "metadata": {}, + "source": [ + "We can use the `th.get_tools()` function to display all of the Toolhouse tools available:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "id": "d814c0bf", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "TOOLS AVAILABLE:\n", + "Name: code_interpreter\n", + "Type: function\n", + "Description: Allows you to run the code you generate. You can use this tool to verify that the code you generate is valid and contains all the relevant dependencies. IMPORTANT: When sending code, end your code with a print statement of the result. 
If you return something, make sure to change that return statement for a print statement.\n", + "Required Parameters:\n", + "code_str - The code to execute. Only Python is supported at the moment. IMPORTANT: When sending code, end your code with a print statement of the result. If you return something, make sure to change that return statement for a print statement.\n", + "\n", + "\n" + ] + } + ], + "source": [ + "print('TOOLS AVAILABLE:')\n", + "for tool in th.get_tools():\n", + " print(f\"Name: {tool['function']['name']}\")\n", + " print(f\"Type: {tool['type']}\")\n", + " print(f\"Description: {tool['function']['description']}\")\n", + " print('Required Parameters:')\n", + " for required_parameter in tool['required']:\n", + " print(f\"{required_parameter} - {tool['function']['parameters']['properties'][required_parameter]['description']}\")\n", + " print('\\n')" + ] + }, + { + "cell_type": "markdown", + "id": "b5b2b413", + "metadata": {}, + "source": [ + "For this demo, we only have one tool available: `code_interpretor`. This tool takes in the `code_str` parameter, which it identifies from the user message, and runs the code. " + ] + }, + { + "cell_type": "markdown", + "id": "cd565fc7", + "metadata": {}, + "source": [ + "### Configure Tool Call" + ] + }, + { + "cell_type": "markdown", + "id": "09bf2fec", + "metadata": {}, + "source": [ + "First we'll configure the user message to the LLM with the code we want run:" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "id": "4b457ee7", + "metadata": {}, + "outputs": [], + "source": [ + "# User message to the LLM\n", + "messages = [\n", + " {\n", + " \"role\": \"user\",\n", + " \"content\": \"\"\"Run this Python code: ```python\n", + "a = 409830948\n", + "print(a / 9834294824)\n", + "```\"\"\",\n", + " }\n", + "]" + ] + }, + { + "cell_type": "markdown", + "id": "396cabd1", + "metadata": {}, + "source": [ + "We'll send this to our LLM via the Groq API. 
Note that this works just like a typical [Tool Use example in Groq](https://github.com/groq/groq-api-cookbook/blob/main/function-calling-101-ecommerce/Function-Calling-101-Ecommerce.ipynb), but instead of defining the tools ourselves we are using our existing tools in Toolhouse:" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "id": "08148e22", + "metadata": {}, + "outputs": [], + "source": [ + "# Groq API response with Toolhouse tools\n", + "response = client.chat.completions.create(\n", + " model=MODEL,\n", + " messages=messages,\n", + " # Passes Code Execution as a tool\n", + " tools=th.get_tools(),\n", + ")" + ] + }, + { + "cell_type": "markdown", + "id": "f279224f", + "metadata": {}, + "source": [ + "As you can see, the LLM properly identified that we'd like to invoke the code_interpreter tool, and properly passed our code to it via the `code_str` parameter:" + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "id": "8505b93d", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ID: call_2s9a\n", + "Type: function\n", + "Function: Function(arguments='{\"code_str\": \"a = 409830948\\\\nprint(a / 9834294824)\"}', name='code_interpreter')\n", + "\n", + "\n" + ] + } + ], + "source": [ + "tools_called = response.choices[0].message.tool_calls\n", + "for tool_called in tools_called:\n", + " print(f\"ID: {tool_called.id}\")\n", + " print(f\"Type: {tool_called.type}\")\n", + " print(f\"Function: {tool_called.function}\")\n", + " print('\\n')" + ] + }, + { + "cell_type": "markdown", + "id": "e13ddcaf", + "metadata": {}, + "source": [ + "### Execute Tool Call" + ] + }, + { + "cell_type": "markdown", + "id": "a0a0978f", + "metadata": {}, + "source": [ + "Now we'll run the Code Execution tool, get the result, and append it to the context. 
The tool gets run through Toolhouse via the `run_tools` command, with the parameters that were identified in the previous LLM call.\n", + "\n", + "As you can see from this tool run, our messages list now has entries for the assistant (the tool call configuration) and the tool (the tool call result):" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "id": "355f0f50", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Role: assistant\n", + "Tool Calls: [{'id': 'call_2s9a', 'function': {'arguments': '{\"code_str\": \"a = 409830948\\\\nprint(a / 9834294824)\"}', 'name': 'code_interpreter'}, 'type': 'function'}]\n", + "Content: None\n", + "\n", + "\n", + "Role: tool\n", + "Tool Call ID: call_2s9a\n", + "Content: 0.04167364872973225\n", + "\n", + "\n", + "\n" + ] + } + ], + "source": [ + "tool_run = th.run_tools(response)\n", + "messages.extend(tool_run)\n", + "\n", + "for message in tool_run:\n", + " print(f\"Role: {message['role']}\")\n", + " if 'tool_calls' in message:\n", + " print(f\"Tool Calls: {message['tool_calls']}\")\n", + " if 'tool_call_id' in message:\n", + " print(f\"Tool Call ID: {message['tool_call_id']}\")\n", + " print(f\"Content: {message['content']}\")\n", + " print('\\n')" + ] + }, + { + "cell_type": "markdown", + "id": "84ceb341", + "metadata": {}, + "source": [ + "We can compare the tool call result to the actual result by running the python code:" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "id": "84f24416", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "ACTUAL RESULT: 0.04167364872973225\n" + ] + } + ], + "source": [ + "a = 409830948\n", + "print('ACTUAL RESULT:', a / 9834294824)" + ] + }, + { + "cell_type": "markdown", + "id": "69667e18", + "metadata": {}, + "source": [ + "Finally, we'll send our message list with the completed tool call back to the LLM to get a proper response:" + ] + }, + { + "cell_type": "code", + 
"execution_count": 10, + "id": "f3b5513b", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "LLM RESPONSE: The result of the division is 0.04167364872973225.\n" + ] + } + ], + "source": [ + "response = client.chat.completions.create(\n", + " model=MODEL,\n", + " messages=messages,\n", + " tools=th.get_tools(),\n", + ")\n", + "\n", + "print('LLM RESPONSE:', response.choices[0].message.content)" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.3" + } + }, + "nbformat": 4, + "nbformat_minor": 5 +} From 473924a2f0302f67b8962e58ab4c774e25771739 Mon Sep 17 00:00:00 2001 From: dloman118 <99347459+dloman118@users.noreply.github.com> Date: Tue, 20 Aug 2024 13:24:05 -0400 Subject: [PATCH 2/2] PR feedback, rename --- .../Groq <> Toolhouse.ipynb | 29 +++++++++++-------- 1 file changed, 17 insertions(+), 12 deletions(-) rename {toolhouse => toolhouse-for-tool-use-with-groq-api}/Groq <> Toolhouse.ipynb (78%) diff --git a/toolhouse/Groq <> Toolhouse.ipynb b/toolhouse-for-tool-use-with-groq-api/Groq <> Toolhouse.ipynb similarity index 78% rename from toolhouse/Groq <> Toolhouse.ipynb rename to toolhouse-for-tool-use-with-groq-api/Groq <> Toolhouse.ipynb index 6c75b4e..0908413 100644 --- a/toolhouse/Groq <> Toolhouse.ipynb +++ b/toolhouse-for-tool-use-with-groq-api/Groq <> Toolhouse.ipynb @@ -13,7 +13,7 @@ "id": "626a645c", "metadata": {}, "source": [ - "[Toolhouse](https://app.toolhouse.ai/) is the first marketplace of AI tools, featuring tools like Code Interpreter, Web search and an Email tool. Rat . 
In this short demo we'll show how to use Toolhouse with Groq API, in particular Groq's finetuned for tool use `llama3-groq-70b-8192-tool-use-preview` model, for effective and fast tool calling." + "[Toolhouse](https://app.toolhouse.ai/) is the first marketplace of AI tools, featuring Code Interpreter, Web Search, and Email tools, among others. In this short demo, we'll show how to use Toolhouse with Groq API, in particular the Groq `llama3-groq-70b-8192-tool-use-preview` Large Language Model (LLM) that is fine-tuned for tool use, for effective and fast tool calling." ] }, { @@ -41,7 +41,12 @@ "id": "458ace84", "metadata": {}, "source": [ - "To use Groq and Toolhouse, set GROQ_API_KEY and TOOLHOUSE_API_KEY as environment variables:" + "To integrate Groq and Toolhouse, you'll need to set up two environment variables: `GROQ_API_KEY` and `TOOLHOUSE_API_KEY`. Follow these steps to obtain your API keys:\n", + "\n", + "* **Groq API Key**: Create a free Groq API key by visiting the [Groq Console](https://console.groq.com/keys).\n", + "* **Toolhouse API Key**: Sign up for Toolhouse using [this link](https://join.toolhouse.ai) to receive $150 in credits. Then, navigate to the [Toolhouse API Keys page](https://app.toolhouse.ai/settings/api-keys) to generate your API key.\n", + "\n", + "Once you have both API keys, set them as environment variables to start using Groq and Toolhouse." ] }, { @@ -60,7 +65,7 @@ "id": "32220c3f", "metadata": {}, "source": [ - "We will use the Groq custom finetuned [Llama3 70B Tool Use model](https://wow.groq.com/introducing-llama-3-groq-tool-use-models/). This model has been specifically finetuned off of Meta Llama3 70B for tool use and function calling, and achieves state-of-the-art performance on the Berkeley Function Calling Leaderboard:" + "We will use Groq's custom fine-tuned [Llama3 70B Tool Use model](https://wow.groq.com/introducing-llama-3-groq-tool-use-models/). 
This model is based on the Meta Llama3 70B model and was fine-tuned for tool use and function calling, achieving state-of-the-art performance on the Berkeley Function Calling Leaderboard:" ] }, { @@ -119,7 +124,7 @@ "id": "b5b2b413", "metadata": {}, "source": [ - "For this demo, we only have one tool available: `code_interpretor`. This tool takes in the `code_str` parameter, which it identifies from the user message, and runs the code. " + "For this demo, we will be using the [code_interpreter](https://app.toolhouse.ai/store/code_interpreter) tool. This tool takes in the `code_str` parameter, which it identifies from the user message, and runs the code. " ] }, { @@ -162,7 +167,7 @@ "id": "396cabd1", "metadata": {}, "source": [ - "We'll send this to our LLM via the Groq API. Note that this works just like a typical [Tool Use example in Groq](https://github.com/groq/groq-api-cookbook/blob/main/function-calling-101-ecommerce/Function-Calling-101-Ecommerce.ipynb), but instead of defining the tools ourselves we are using our existing tools in Toolhouse:" + "We'll send this to our LLM via Groq API. 
Note that this works just like a typical [Tool Use example in Groq](https://github.com/groq/groq-api-cookbook/blob/main/function-calling-101-ecommerce/Function-Calling-101-Ecommerce.ipynb), but instead of defining the tools ourselves we are using our existing tools in Toolhouse:" ] }, { @@ -186,7 +191,7 @@ "id": "f279224f", "metadata": {}, "source": [ - "As you can see, the LLM properly identified that we'd like to invoke the code_interpreter tool, and properly passed our code to it via the `code_str` parameter:" + "As you can see, the LLM properly identified that we'd like to invoke the `code_interpreter` tool, and properly passed our code to it via the `code_str` parameter:" ] }, { @@ -199,7 +204,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "ID: call_2s9a\n", + "ID: call_d47c\n", "Type: function\n", "Function: Function(arguments='{\"code_str\": \"a = 409830948\\\\nprint(a / 9834294824)\"}', name='code_interpreter')\n", "\n", @@ -245,12 +250,12 @@ "output_type": "stream", "text": [ "Role: assistant\n", - "Tool Calls: [{'id': 'call_2s9a', 'function': {'arguments': '{\"code_str\": \"a = 409830948\\\\nprint(a / 9834294824)\"}', 'name': 'code_interpreter'}, 'type': 'function'}]\n", + "Tool Calls: [{'id': 'call_d47c', 'function': {'arguments': '{\"code_str\": \"a = 409830948\\\\nprint(a / 9834294824)\"}', 'name': 'code_interpreter'}, 'type': 'function'}]\n", "Content: None\n", "\n", "\n", "Role: tool\n", - "Tool Call ID: call_2s9a\n", + "Tool Call ID: call_d47c\n", "Content: 0.04167364872973225\n", "\n", "\n", @@ -277,7 +282,7 @@ "id": "84ceb341", "metadata": {}, "source": [ - "We can compare the tool call result to the actual result by running the python code:" + "We can compare the tool call result to the actual result by running the Python code:" ] }, { @@ -317,7 +322,7 @@ "name": "stdout", "output_type": "stream", "text": [ - "LLM RESPONSE: The result of the division is 0.04167364872973225.\n" + "LLM RESPONSE: The result of the division is 
approximately 0.04167364872973225.\n" ] } ], @@ -348,7 +353,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.10.3" + "version": "3.11.9" } }, "nbformat": 4,
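For reviewers who want to trace the notebook's message-list bookkeeping without Groq or Toolhouse API keys, here is a minimal stubbed sketch. The real calls (`client.chat.completions.create`, `th.run_tools`) are replaced with local stand-ins: the `run_code_interpreter` helper and the `call_demo` id are invented for illustration and are not part of the Toolhouse SDK.

```python
# Stubbed sketch of the notebook's flow: a user message asks to run code,
# an (simulated) LLM tool call names code_interpreter, the tool result is
# appended to the message list. No API keys required.
import contextlib
import io
import json

def run_code_interpreter(code_str: str) -> str:
    """Stand-in for Toolhouse's code_interpreter: run the snippet, capture stdout."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code_str, {})  # fresh namespace, like an isolated interpreter
    return buf.getvalue().strip()

code_str = "a = 409830948\nprint(a / 9834294824)"
messages = [{"role": "user",
             "content": f"Run this Python code: ```python\n{code_str}\n```"}]

# Mimic what th.run_tools(response) appends: one assistant entry recording
# the tool call, then one tool entry carrying the result.
tool_run = [
    {
        "role": "assistant",
        "tool_calls": [{
            "id": "call_demo",  # hypothetical id; the notebook saw e.g. call_d47c
            "type": "function",
            "function": {"name": "code_interpreter",
                         "arguments": json.dumps({"code_str": code_str})},
        }],
        "content": None,
    },
    {"role": "tool", "tool_call_id": "call_demo",
     "content": run_code_interpreter(code_str)},
]
messages.extend(tool_run)

print(messages[-1]["content"])  # 0.04167364872973225
```

With the tool result in place, the extended `messages` list is exactly what the notebook sends back to the model for the final natural-language answer.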