diff --git a/docs/docs/02-examples/02-api.md b/docs/docs/02-examples/02-api.md index 468fdd48..6c312c23 100644 --- a/docs/docs/02-examples/02-api.md +++ b/docs/docs/02-examples/02-api.md @@ -105,4 +105,4 @@ For a more advanced DigitalOcean chatbot, see our [DigitalOcean Agent](https://g To read more about OpenAPI tools in GPTScript, see the [OpenAPI Tools](../03-tools/03-openapi.md) article. -To read more about credential storage in GPTScript, see the [Credentials](../02-credentials.md) article. +To read more about credential storage in GPTScript, see the [Credentials](../06-credentials.md) article. diff --git a/docs/docs/03-tools/01-using.md b/docs/docs/03-tools/01-using.md index a2c8326b..27a59037 100644 --- a/docs/docs/03-tools/01-using.md +++ b/docs/docs/03-tools/01-using.md @@ -1,5 +1,9 @@ # Using Tools -In GPTScript, tools are used to extend the capabilities of a script. The idea behind them is that AI performs better when it has very specific instructions for a given task. Tools are a way to break-up the problem into smaller and more focused pieces where each tool is responsible for a specific task. A typical flow like this is to have a main script that imports a set of tools it can use to accomplish its goal. + +In GPTScript, tools are used to extend the capabilities of a script. +The idea behind them is that AI performs better when it has very specific instructions for a given task. +Tools are a way to break up the problem into smaller and more focused pieces where each tool is responsible for a specific task. +A typical pattern is to have a main script that imports a set of tools it can use to accomplish its goal. GPTScripts can utilize tools in one of three ways: 1. Built-in system tools @@ -7,6 +11,7 @@ GPTScripts can utilize tools in one of three ways: 3. External tools ### System Tools + All GPTScripts have access to system tools, like `sys.read` and `sys.write`, that can be used without any additional configuration. 
```yaml @@ -16,11 +21,14 @@ Read all of the files in my current directory, do not recurse over any subdirect ``` System tools are a set of core tools that come packaged with GPTScript by default. +To see a list of the system tools, run `gptscript --list-tools`. ### In-Script Tools -Things get more interesting when you start to use custom tools. -The most basic example of this is an in-script tool that is defined in the same file as the main script. This is useful for breaking up a large script into smaller, more manageable pieces. +Things get more interesting when you start to write your own tools. + +The most basic example of this is an in-script tool that is defined in the same file as the main script. +This is useful for breaking up a large script into smaller, more manageable pieces. ```yaml tools: random-number @@ -35,7 +43,9 @@ Select a number at random between 1 and 100 and return only the number. ``` ### External Tools -You can refer to GPTScript tool files that are served on the web or stored locally. This is useful for sharing tools across multiple scripts or for using tools that are not part of the core GPTScript distribution. + +You can refer to GPTScript tool files that are served on the web or stored locally. +This is useful for sharing tools across multiple scripts or for using tools that are not part of the core GPTScript distribution. ```yaml tools: https://get.gptscript.ai/echo.gpt @@ -51,9 +61,11 @@ tools: echo.gpt Echo the phrase "Hello, World!". ``` -You can also refer to OpenAPI definition files as though they were GPTScript tool files. GPTScript will treat each operation in the file as a separate tool. For more details, see [OpenAPI Tools](03-openapi.md). +You can also refer to OpenAPI definition files as though they were GPTScript tool files. +GPTScript will treat each operation in the file as a separate tool. For more details, see [OpenAPI Tools](03-openapi.md). 
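For instance, a script can point its `tools` directive at a local OpenAPI definition just as it would a `.gpt` file (the `petstore.yaml` filename and the prompt here are hypothetical):

```yaml
tools: ./petstore.yaml

List all of the available pets and tell me how many there are.
```

Each operation defined in `petstore.yaml` would become a separate tool that the LLM can choose to call.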
### Packaged Tools on GitHub + GPTScript tools can be packaged and shared on GitHub, and referred to by their GitHub URL. For example: ```yaml @@ -64,5 +76,9 @@ Generate an image of a city skyline at night and write the resulting image to a Take this image and write a description of it in the style of pirate. ``` +:::important +The GitHub URL must not be prefixed with `http://` or `https://`. +::: + When this script is run, GPTScript will locally clone the referenced GitHub repos and run the tools referenced inside them. For more info on how this works, see [Authoring Tools](02-authoring.md). diff --git a/docs/docs/03-tools/02-authoring.md b/docs/docs/03-tools/02-authoring.md index 186147a6..b8757440 100644 --- a/docs/docs/03-tools/02-authoring.md +++ b/docs/docs/03-tools/02-authoring.md @@ -2,11 +2,12 @@ You can author your own tools for your use or to share with others. The process for authoring a tool is as simple as creating a `tool.gpt` file in the root directory of your project. -This file is itself a GPTScript that defines the tool's name, description, and what it should do. +This file is a GPTScript that defines the tool's name, description, and what it should do. ## Quickstart -This is a guide for writing portable tools for GPTScript. The supported languages currently are Python, Node.js, and Go. This guide uses Python, but you can see documentation for the other language below. +This is a guide for writing portable tools for GPTScript. The supported languages currently are Python, Node.js, and Go. +This guide uses Python, but you can see documentation for the other languages below. ### 1. Write the code @@ -65,7 +66,11 @@ gptscript github.com// '{"url": "https://github.com"}' ## Sharing Tools -GPTScript is designed to easily export and import tools. Doing this is currently based entirely around the use of GitHub repositories. You can export a tool by creating a GitHub repository and ensuring you have the `tool.gpt` file in the root of the repository. 
You can then import the tool into a GPTScript by specifying the URL of the repository in the `tools` section of the script. For example, we can leverage the `image-generation` tool by adding the following line to a GPTScript: +GPTScript is designed to easily export and import tools. +Doing this is currently based entirely around the use of GitHub repositories. +You can export a tool by creating a GitHub repository and ensuring you have the `tool.gpt` file in the root of the repository. +You can then import the tool into a GPTScript by specifying the URL of the repository in the `tools` section of the script. +For example, we can leverage the `image-generation` tool by adding the following line to a GPTScript: ```yaml tools: github.com/gptscript-ai/dalle-image-generation @@ -73,9 +78,12 @@ tools: github.com/gptscript-ai/dalle-image-generation Generate an image of a city skyline at night. ``` -### Supported Languages +## Supported Languages -GPTScript can execute any binary that you ask it to. However, it can also manage the installation of a language runtime and dependencies for you. Currently this is only supported for a few languages. Here are the supported languages and examples of tools written in those languages: +GPTScript can execute any binary that you ask it to. +However, it can also manage the installation of a language runtime and dependencies for you. +Currently, this is only supported for a few languages. +Here are the supported languages and examples of tools written in those languages: | Language | Example | |-----------|----------------------------------------------------------------------------------------------------------------| @@ -84,10 +92,13 @@ GPTScript can execute any binary that you ask it to. 
However, it can also manage | `Golang` | [Search](https://github.com/gptscript-ai/search) - Use various providers to search the internet | -### Automatic Documentation +## Automatic Documentation -Each GPTScript tool is self-documented using the `tool.gpt` file. You can automatically generate documentation for your tools by visiting `tools.gptscript.ai/`. This documentation site allows others to easily search and explore the tools that have been created. +Each GPTScript tool is self-documented using the `tool.gpt` file. +You can automatically generate documentation for your tools by visiting `https://tools.gptscript.ai/`. +This documentation site allows others to easily search and explore the tools that have been created. -You can add more information about how to use your tool by adding an `examples` directory to your repository and adding a collection of `.gpt` files that demonstrate how to use your tool. These examples will be automatically included in the documentation. +You can add more information about how to use your tool by adding an `examples` directory to your repository and adding a collection of `.gpt` files that demonstrate how to use your tool. +These examples will be automatically included in the documentation. For more information and to explore existing tools, visit [tools.gptscript.ai](https://tools.gptscript.ai). diff --git a/docs/docs/03-tools/03-openapi.md b/docs/docs/03-tools/03-openapi.md index b40284b2..0b0f4961 100644 --- a/docs/docs/03-tools/03-openapi.md +++ b/docs/docs/03-tools/03-openapi.md @@ -42,7 +42,7 @@ Will be resolved as `https://api.example.com/v1`. ## Authentication :::warning -All authentication options will be completely ignored if the server uses HTTP and not HTTPS. +All authentication options will be completely ignored if the server uses HTTP and not HTTPS, unless the request is for `localhost` or 127.0.0.1. This is to protect users from accidentally sending credentials in plain text. 
HTTP is only OK, if it's on localhost/127.0.0.1. ::: diff --git a/docs/docs/03-tools/04-credential-tools.md b/docs/docs/03-tools/04-credential-tools.md index 9aaa7601..3e6a678a 100644 --- a/docs/docs/03-tools/04-credential-tools.md +++ b/docs/docs/03-tools/04-credential-tools.md @@ -13,7 +13,7 @@ Here is a simple example of a credential provider tool that uses the builtin `sy ```yaml # my-credential-tool.gpt -name: my-credential-tool +Name: my-credential-tool #!/usr/bin/env bash @@ -27,13 +27,17 @@ echo "{\"env\":{\"MY_ENV_VAR\":\"$credential\"}}" Continuing with the above example, this is how you can use it in a script: ```yaml -credentials: my-credential-tool.gpt +Credentials: my-credential-tool.gpt as myCred #!/usr/bin/env bash echo "The value of MY_ENV_VAR is $MY_ENV_VAR" ``` +:::note +GPTScript accepts `Cred:`, `Creds:`, `Credential:`, and `Credentials:` as valid directives. +::: + When you run the script, GPTScript will call the credential provider tool first, set the environment variables from its output, and then run the script body. The credential provider tool is called by GPTScript itself. GPTScript does not ask the LLM about it or even tell the LLM about the tool. @@ -41,11 +45,13 @@ LLM about it or even tell the LLM about the tool. If GPTScript has called the credential provider tool in the same context (more on that later), then it will use the stored credential instead of fetching it again. +To delete the credential that just got stored, run `gptscript credential delete myCred`. + You can also specify multiple credential tools for the same script, but they must be on separate lines: ```yaml -credentials: credential-tool-1.gpt -credentials: credential-tool-2.gpt +Credentials: credential-tool-1.gpt +Credentials: credential-tool-2.gpt (tool stuff here) ``` @@ -56,7 +62,7 @@ GPTScript also provides a generic credential tool (`github.com/gptscript-ai/cred where you only need to set one environment variable. 
Here is an example of how to use it: ```yaml -credentials: github.com/gptscript-ai/credential as myCredentialName with MY_ENV_VAR as env and "this message will be displayed to the user" as message and key as field +Credentials: github.com/gptscript-ai/credential as myCredentialName with MY_ENV_VAR as env and "this message will be displayed to the user" as message and key as field (tool stuff here) ``` @@ -66,24 +72,24 @@ the environment variable `MY_ENV_VAR` and stored in a credential called `myCrede See [the repo](https://github.com/gptscript-ai/credential) for more information. -## Credential Tool Arguments +## Credential Tool Parameters -A credential tool may define arguments. Here is an example: +A credential tool may define parameters. Here is an example: ```yaml -name: my-credential-tool -args: env: the environment variable to set -args: val: the value to set it to +Name: my-credential-tool +Parameter: env: the environment variable to set +Parameter: val: the value to set it to #!/usr/bin/env bash echo "{\"env\":{\"$ENV\":\"$VAL\"}}" ``` -When you reference this credential tool in another file, you can use syntax like this to set both arguments: +When you reference this credential tool in another file, you can use syntax like this to set both parameters: ```yaml -credential: my-credential-tool.gpt with MY_ENV_VAR as env and "my value" as val +Credential: my-credential-tool.gpt with MY_ENV_VAR as env and "my value" as val (tool stuff here) ``` @@ -92,7 +98,7 @@ In this example, the tool's output would be `{"env":{"MY_ENV_VAR":"my value"}}` ## Storing Credentials -By default, credentials are automatically stored in the credential store. Read the [main credentials page](../02-credentials.md) +By default, credentials are automatically stored in the credential store. Read the [main credentials page](../06-credentials.md) for more information about the credential store. :::note @@ -105,7 +111,7 @@ will not be stored in the credentials store. 
When you reference a credential tool in your script, you can give it an alias using the `as` keyword like this: ```yaml -credentials: my-credential-tool.gpt as myAlias +Credentials: my-credential-tool.gpt as myAlias (tool stuff here) ``` @@ -121,8 +127,7 @@ A credential context is basically a namespace for credentials. If you have multi you can switch between them by defining them in different credential contexts. The default context is called `default`, and this is used if none is specified. -You can set the credential context to use with the `--credential-context` flag when running GPTScript. For -example: +You can set the credential context to use with the `--credential-context` flag when running GPTScript. For example: ```bash gptscript --credential-context my-azure-workspace my-azure-script.gpt @@ -181,3 +186,21 @@ In this example, `toolA` provides the variables `ENV_VAR_1` and `ENV_VAR_2`, This will read the values of `ENV_VAR_1` through `ENV_VAR_4` from the current environment and set them for the credential. This is a direct mapping of environment variable names. **This is not recommended when overriding credentials for multiple tools that use the same environment variable names.** + +## Credential Refresh (Advanced) + +Some use cases (such as OAuth) may involve the need to refresh expired credentials. +To support this, your credential tool can return other fields besides `env` in its JSON output. +This is the full list of supported fields in the credential tool output: + +- `env` (type: object) - The environment variables to set. +- `expiresAt` (type: string, timestamp in RFC3339 format) - The time when the credential expires. +- `refreshToken` (type: string) - The refresh token to use to refresh the credential. + +When GPTScript tries to use a credential that has a defined `expiresAt` time, it will check if the credential has expired. 
+If the credential has expired, it will run the credential tool again, and the current value of the credential will be +set to the environment variable `GPTSCRIPT_EXISTING_CREDENTIAL` as a JSON string. This way, the credential tool can check for +that environment variable, and if it is set, get the refresh token from the existing credential and use it to refresh and return a new credential, +typically without user interaction. + +For an example of a tool that uses the refresh feature, see the [Gateway OAuth2 tool](https://github.com/gptscript-ai/gateway-oauth2). diff --git a/docs/docs/03-tools/05-context.md b/docs/docs/03-tools/05-context.md index 6dd22ed1..15e600e4 100644 --- a/docs/docs/03-tools/05-context.md +++ b/docs/docs/03-tools/05-context.md @@ -1,97 +1,94 @@ # Context -GPTScript provides a mechanism to share prompt information across many tools using the tool parameter `context`. It is used to provide additional information to the calling tool on when to use a specific tool by prepending the `context` to the instruction of the calling tool. +GPTScript provides a mechanism to share prompt information across many tools using the tool directive `Context`. +It is used to provide additional information to the calling tool on when to use a specific tool by prepending the context to the instruction of the calling tool. - Context can point to a static text or a GPTScript. -- Context tools are just regular GPTScript tools, and any valid gptscript field can be used. -- Exported tools from a context tool are made available to the calling tool. +- Context tools are just regular GPTScript tools, and any valid GPTScript fields can be used in them. +- Shared tools from a context tool are made available to the calling tool. - When context points to a GPTScript tool, output from the context tool gets prepended to the instruction of the calling tool. 
## Writing a Context Provider Tool as static text ```yaml -# my-search-context.txt +# my-context.txt -You are an expert web researcher with access to the Search tool.If the search tool fails to return any information stop execution of the script with message "Sorry! Search did not return any results". Feel free to get the contents of the returned URLs in order to get more information. Provide as much detail as you can. Also return the source of the search results. +You have access to run commands on the user's system. Please ask for confirmation from the user before running a command. ``` -## Using a Context Provider Tool +## Using a Context Tool -Continuing with the above example, this is how you can use the same context in tools that uses different search providers: +Continuing with the above example, this is how you can use the same context in different tools: ```yaml -# my-search-duduckgo.gpt -context: ./my-search-context.txt -tools: github.com/gptscript-ai/search/duckduckgo,sys.http.html2text - -What are some of the most popular tourist destinations in Scotland, and how many people visit them each year? +Context: ./my-context.txt +Tools: sys.exec, sys.write +Which processes on my system are using the most memory? Write their PIDs to a file called pids.txt. ``` ```yaml -# my-search-brave.gpt -context: ./my-search-context.txt -tools: github.com/gptscript-ai/search/brave,sys.http.html2text - -List out some of the main actors in the Christopher Nolan movie Inception, as well as the names of the other Christopher Nolan movies they have appeared in. +Context: ./my-context.txt +Tools: sys.exec +Which file in my current directory is the largest? ``` - ## Context Provider Tool with exported tools Here is a simple example of a context provider tool that provides additional context to search tool: ```yaml -# my-search-context-tool.gpt -share tools: sys.http.html2text? 
+# my-context-tool.gpt +Share Tools: sys.exec -#!/bin/bash -echo You are an expert web researcher with access to the Search tool.If the search tool fails to return any information stop execution of the script with message "Sorry! Search did not return any results". Feel free to get the contents of the returned URLs in order to get more information. Provide as much detail as you can. Also return the source of the search results. +#!sys.echo +You have access to run commands on the user's system. Please ask for confirmation from the user before running a command. ``` +The `#!sys.echo` at the start of the tool body tells GPTScript to return everything after it as the output of the tool. + Continuing with the above example, this is how you can use it in a script: ```yaml -context: ./my-search-context-tool.gpt -tools: github.com/gptscript-ai/search/duckduckgo - -What are some of the most popular tourist destinations in Scotland, and how many people visit them each year? +Context: ./my-context-tool.gpt +Tools: sys.write +Which processes on my system are using the most memory? Write their PIDs to a file called pids.txt. ``` When you run this script, GPTScript will use the output from the context tool and add it to the user message along with the existing prompt in this tool to provide additional context to LLM. -## Context Provider Tool with args +## Context Provider Tool with Parameters -Here is an example of a context provider tool that uses args to decide which search tool to use when answering the user provided queries: +Here is an example of a context provider tool that takes a parameter: ```yaml -# context_with_arg.gpt -share tools: github.com/gptscript-ai/search/duckduckgo, github.com/gptscript-ai/search/brave, sys.http.html2text? 
-args: search_tool: tool to search with +# context_with_param.gpt +Param: tone: the tone to use when responding to the user's request #!/bin/bash -echo You are an expert web researcher with access to the ${search_tool} Search tool.If the search tool fails to return any information stop execution of the script with message "Sorry! Search did not return any results". Feel free to get the contents of the returned URLs in order to get more information. Provide as much detail as you can. Also return the source of the search results. +echo "Respond to the user's request in a ${tone} tone." ``` Continuing with the above example, this is how you can use it in a script: ```yaml -# my_context_with_arg.gpt -context: ./context_with_arg.gpt with ${search} as search_tool -args: search: Search tool to use +# tool.gpt +Context: ./context_with_param.gpt with ${tone} as tone +Param: tone: the tone to use when responding to the user's request +Tools: sys.http.html2text -What are some of the most popular tourist destinations in Scotland, and how many people visit them each year? +What are the top stories on Hacker News right now? ``` -This script can be used to search with `brave` or `duckduckdb` tools depending on the search parameter passed to the tool. -Example usage for using brave search tool: +Here's how you can run the script and define the tone parameter: + ```yaml -gptscript --disable-cache my_context_with_arg.gpt '{"search": "brave"}' +gptscript tool.gpt '{"tone": "obnoxious"}' ``` diff --git a/docs/docs/03-tools/06-how-it-works.md b/docs/docs/03-tools/06-how-it-works.md index 29bf764d..31dd17ce 100644 --- a/docs/docs/03-tools/06-how-it-works.md +++ b/docs/docs/03-tools/06-how-it-works.md @@ -1,32 +1,33 @@ # How it works -**_GPTScript is composed of tools._** Each tool performs a series of actions similar to a function. Tools have other tools available to them that can be invoked similar to a function call. 
While similar to a function, the tools are -primarily implemented with a natural language prompt. **_The interaction of the tools is determined by the AI model_**, -the model determines if the tool needs to be invoked and what arguments to pass. Tools are intended to be implemented -with a natural language prompt but can also be implemented with a command or HTTP call. +**_GPTScript is fundamentally composed of tools._** Each tool is either a natural language prompt for the LLM, or is +programmatic (i.e. a command, script, or program to be run). Tools that use a natural language prompt can also invoke +other tools, similar to function calls. The LLM decides when a tool needs to be invoked and sets the parameters to pass to it. ## Example -Below are two tool definitions, separated by `---`. The first tool does not require a name or description, but -every tool after name and description are required. The first tool, has the parameter `tools: bob` meaning that the tool named `bob` is available to be called if needed. +Below are two tool definitions, separated by `---`. +The first tool in the file (often referred to as the "entrypoint tool") does not need a name and description, +but a name is required for all other tools in the file, and a description is recommended. +The entrypoint tool also has the line `Tools: bob` meaning that the tool named `bob` is available to be called if needed. ```yaml -tools: bob +Tools: bob Ask Bob how he is doing and let me know exactly what he said. --- -name: bob -description: I'm Bob, a friendly guy. -args: question: The question to ask Bob. +Name: bob +Description: I'm Bob, a friendly guy. +Param: question: The question to ask Bob. When asked how I am doing, respond with "Thanks for asking "${question}", I'm doing great fellow friendly AI tool!" 
``` Put the above content in a file named `bob.gpt` and run the following command: -```shell -$ gptscript bob.gpt +```bash +gptscript bob.gpt ``` ``` @@ -35,8 +36,8 @@ OUTPUT: Bob said, "Thanks for asking 'How are you doing?', I'm doing great fellow friendly AI tool!" ``` -Tools can be implemented by invoking a program instead of a natural language prompt. The below -example is the same as the previous example but implements Bob using python. +Tools can be implemented by invoking a program instead of a natural language prompt. +The below example is the same as the previous example but implements Bob using Python. ```yaml Tools: bob @@ -46,7 +47,7 @@ Ask Bob how he is doing and let me know exactly what he said. --- Name: bob Description: I'm Bob, a friendly guy. -Args: question: The question to ask Bob. +Param: question: The question to ask Bob. #!python3 @@ -55,6 +56,4 @@ import os print(f"Thanks for asking {os.environ['question']}, I'm doing great fellow friendly AI tool!") ``` -With these basic building blocks you can create complex scripts with AI interacting with AI, your local system, data, -or external services. - +With these basic building blocks you can create complex scripts with AI interacting with AI, your local system, data, or external services. diff --git a/docs/docs/03-tools/07-gpt-file-reference.md b/docs/docs/03-tools/07-gpt-file-reference.md index 6734bc59..fdc3b363 100644 --- a/docs/docs/03-tools/07-gpt-file-reference.md +++ b/docs/docs/03-tools/07-gpt-file-reference.md @@ -23,10 +23,10 @@ Do more sample tool stuff. ## Tool Definition -A tool starts with a preamble that defines the tool's name, description, args, available tools and additional parameters. +A tool starts with a preamble that defines the tool's name, description, parameters, available tools, and additional directives. The preamble is followed by the tool's body, which contains the instructions for the tool. 
Comments in the preamble are lines starting with `#` and are ignored by the parser. Comments are not really encouraged -as the text is typically more useful in the description, argument descriptions or instructions. +as the text is typically more useful in the description, parameter descriptions, or body. ```yaml Name: tool-name @@ -34,14 +34,15 @@ Name: tool-name Description: Tool description # This tool can invoke tool1 or tool2 if needed Tools: tool1, tool2 -Args: arg1: The description of arg1 +Param: param1: The description of param1 Tool instructions go here. ``` -## Tool Parameters +## Tool Directives -Tool parameters are key-value pairs defined at the beginning of a tool block, before any instructional text. They are specified in the format `key: value`. The parser recognizes the following keys (case-insensitive and spaces are ignored): +Tool directives are key-value pairs defined at the beginning of a tool block, before the tool body. +They are specified in the format `Key: value`. The parser recognizes the following keys (case-insensitive and spaces are ignored): | Key | Description | |----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------| @@ -52,7 +53,7 @@ Tool parameters are key-value pairs defined at the beginning of a tool block, be | `Internal Prompt` | Setting this to `false` will disable the built-in system prompt for this tool. | | `Tools` | A comma-separated list of tools that are available to be called by this tool. | | `Global Tools` | A comma-separated list of tools that are available to be called by all tools. | -| `Parameter` / `Args` | Arguments for the tool. Each argument is defined in the format `arg-name: description`. | +| `Parameter` / `Args` | Parameters for the tool. Each parameter is defined in the format `param-name: description`. 
| | `Max Tokens` | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. | | `JSON Response` | Setting to `true` will cause the LLM to respond in a JSON format. If you set true you must also include instructions in the tool. | | `Temperature` | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. | @@ -63,26 +64,23 @@ Tool parameters are key-value pairs defined at the beginning of a tool block, be | `Context` | A comma-separated list of context tools available to the tool. | | `Share Context` | A comma-separated list of context tools shared by this tool with any tool including this tool in its context. | - - ## Tool Body -The tool body contains the instructions for the tool which can be a natural language prompt or -a command to execute. Commands must start with `#!` followed by the interpreter (e.g. `#!/bin/bash`, `#!python3`) -a text that will be placed in a file and passed to the interpreter. Arguments can be references in the instructions -using the format `${arg1}`. +The tool body contains the instructions for the tool. It can be a natural language prompt or +a command to execute. Commands must start with `#!` followed by the interpreter (e.g. `#!/bin/bash`, `#!python3`). +Parameters can be referenced in the body using the format `${param1}`. 
```yaml -name: echo-ai -description: A tool that echos the input -args: input: The input +Name: echo-ai +Description: A tool that echoes the input +Parameter: input: The input Just return only "${input}" --- -name: echo-command -description: A tool that echos the input -args: input: The input +Name: echo-command +Description: A tool that echoes the input +Parameter: input: The input #!/bin/bash diff --git a/docs/docs/05-alternative-model-providers.md b/docs/docs/05-alternative-model-providers.md index 51818546..2c05e7dd 100644 --- a/docs/docs/05-alternative-model-providers.md +++ b/docs/docs/05-alternative-model-providers.md @@ -2,7 +2,7 @@ ## Usage -GPTScript can be used against alternative models that expose an OpenAI compatible API or have a provider shim available. +GPTScript can be used against alternative models that expose an OpenAI-compatible API or have a provider available. ### Using a model with an OpenAI compatible API @@ -13,9 +13,7 @@ Say hello world ``` :::note - - Mistral's La Plateforme has an OpenAI compatible API, but the model does not behave identically to gpt-4. For that reason, we also have a provider for it that might get better results in some cases. - +Mistral's La Plateforme has an OpenAI-compatible API, but the model does not behave identically to gpt-4. For that reason, we also have a provider for it that might get better results in some cases. ::: ### Using a model that requires a provider @@ -29,15 +27,15 @@ For OpenAI compatible providers, GPTScript will look for an API key to be configured with the prefix `GPTSCRIPT_PROVIDER_`, the base domain converted to environment variable format, and a suffix of `_API_KEY`.
-As an example if you are using `mistral-large-latest from https://api.mistral.ai/v1`, the environment variable would -be `GPTSCRIPT_PROVIDER_API_MISTRAL_AI_API_KEY` +For example, if you are using `mistral-large-latest` from `https://api.mistral.ai/v1`, the environment variable would +be `GPTSCRIPT_PROVIDER_API_MISTRAL_AI_API_KEY`. -Each provider shim has different requirements for authentication. Please check the readme for the provider you are +Each provider has different requirements for authentication. Please check the readme for the provider you are trying to use. ## Available Model Providers -The following shims are currently available: +The following providers are currently available: * [github.com/gptscript-ai/azure-openai-provider](https://github.com/gptscript-ai/azure-openai-provider) * [github.com/gptscript-ai/azure-other-provider](https://github.com/gptscript-ai/azure-other-provider) @@ -52,16 +50,16 @@ The following shims are currently available: For any provider that supports listing models, you can use this command: ```bash -# With a shim +# With a provider gptscript --list-models github.com/gptscript-ai/claude3-anthropic-provider -# To OpenAI compatible endpoint +# With an OpenAI-compatible endpoint gptscript --list-models https://api.mistral.ai/v1 ``` ## Compatibility -While the shims provide support for using GPTScript with other models, the effectiveness of using a +While the providers allow GPTScript to work with other models, the effectiveness of using a different model will depend on a combination of prompt engineering and the quality of the model. You may need to change wording or add more description if you are not getting the results you want. In some cases, the model might not be capable of intelligently handling the complex function calls.
diff --git a/docs/docs/02-credentials.md b/docs/docs/06-credentials.md similarity index 96% rename from docs/docs/02-credentials.md rename to docs/docs/06-credentials.md index 5ba349b9..1d3431de 100644 --- a/docs/docs/02-credentials.md +++ b/docs/docs/06-credentials.md @@ -18,7 +18,9 @@ The configuration file is located in the following location based on your operat - macOS: `$HOME/Library/Application Support/gptscript/config.json` - Linux: `$XDG_CONFIG_HOME/gptscript/config.json` -(Note: if you set the `XDG_CONFIG_HOME` environment variable on macOS, then the same path as Linux will be used.) +:::note +If you set the `XDG_CONFIG_HOME` environment variable on macOS, then the same path as Linux will be used. +::: The configured credential store will be automatically downloaded and compiled from the [gptscript-ai/gptscript-credential-helpers](https://github.com/gptscript-ai/gptscript-credential-helpers) repository, other than the `file` store, which is built-in to GPTScript itself. diff --git a/docs/docs/09-faqs.md b/docs/docs/09-faqs.md index 00f26700..e8e15cb5 100644 --- a/docs/docs/09-faqs.md +++ b/docs/docs/09-faqs.md @@ -2,14 +2,15 @@ ### I don't have Homebrew, how can I install GPTScript? -On MacOS and Linux, you can alternatively install via: `curl https://get.gptscript.ai/install.sh | sh` +On macOS and Linux, you can alternatively install via: `curl https://get.gptscript.ai/install.sh | sh` On all supported systems, you download and install the archive for your platform and architecture from the [releases page](https://github.com/gptscript-ai/gptscript/releases). - ### Does GPTScript have an SDK or API I can program against? -Currently, there are three SDKs being maintained: [Python](https://github.com/gptscript-ai/py-gptscript), [Node](https://github.com/gptscript-ai/node-gptscript), and [Go](https://github.com/gptscript-ai/go-gptscript). They are currently under development and are being iterated on relatively rapidly. 
The READMEs in each repository contain the most up-to-date documentation for the functionality of each. +Currently, there are three SDKs being maintained: [Python](https://github.com/gptscript-ai/py-gptscript), [Node](https://github.com/gptscript-ai/node-gptscript), and [Go](https://github.com/gptscript-ai/go-gptscript). +They are under development and are being iterated on relatively rapidly. +The READMEs in each repository contain the most up-to-date documentation for the functionality of each. ### I see there's a --disable-cache flag. How does caching working in GPTScript? @@ -17,27 +18,43 @@ GPTScript leverages caching to speed up execution and reduce LLM costs. There ar - Git commit hash lookups for tools - LLM responses -Caching is enabled for both of these by default. It can be disabled via the `--disable-cache` flag. Below is an explanation of how these areas behave when caching is enabled and disabled. +Caching is enabled for both of these by default. It can be disabled via the `--disable-cache` flag. +Below is an explanation of how these areas behave when caching is enabled and disabled. #### Git commit hash lookups for tools -When a remote tool or context is included in your script (like so: `Tools: github.com/gptscript-ai/browser`) and then invoked during script execution, GPTScript will pull the Git repo for that tool and build it. The tool’s repo and build will be stored in your system’s cache directory (at [$XDG_CACHE_HOME](https://pkg.go.dev/os#UserCacheDir)/gptscript/repos). Subsequent invocations of the tool leverage that cache. When the cache is enabled, GPTScript will only check for a newer version of the tool once an hour; if an hour hasn’t passed since the last check, it will just use the one it has. If this is the first invocation and the tool doesn’t yet exist in the cache, it will be pulled and built as normal. 
+When a remote tool or context is included in your script (like so: `Tools: github.com/gptscript-ai/browser`) and then invoked during script execution,
+GPTScript will pull the Git repo for that tool and build it.
+The tool's repo and build will be stored in your system's cache directory (at [$XDG_CACHE_HOME](https://pkg.go.dev/os#UserCacheDir)/gptscript/repos).
+Subsequent invocations of the tool leverage that cache.
+When the cache is enabled, GPTScript will only check for a newer version of the tool once an hour;
+if an hour hasn't passed since the last check, it will just use the one it has.
+If this is the first invocation and the tool doesn't yet exist in the cache, it will be pulled and built as normal.

-When the cache is disabled, GPTScript will check that it has the latest version of the tool (meaning the latest git commit for the repo) on every single invocation of the tool. If GPTScript determines it already has the latest version, that build will be used as-is. In other words, disabling the cache DOES NOT force GPTScript to rebuild the tool, it only forces GPTScript to always check if it has the latest version.
+When the cache is disabled, GPTScript will check that it has the latest version of the tool (meaning the latest git commit for the repo) on every single invocation of the tool.
+If GPTScript determines it already has the latest version, that build will be used as-is.
+In other words, disabling the cache DOES NOT force GPTScript to rebuild the tool; it only forces GPTScript to always check if it has the latest version.

#### LLM responses

-With regards to LLM responses, when the cache is enabled GPTScript will cache the LLM’s response to a chat completion request. Each response is stored as a gob-encoded file in $XDG_CACHE_HOME/gptscript, where the file name is a hash of the chat completion request.
+In regard to LLM responses, when the cache is enabled, GPTScript will cache the LLM's response to a chat completion request.
+Each response is stored as a gob-encoded file in `$XDG_CACHE_HOME/gptscript`, where the file name is a hash of the chat completion request.

-It is important to note that all [messages in chat completion request](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) are used to generate the hash that is used as the file name. This means that every message between user and LLM affects the cache lookup. So, when using GPTScript in chat mode, it is very unlikely you’ll receive a cached LLM response. Conversely, non-chat GPTScript automations are much more likely to be consistent and thus make use of cached LLM responses.
+It is important to note that all [messages in the chat completion request](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages) are used to generate the hash that is used as the file name.
+This means that every message between user and LLM affects the cache lookup.
+So, when using GPTScript in chat mode, it is very unlikely you'll receive a cached LLM response.
+Conversely, non-chat GPTScript automations are much more likely to be consistent and thus make use of cached LLM responses.

### I see there's a --workspace flag. How do I make use of that?

-Every invocation of GPTScript has a workspace directory available to it. By default, this directory is a one-off temp directory, but you can override this and explicitly set a workspace using the `--workspace` flag, like so:
+Every invocation of GPTScript has a workspace directory available to it.
+By default, this directory is a one-off temp directory, but you can override this and explicitly set a workspace using the `--workspace` flag, like so:
+
```
gptscript --workspace . my-script.gpt
```
-In the above example, the user’s current directory (denoted by `.`) will be set as the workspace. Both absolute and relative paths are supported.
+
+In the above example, the user's current directory (denoted by `.`) will be set as the workspace.
+Both absolute and relative paths are supported.

Regardless of whether it is set implicitly or explicitly, the workspace is then made available to the script execution as the `GPTSCRIPT_WORKSPACE_DIR` environment variable.

@@ -45,17 +62,24 @@ Regardless of whether it is set implicitly or explicitly, the workspace is then
GPTScript does not force scripts or tools to write to, read from, or otherwise use the workspace. The tools must decide to make use of the workspace environment variable.
:::

-To make prompt-based tools workspace aware, you can add our workspace context, like so:
+To make prompt-based tools workspace-aware, you can reference our workspace context tool, like so:
+
```
Context: github.com/gptscript-ai/context/workspace
```
-This tells the LLM (by way of a [system message](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)) what the workspace directory is, what its initial contents are, and that if it decides to create a file or directory, it should do so in the workspace directory. This will not, however, have any impact on code-based tools (ie python, bash, or go tools). Such tools will have the `GPTSCRIPT_WORKSPACE_DIR` environment variable available to them, but they must be written in such a way that they make use of it.
-This context also automatically shares the `sys.ls`, `sys.read`, and `sys.write` tools with the tool that is using it as a context. This is because if a tool intends to interact with the workspace, it minimally needs these tools.
+This tells the LLM (by way of a [system message](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)) what the workspace directory is, what its initial contents are, and that if it decides to create a file or directory, it should do so in the workspace directory.
+This will not, however, have any impact on code-based tools (i.e. Python, Bash, or Go tools).
+Such tools will have the `GPTSCRIPT_WORKSPACE_DIR` environment variable available to them, but they must be written in such a way that they make use of it.
+
+This context also automatically shares the `sys.ls`, `sys.read`, and `sys.write` tools with the tool that is using it as a context.
+This is because if a tool intends to interact with the workspace, it minimally needs these tools.

### I'm hitting GitHub's rate limit for unauthenticated requests when using GPTScript.

-By default, GPTScript makes unauthenticated requests to GitHub when pulling tools. Since GitHub's rate limits for unauthenticated requests are fairly low, running into them when developing with GPTScript is a common issue. To avoid this, you can get GPTScript to make authenticated requests -- which have higher rate limits -- by setting the `GITHUB_AUTH_TOKEN` environment variable to your github account's PAT (Personal Access Token).
+By default, GPTScript makes unauthenticated requests to GitHub when pulling tools.
+Since GitHub's rate limits for unauthenticated requests are fairly low, running into them when developing with GPTScript is a common issue.
+To avoid this, you can get GPTScript to make authenticated requests, which have higher rate limits, by setting the `GITHUB_AUTH_TOKEN` environment variable to your GitHub account's PAT (Personal Access Token).

If you're already authenticated with the `gh` CLI, you can use its token by running:

```bash