diff --git a/docs/docs/02-examples/01-cli.md b/docs/docs/02-examples/01-cli.md
index 7a59f592..17c67244 100644
--- a/docs/docs/02-examples/01-cli.md
+++ b/docs/docs/02-examples/01-cli.md
@@ -157,7 +157,7 @@
 Agents: k8s-agent, github-agent
 Context: shared-context
 Chat: true
 
-Help the user acomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You donlt need to start off by guiding them.
+Help the user accomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You don't need to start off by guiding them.
 ```
 By being at the top of the file, this tool will serve as the script's entrypoint. Here are the parts of this tool that are worth additional explanation:
@@ -201,14 +201,14 @@
 Context: shared-context
 Agents: k8s-agent, github-agent
 Chat: true
 
-Help the user acomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You donlt need to start off by guiding them.
+Help the user accomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You don't need to start off by guiding them.
 ---
 Name: k8s-agent
 Description: An agent that can help you with your Kubernetes cluster by executing kubectl commands
 Context: shared-context
 Tools: sys.exec
-Parameter: task: The kubectl releated task to accomplish
+Parameter: task: The kubectl related task to accomplish
 Chat: true
 
 You have the kubectl cli available to you. Use it to accomplish the tasks that the user asks of you.
@@ -268,15 +268,15 @@ By now you should notice a simple pattern emerging that you can follow to add yo ``` Name: {your cli}-agent Description: An agent to help you with {your taks} related tasks using the gh cli -Context: {here's your biggest decsion to make}, shared-context +Context: {here's your biggest decision to make}, shared-context Tools: sys.exec -Parameter: task: The {your task}The GitHub task to accomplish +Parameter: task: The {your task} to accomplish Chat: true You have the {your cli} cli available to you. Use it to accomplish the tasks that the user asks of you. ``` -You can drop in your task and CLI and have a fairly functional CLI-based chat agent. The biggest decision you'll need to make is what and how much context to give your agent. For well-known for CLIs/technologies like kubectl and Kubernetes, you probably won't need a custom context. For custom CLIs, you'll definitely need to help the LLM out. The best approach is to experiment and see what works best. +You can drop in your task and CLI and have a fairly functional CLI-based chat agent. The biggest decision you'll need to make is what and how much context to give your agent. For well-known CLIs/technologies like kubectl and Kubernetes, you probably won't need a custom context. For custom CLIs, you'll definitely need to help the LLM out. The best approach is to experiment and see what works best. ## Next steps diff --git a/docs/docs/02-examples/04-local-files.md b/docs/docs/02-examples/04-local-files.md index 522deb01..33471afa 100644 --- a/docs/docs/02-examples/04-local-files.md +++ b/docs/docs/02-examples/04-local-files.md @@ -45,11 +45,11 @@ This is actually the entirety of the script. We're packing a lot of power into j The **Tools: ...** stanza pulls two useful tools into this assistant. 
-The [structured-data-querier](https://github.com/gptscript-ai/structured-data-querier) makes it possible to query csv, xlsx, and json files as though they SQL databases (using an application called [DuckDB](https://duckdb.org/)). This is extremely powerful when combined with the power of LLMs because it let's you ask natural language questions that the LLM can then translate to SQL.
+The [structured-data-querier](https://github.com/gptscript-ai/structured-data-querier) makes it possible to query csv, xlsx, and json files as though they were SQL databases (using an application called [DuckDB](https://duckdb.org/)). This is extremely powerful when combined with the power of LLMs because it lets you ask natural language questions that the LLM can then translate to SQL.
 
 The [pdf-reader](https://github.com/gptscript-ai/pdf-reader) isn't quite as exciting, but still useful. It parses and reads PDFs and returns the contents to the LLM. This will put the entire contents in your chat context, so it's not appropriate for extremely large PDFs, but it's handy for smaller ones.
 
-**Context: github.com/gptscript-ai/context/workspace** introduces a context tool makes this assistant "workspace" aware. It's description reads:
+**Context: github.com/gptscript-ai/context/workspace** introduces a context tool that makes this assistant "workspace" aware. Its description reads:
 
 > Adds the workspace and tools needed to access the workspace to the current context
 
 That translates to telling the LLM what the workspace directory is and instructing it to use that directory for reading and writing files. As we saw above, you can specify a workspace like this:
diff --git a/docs/docs/03-tools/03-openapi.md b/docs/docs/03-tools/03-openapi.md
index 2069b331..b40284b2 100644
--- a/docs/docs/03-tools/03-openapi.md
+++ b/docs/docs/03-tools/03-openapi.md
@@ -1,6 +1,6 @@
 # OpenAPI Tools
 
-GPTScript can treat OpenAPI v3 definition files as though they were tool files.
+GPTScript can treat OpenAPI v2 and v3 definition files as though they were tool files.
 
 Each operation (a path and HTTP method) in the file will become a simple tool that makes an HTTP request.
 GPTScript will automatically and internally generate the necessary code to make the request and parse the response.
@@ -44,6 +44,7 @@ Will be resolved as `https://api.example.com/v1`.
 
 :::warning
 All authentication options will be completely ignored if the server uses HTTP and not HTTPS. This is to protect users from accidentally sending credentials in plain text.
+HTTP is only allowed when the server is localhost or 127.0.0.1.
 :::
 
 ### 1. Security Schemes
diff --git a/docs/docs/03-tools/05-context.md b/docs/docs/03-tools/05-context.md
index 3a4e8c15..6dd22ed1 100644
--- a/docs/docs/03-tools/05-context.md
+++ b/docs/docs/03-tools/05-context.md
@@ -45,7 +45,7 @@ Here is a simple example of a context provider tool that provides additional con
 
 ```yaml
 # my-search-context-tool.gpt
-export: sys.http.html2text?
+share tools: sys.http.html2text?
 
 #!/bin/bash
 echo You are an expert web researcher with access to the Search tool.If the search tool fails to return any information stop execution of the script with message "Sorry! Search did not return any results". Feel free to get the contents of the returned URLs in order to get more information. Provide as much detail as you can. Also return the source of the search results.
@@ -71,7 +71,7 @@ Here is an example of a context provider tool that uses args to decide which sea
 
 ```yaml
 # context_with_arg.gpt
-export: github.com/gptscript-ai/search/duckduckgo, github.com/gptscript-ai/search/brave, sys.http.html2text?
+share tools: github.com/gptscript-ai/search/duckduckgo, github.com/gptscript-ai/search/brave, sys.http.html2text?
 args: search_tool: tool to search with
 
 #!/bin/bash
@@ -84,7 +84,7 @@ Continuing with the above example, this is how you can use it in a script:
 
 ```yaml
 # my_context_with_arg.gpt
 context: ./context_with_arg.gpt with ${search} as search_tool
-Args: search: Search tool to use
+args: search: Search tool to use
 
 What are some of the most popular tourist destinations in Scotland, and how many people visit them each year?
diff --git a/docs/docs/03-tools/06-how-it-works.md b/docs/docs/03-tools/06-how-it-works.md
index c6538395..29bf764d 100644
--- a/docs/docs/03-tools/06-how-it-works.md
+++ b/docs/docs/03-tools/06-how-it-works.md
@@ -1,7 +1,6 @@
 # How it works
 
-**_GPTScript is composed of tools._** Each tool performs a series of actions similar to a function. Tools have available
-to them other tools that can be invoked similar to a function call. While similar to a function, the tools are
+**_GPTScript is composed of tools._** Each tool performs a series of actions similar to a function. Tools have other tools available to them that can be invoked similarly to a function call. While similar to a function, the tools are
 primarily implemented with a natural language prompt.
 **_The interaction of the tools is determined by the AI model_**, the model determines if the tool needs to be invoked and what arguments to pass.
 Tools are intended to be implemented with a natural language prompt but can also be implemented with a command or HTTP call.
diff --git a/docs/docs/03-tools/07-gpt-file-reference.md b/docs/docs/03-tools/07-gpt-file-reference.md
index c6207ad2..6734bc59 100644
--- a/docs/docs/03-tools/07-gpt-file-reference.md
+++ b/docs/docs/03-tools/07-gpt-file-reference.md
@@ -43,21 +43,25 @@ Tool instructions go here.
 
 Tool parameters are key-value pairs defined at the beginning of a tool block, before any instructional text. They are specified in the format `key: value`.
The parser recognizes the following keys (case-insensitive and spaces are ignored): -| Key | Description | -|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------| -| `Name` | The name of the tool. | -| `Model Name` | The LLM model to use, by default it uses "gpt-4-turbo". | -| `Global Model Name`| The LLM model to use for all the tools. | -| `Description` | The description of the tool. It is important that this properly describes the tool's purpose as the description is used by the LLM. | -| `Internal Prompt` | Setting this to `false` will disable the built-in system prompt for this tool. | -| `Tools` | A comma-separated list of tools that are available to be called by this tool. | -| `Global Tools` | A comma-separated list of tools that are available to be called by all tools. | -| `Credentials` | A comma-separated list of credential tools to run before the main tool. | -| `Args` | Arguments for the tool. Each argument is defined in the format `arg-name: description`. | -| `Max Tokens` | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. | -| `JSON Response` | Setting to `true` will cause the LLM to respond in a JSON format. If you set true you must also include instructions in the tool. | -| `Temperature` | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. | -| `Chat` | Setting it to `true` will enable an interactive chat session for the tool. | +| Key | Description | +|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------| +| `Name` | The name of the tool. | +| `Model Name` | The LLM model to use, by default it uses "gpt-4-turbo". | +| `Global Model Name` | The LLM model to use for all the tools. 
| +| `Description` | The description of the tool. It is important that this properly describes the tool's purpose as the description is used by the LLM. | +| `Internal Prompt` | Setting this to `false` will disable the built-in system prompt for this tool. | +| `Tools` | A comma-separated list of tools that are available to be called by this tool. | +| `Global Tools` | A comma-separated list of tools that are available to be called by all tools. | +| `Parameter` / `Args` | Arguments for the tool. Each argument is defined in the format `arg-name: description`. | +| `Max Tokens` | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. | +| `JSON Response` | Setting to `true` will cause the LLM to respond in a JSON format. If you set true you must also include instructions in the tool. | +| `Temperature` | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. | +| `Chat` | Setting it to `true` will enable an interactive chat session for the tool. | +| `Credential` | Credential tool to call to set credentials as environment variables before doing anything else. One per line. | +| `Agents` | A comma-separated list of agents that are available to the tool. | +| `Share Tools` | A comma-separated list of tools that are shared by the tool. | +| `Context` | A comma-separated list of context tools available to the tool. | +| `Share Context` | A comma-separated list of context tools shared by this tool with any tool including this tool in its context. 
| diff --git a/docs/docs/05-alternative-model-providers.md b/docs/docs/05-alternative-model-providers.md index aa637136..51818546 100644 --- a/docs/docs/05-alternative-model-providers.md +++ b/docs/docs/05-alternative-model-providers.md @@ -12,9 +12,11 @@ model: mistral-large-latest from https://api.mistral.ai/v1 Say hello world ``` -#### Note -Mistral's La Plateforme has an OpenAI compatible API, but the model does not behave identically to gpt-4. For that reason, we also have a provider for it that might get better results in some cases. +:::note + Mistral's La Plateforme has an OpenAI compatible API, but the model does not behave identically to gpt-4. For that reason, we also have a provider for it that might get better results in some cases. + +::: ### Using a model that requires a provider ```gptscript