diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md index d2331b4c60b9..b1f7caf39f96 100644 --- a/CODE_OF_CONDUCT.md +++ b/CODE_OF_CONDUCT.md @@ -1,4 +1,4 @@ -# Code of Conduct for auto-gpt +# Code of Conduct for Auto-GPT ## 1. Purpose @@ -37,4 +37,3 @@ This Code of Conduct is adapted from the [Contributor Covenant](https://www.cont ## 6. Contact If you have any questions or concerns, please contact the project maintainers. - diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index cdb84ca32a5f..10043ecb6ac6 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -4,21 +4,11 @@ First of all, thank you for considering contributing to our project! We apprecia This document provides guidelines and best practices to help you contribute effectively. -## Table of Contents - -- [Code of Conduct](#code-of-conduct) -- [Getting Started](#getting-started) -- [How to Contribute](#how-to-contribute) - - [Reporting Bugs](#reporting-bugs) - - [Suggesting Enhancements](#suggesting-enhancements) - - [Submitting Pull Requests](#submitting-pull-requests) -- [Style Guidelines](#style-guidelines) - - [Code Formatting](#code-formatting) - - [Pre-Commit Hooks](#pre-commit-hooks) - ## Code of Conduct -By participating in this project, you agree to abide by our [Code of Conduct](CODE_OF_CONDUCT.md). Please read it to understand the expectations we have for everyone who contributes to this project. +By participating in this project, you agree to abide by our [Code of Conduct]. Please read it to understand the expectations we have for everyone who contributes to this project. + +[Code of Conduct]: https://significant-gravitas.github.io/Auto-GPT/code-of-conduct.md ## πŸ“’ A Quick Word Right now we will not be accepting any Contributions that add non-essential commands to Auto-GPT. @@ -84,6 +74,7 @@ isort . ``` ### Pre-Commit Hooks + We use pre-commit hooks to ensure that code formatting and other checks are performed automatically before each commit. To set up pre-commit hooks for this project, follow these steps: Install the pre-commit package using pip: @@ -103,7 +94,14 @@ If you encounter any issues or have questions, feel free to reach out to the mai Happy coding, and once again, thank you for your contributions! Maintainers will look at PR that have no merge conflicts when deciding what to add to the project. Make sure your PR shows up here: +https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-label%3Aconflicts + +## Testing your changes + +If you add or change code, make sure the updated code is covered by tests. + +To increase coverage if necessary, [write tests using `pytest`]. -https://github.com/Significant-Gravitas/Auto-GPT/pulls?q=is%3Apr+is%3Aopen+-is%3Aconflict+ +For more info on running tests, please refer to ["Running tests"](https://significant-gravitas.github.io/Auto-GPT/testing/). -## Testing +[write tests using `pytest`]: https://realpython.com/pytest-python-testing/ diff --git a/README.md b/README.md index 9cfcda75d062..c915418b3f82 100644 --- a/README.md +++ b/README.md @@ -89,28 +89,20 @@ Your support is greatly appreciated. 
Development of this free, open-source proje - πŸ—ƒοΈ File storage and summarization with GPT-3.5 - πŸ”Œ Extensibility with Plugins -## πŸ“‹ Requirements - -Choose an environment to run Auto-GPT in (pick one): - - - [Docker](https://docs.docker.com/get-docker/) (*recommended*) - - Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows)) - - [VSCode + devcontainer](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) - ## Quickstart -1. Set up your OpenAI [API Keys](https://platform.openai.com/account/api-keys) +1. Get an OpenAI [API Key](https://platform.openai.com/account/api-keys) 2. Download the [latest release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest) -3. Follow the [installation instructions][docs/install] +3. Follow the [installation instructions][docs/setup] 4. Configure any additional features you want, or install some [plugins][docs/plugins] 5. [Run][docs/usage] the app -Please see the [documentation][docs] linked below for full setup instructions and configuration options. +Please see the [documentation][docs] for full setup instructions and configuration options. [docs]: https://significant-gravitas.github.io/Auto-GPT/ ## πŸ“– Documentation -* [βš™οΈ Installation][docs/install] +* [βš™οΈ Setup][docs/setup] * [πŸ’» Usage][docs/usage] * [πŸ”Œ Plugins][docs/plugins] * Configuration @@ -119,7 +111,7 @@ Please see the [documentation][docs] linked below for full setup instructions an * [πŸ—£οΈ Voice (TTS)](https://significant-gravitas.github.io/Auto-GPT/configuration/voice/) * [πŸ–ΌοΈ Image Generation](https://significant-gravitas.github.io/Auto-GPT/configuration/imagegen/) -[docs/install]: https://significant-gravitas.github.io/Auto-GPT/installation/ +[docs/setup]: https://significant-gravitas.github.io/Auto-GPT/setup/ [docs/usage]: https://significant-gravitas.github.io/Auto-GPT/usage/ [docs/plugins]: https://significant-gravitas.github.io/Auto-GPT/plugins/ diff --git a/docs/LICENSE b/docs/LICENSE deleted file mode 120000 index ea5b60640b01..000000000000 --- a/docs/LICENSE +++ /dev/null @@ -1 +0,0 @@ -../LICENSE \ No newline at end of file diff --git a/docs/configuration/imagegen.md b/docs/configuration/imagegen.md index cf9d55fdced7..38fdcebb28bf 100644 --- a/docs/configuration/imagegen.md +++ b/docs/configuration/imagegen.md @@ -1,14 +1,58 @@ -## πŸ–Ό Image Generation +# πŸ–Ό Image Generation configuration -By default, Auto-GPT uses DALL-e for image generation. To use Stable Diffusion, a [Hugging Face API Token](https://huggingface.co/settings/tokens) is required. +| Config variable | Values | | +| ---------------- | ------------------------------- | -------------------- | +| `IMAGE_PROVIDER` | `dalle` `huggingface` `sdwebui` | **default: `dalle`** | -Once you have a token, set these variables in your `.env`: +## DALL-e +In `.env`, make sure `IMAGE_PROVIDER` is commented (or set to `dalle`): +``` ini +# IMAGE_PROVIDER=dalle # this is the default +``` + +Further optional configuration: + +| Config variable | Values | | +| ---------------- | ------------------ | -------------- | +| `IMAGE_SIZE` | `256` `512` `1024` | default: `256` | + +## Hugging Face + +To use text-to-image models from Hugging Face, you need a Hugging Face API token. 
+Link to the appropriate settings page: [Hugging Face > Settings > Tokens](https://huggingface.co/settings/tokens)
+
+Once you have an API token, uncomment and adjust these variables in your `.env`:
 ``` ini
 IMAGE_PROVIDER=huggingface
-HUGGINGFACE_API_TOKEN=YOUR_HUGGINGFACE_API_TOKEN
+HUGGINGFACE_API_TOKEN=your-huggingface-api-token
 ```
+
+Further optional configuration:
+
+| Config variable           | Values                 |                                          |
+| ------------------------- | ---------------------- | ---------------------------------------- |
+| `HUGGINGFACE_IMAGE_MODEL` | see [available models] | default: `CompVis/stable-diffusion-v1-4` |
+
+[available models]: https://huggingface.co/models?pipeline_tag=text-to-image
+
+## Stable Diffusion WebUI
+
+It is possible to use your own self-hosted Stable Diffusion WebUI with Auto-GPT:
+``` ini
+IMAGE_PROVIDER=sdwebui
+```
+
+!!! note
+    Make sure you are running WebUI with `--api` enabled.
+
+Further optional configuration:
+
+| Config variable | Values                  |                                  |
+| --------------- | ----------------------- | -------------------------------- |
+| `SD_WEBUI_URL`  | URL to your WebUI       | default: `http://127.0.0.1:7860` |
+| `SD_WEBUI_AUTH` | `{username}:{password}` | *Note: do not copy the braces!*  |
+
 ## Selenium
 ``` shell
 sudo Xvfb :10 -ac -screen 0 1024x768x24 &
 DISPLAY=:10 <YOUR_CLIENT>
 ```
diff --git a/docs/configuration/memory.md b/docs/configuration/memory.md
index 6fc80a75a617..7d7075986f07 100644
--- a/docs/configuration/memory.md
+++ b/docs/configuration/memory.md
@@ -1,10 +1,12 @@
 ## Setting Your Cache Type
 
-By default, Auto-GPT is going to use LocalCache instead of redis or Pinecone.
+By default, an Auto-GPT instance set up with Docker Compose will use Redis as its memory backend.
+Otherwise, the default is LocalCache (which stores memory in a JSON file).
 
-To switch to either, change the `MEMORY_BACKEND` env variable to the value that you want:
+To switch to a different backend, change the `MEMORY_BACKEND` in `.env`
+to the value that you want:
 
-* `local` (default) uses a local JSON cache file
+* `local` uses a local JSON cache file
 * `pinecone` uses the Pinecone.io account you configured in your ENV settings
 * `redis` will use the redis cache that you configured
 * `milvus` will use the milvus cache that you configured
@@ -20,32 +22,39 @@ Links to memory backends
 - [Weaviate](https://weaviate.io)
 
 ### Redis Setup
-> _**CAUTION**_ \
-This is not intended to be publicly accessible and lacks security measures. Therefore, avoid exposing Redis to the internet without a password or at all
-1. Install docker (or Docker Desktop on Windows).
-2. Launch Redis container.
-``` shell
-    docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
-```
-> See https://hub.docker.com/r/redis/redis-stack-server for setting a password and additional configuration.
+!!! important
+    If you have set up Auto-GPT using Docker Compose, then Redis is included and no
+    further setup is needed.
 
-3. Set the following settings in `.env`.
-    > Replace **PASSWORD** in angled brackets (<>)
-
-``` shell
-MEMORY_BACKEND=redis
-REDIS_HOST=localhost
-REDIS_PORT=6379
-REDIS_PASSWORD=<PASSWORD>
-```
+!!! caution
+    This setup is not intended to be publicly accessible and lacks security measures.
+    Avoid exposing Redis to the internet without a password or at all!
 
-    You can optionally set `WIPE_REDIS_ON_START=False` to persist memory stored in Redis.
+1. Launch Redis container
-You can specify the memory index for redis using the following:
-``` shell
-MEMORY_INDEX=<WHATEVER>
-```
+
+    :::shell
+    docker run -d --name redis-stack-server -p 6379:6379 redis/redis-stack-server:latest
+
+2. Set the following settings in `.env`
+
+    :::ini
+    MEMORY_BACKEND=redis
+    REDIS_HOST=localhost
+    REDIS_PORT=6379
+    REDIS_PASSWORD=<PASSWORD>
+
+    Replace `<PASSWORD>` by your password, omitting the angled brackets (<>).
+
+    Optional configuration:
+
+    - `WIPE_REDIS_ON_START=False` to persist memory stored in Redis between runs.
+    - `MEMORY_INDEX=<name>` to specify a name for the memory index in Redis.
+        The default is `auto-gpt`.
+
+!!! info
+    See [redis-stack-server](https://hub.docker.com/r/redis/redis-stack-server) for
+    setting a password and additional configuration.
 
 ### 🌲 Pinecone API Key Setup
 
@@ -56,65 +65,57 @@ Pinecone lets you store vast amounts of vector-based memory, allowing the agent
 3. Find your API key and region under the default project in the left sidebar.
 
 In the `.env` file set:
+
 - `PINECONE_API_KEY`
-- `PINECONE_ENV` (example: _"us-east4-gcp"_)
+- `PINECONE_ENV` (example: `us-east4-gcp`)
 - `MEMORY_BACKEND=pinecone`
 
-Alternatively, you can set them from the command line (advanced):
-
-For Windows Users:
-
-``` shell
-setx PINECONE_API_KEY "<YOUR_PINECONE_API_KEY>"
-setx PINECONE_ENV "<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
-setx MEMORY_BACKEND "pinecone"
-```
-
-For macOS and Linux users:
-
-``` shell
-export PINECONE_API_KEY="<YOUR_PINECONE_API_KEY>"
-export PINECONE_ENV="<YOUR_PINECONE_REGION>" # e.g: "us-east4-gcp"
-export MEMORY_BACKEND="pinecone"
-```
-
 ### Milvus Setup
 
-[Milvus](https://milvus.io/) is an open-source, highly scalable vector database to store huge amounts of vector-based memory and provide fast relevant search. And it can be quickly deployed by docker locally or as a cloud service provided by [Zilliz Cloud](https://zilliz.com/).
+[Milvus](https://milvus.io/) is an open-source, highly scalable vector database to store
+huge amounts of vector-based memory and provide fast relevant search. It can be quickly
+deployed with docker, or as a cloud service provided by [Zilliz Cloud](https://zilliz.com/).
 
-1. Deploy your Milvus service, either locally using docker or with a managed Zilliz Cloud database.
+1. Deploy your Milvus service, either locally using docker or with a managed Zilliz Cloud database:
    - [Install and deploy Milvus locally](https://milvus.io/docs/install_standalone-operator.md)
-
-    <details>
-    <summary>Set up a managed Zilliz Cloud database (click to expand)</summary>
-
-    1. Go to [Zilliz Cloud](https://zilliz.com/) and sign up if you don't already have account.
-    2. In the *Databases* tab, create a new database.
-        - Remember your username and password
-        - Wait until the database status is changed to RUNNING.
-    3. In the *Database detail* tab of the database you have created, the public cloud endpoint, such as:
-        `https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443`.
-    </details>
+   - Set up a managed Zilliz Cloud database
+       1. Go to [Zilliz Cloud](https://zilliz.com/) and sign up if you don't already have an account.
+       2. In the *Databases* tab, create a new database.
+           - Remember your username and password
+           - Wait until the database status is changed to RUNNING.
+       3. In the *Database detail* tab of the database you have created, find the public cloud endpoint, such as:
+          `https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443`.
 
 2. Run `pip3 install pymilvus` to install the required client library.
-    Make sure your PyMilvus version and Milvus version are [compatible](https://github.com/milvus-io/pymilvus#compatibility) to avoid issues.
+    Make sure your PyMilvus version and Milvus version are [compatible](https://github.com/milvus-io/pymilvus#compatibility)
+    to avoid issues.
     See also the [PyMilvus installation instructions](https://github.com/milvus-io/pymilvus#installation).
 
-3. Update `.env`
+3. Update `.env`:
     - `MEMORY_BACKEND=milvus`
     - One of:
-        - `MILVUS_ADDR=host:ip` (for local instance)
-        - `MILVUS_ADDR=https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443` (for Zilliz Cloud)
+        - `MILVUS_ADDR=host:port` (for a local instance)
+        - `MILVUS_ADDR=https://xxx-xxxx.xxxx.xxxx.zillizcloud.com:443` (for Zilliz Cloud)
+
+    The following settings are **optional**:
 
-    *The following settings are **optional**:*
-    - Set `MILVUS_USERNAME='username-of-your-milvus-instance'`
-    - Set `MILVUS_PASSWORD='password-of-your-milvus-instance'`
-    - Set `MILVUS_SECURE=True` to use a secure connection. Only use if your Milvus instance has TLS enabled.
-    Setting `MILVUS_ADDR` to a `https://` URL will override this setting.
-    - Set `MILVUS_COLLECTION` if you want to change the collection name to use in Milvus. Defaults to `autogpt`.
+    - `MILVUS_USERNAME='username-of-your-milvus-instance'`
+    - `MILVUS_PASSWORD='password-of-your-milvus-instance'`
+    - `MILVUS_SECURE=True` to use a secure connection.
+        Only use if your Milvus instance has TLS enabled.
+        *Note: setting `MILVUS_ADDR` to an `https://` URL will override this setting.*
+    - `MILVUS_COLLECTION` to change the collection name to use in Milvus.
+        Defaults to `autogpt`.
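+
+For illustration, a minimal `.env` for a local Milvus instance could look like the
+following. (The address assumes Milvus's default standalone port, `19530`; adjust it
+to match your deployment.)
+
+``` ini
+MEMORY_BACKEND=milvus
+MILVUS_ADDR=localhost:19530
+MILVUS_COLLECTION=autogpt
+```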
 
 ### Weaviate Setup
 
-[Weaviate](https://weaviate.io/) is an open-source vector database. It allows to store data objects and vector embeddings from ML-models and scales seamlessly to billion of data objects. [An instance of Weaviate can be created locally (using Docker), on Kubernetes or using Weaviate Cloud Services](https://weaviate.io/developers/weaviate/quickstart).
-Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded) is supported which allows the Auto-GPT process itself to start a Weaviate instance. To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip install "weaviate-client>=3.15.4"`.
+[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store
+data objects and vector embeddings from ML models, and scales seamlessly to billions of
+data objects. To set up a Weaviate database, check out their [Quickstart Tutorial](https://weaviate.io/developers/weaviate/quickstart).
+
+Although still experimental, [Embedded Weaviate](https://weaviate.io/developers/weaviate/installation/embedded)
+is supported, which allows the Auto-GPT process itself to start a Weaviate instance.
+To enable it, set `USE_WEAVIATE_EMBEDDED` to `True` and make sure you `pip install "weaviate-client>=3.15.4"`.
 
 #### Install the Weaviate client
 
@@ -128,7 +129,7 @@ $ pip install weaviate-client
 
 In your `.env` file set the following:
 
-``` shell
+``` ini
 MEMORY_BACKEND=weaviate
 WEAVIATE_HOST="127.0.0.1" # the IP or domain of the running Weaviate instance
 WEAVIATE_PORT="8080"
@@ -140,7 +141,7 @@ WEAVIATE_EMBEDDED_PATH="/home/me/.local/share/weaviate" # this is optional and i
 USE_WEAVIATE_EMBEDDED=False # set to True to run Embedded Weaviate
 MEMORY_INDEX="Autogpt" # name of the index to create for the application
 ```
-
+
 ## View Memory Usage
 
 View memory usage by using the `--debug` flag :)
 
@@ -150,7 +151,7 @@ View memory usage by using the `--debug` flag :)
 Memory pre-seeding allows you to ingest files into memory and pre-seed it before running Auto-GPT.
 
 ``` shell
-# python data_ingestion.py -h
+$ python data_ingestion.py -h
 usage: data_ingestion.py [-h] (--file FILE | --dir DIR) [--init] [--overlap OVERLAP] [--max_length MAX_LENGTH]
 
 Ingest a file or a directory with multiple files into memory. Make sure to set your .env before running this script.
@@ -172,15 +173,32 @@ Note that you can also use the `--file` argument to ingest a single file into me
 The DIR path is relative to the auto_gpt_workspace directory, so `python data_ingestion.py --dir . --init` will ingest everything in `auto_gpt_workspace` directory.
 
-You can adjust the `max_length` and `overlap` parameters to fine-tune the way the documents are presented to the AI when it "recall" that memory:
-- Adjusting the overlap value allows the AI to access more contextual information from each chunk when recalling information, but will result in more chunks being created and therefore increase memory backend usage and OpenAI API requests.
-- Reducing the `max_length` value will create more chunks, which can save prompt tokens by allowing for more message history in the context, but will also increase the number of chunks.
-- Increasing the `max_length` value will provide the AI with more contextual information from each chunk, reducing the number of chunks created and saving on OpenAI API requests. However, this may also use more prompt tokens and decrease the overall context available to the AI.
-
-Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data into its memory. Chunks of data are split and added to memory, allowing the AI to access them quickly and generate more accurate responses. It's useful for large datasets or when specific information needs to be accessed quickly. Examples include ingesting API or GitHub documentation before running Auto-GPT.
-
-⚠️ If you use Redis as your memory, make sure to run Auto-GPT with the `WIPE_REDIS_ON_START=False` in your `.env` file.
-
-⚠️For other memory backends, we currently forcefully wipe the memory when starting Auto-GPT. To ingest data with those memory backends, you can call the `data_ingestion.py` script anytime during an Auto-GPT run.
-
-Memories will be available to the AI immediately as they are ingested, even if ingested while Auto-GPT is running.
+You can adjust the `max_length` and `overlap` parameters to fine-tune the way the
+documents are presented to the AI when it "recalls" that memory:
+
+- Adjusting the overlap value allows the AI to access more contextual information
+  from each chunk when recalling information, but will result in more chunks being
+  created and therefore increase memory backend usage and OpenAI API requests.
+- Reducing the `max_length` value will create more chunks, which can save prompt
+  tokens by allowing for more message history in the context, but will also
+  increase the number of chunks.
+- Increasing the `max_length` value will provide the AI with more contextual
+  information from each chunk, reducing the number of chunks created and saving on
+  OpenAI API requests. However, this may also use more prompt tokens and decrease
+  the overall context available to the AI.
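+
+For example, to ingest a folder of documentation with larger chunks and extra overlap,
+you could run the following. (The folder name and parameter values are illustrative;
+the flags come from the usage text above.)
+
+``` shell
+python data_ingestion.py --dir DataFolder --init --overlap 100 --max_length 2000
+```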
+
+Memory pre-seeding is a technique for improving AI accuracy by ingesting relevant data
+into its memory. Chunks of data are split and added to memory, allowing the AI to access
+them quickly and generate more accurate responses. It's useful for large datasets or when
+specific information needs to be accessed quickly. Examples include ingesting API or
+GitHub documentation before running Auto-GPT.
+
+!!! attention
+    If you use Redis for memory, make sure to run Auto-GPT with `WIPE_REDIS_ON_START=False`
+    in your `.env` file.
+
+    For other memory backends, we currently forcefully wipe the memory when starting
+    Auto-GPT. To ingest data with those memory backends, you can call the
+    `data_ingestion.py` script anytime during an Auto-GPT run.
+
+Memories will be available to the AI immediately as they are ingested, even if ingested
+while Auto-GPT is running.
diff --git a/docs/configuration/search.md b/docs/configuration/search.md
index 87e8e3abcb1f..4640d63c6e6b 100644
--- a/docs/configuration/search.md
+++ b/docs/configuration/search.md
@@ -1,49 +1,37 @@
 ## πŸ” Google API Keys Configuration
 
-Note:
-This section is optional. use the official google api if you are having issues with error 429 when running a google search.
-To use the `google_official_search` command, you need to set up your Google API keys in your environment variables.
+!!! note
+    This section is optional. Use the official Google API if search attempts return
+    error 429. To use the `google_official_search` command, you need to set up your
+    Google API key in your environment variables.
 
 Create your project:
 
 1. Go to the [Google Cloud Console](https://console.cloud.google.com/).
-2. If you don't already have an account, create one and log in.
-3. Create a new project by clicking on the "Select a Project" dropdown at the top of the page and clicking "New Project".
-4. Give it a name and click "Create".
-Set up a custom search API and add to your .env file:
-5. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard).
-6. Click "Enable APIs and Services".
-7. Search for "Custom Search API" and click on it.
-8. Click "Enable".
-9. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page.
-10. Click "Create Credentials".
-11. Choose "API Key".
-12. Copy the API key.
-13. Set it as an environment variable named `GOOGLE_API_KEY` on your machine (see how to set up environment variables below).
-14. [Enable](https://console.developers.google.com/apis/api/customsearch.googleapis.com) the Custom Search API on your project. (Might need to wait few minutes to propagate)
-Set up a custom search engine and add to your .env file:
-15. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page.
-16. Click "Add".
-17. Set up your search engine by following the prompts. You can choose to search the entire web or specific sites.
-18. Once you've created your search engine, click on "Control Panel".
-19. Click "Basics".
-20. Copy the "Search engine ID".
-21. Set it as an environment variable named `CUSTOM_SEARCH_ENGINE_ID` on your machine (see how to set up environment variables below).
+2. If you don't already have an account, create one and log in
+3. Create a new project by clicking on the *Select a Project* dropdown at the top of the
+   page and clicking *New Project*
+4. Give it a name and click *Create*
+5. Set up a custom search API and add it to your `.env` file:
+    1. Go to the [APIs & Services Dashboard](https://console.cloud.google.com/apis/dashboard)
+    2. Click *Enable APIs and Services*
+    3. Search for *Custom Search API* and click on it
+    4. Click *Enable*
+    5. Go to the [Credentials](https://console.cloud.google.com/apis/credentials) page
+    6. Click *Create Credentials*
+    7. Choose *API Key*
+    8. Copy the API key
+    9. Set it as the `GOOGLE_API_KEY` in your `.env` file
+6. [Enable](https://console.developers.google.com/apis/api/customsearch.googleapis.com)
+   the Custom Search API on your project. (Might need to wait a few minutes to propagate.)
+7. Set up a custom search engine and add it to your `.env` file:
+    1. Go to the [Custom Search Engine](https://cse.google.com/cse/all) page
+    2. Click *Add*
+    3. Set up your search engine by following the prompts.
+       You can choose to search the entire web or specific sites
+    4. Once you've created your search engine, click on *Control Panel*
+    5. Click *Basics*
+    6. Copy the *Search engine ID*
+    7. Set it as the `CUSTOM_SEARCH_ENGINE_ID` in your `.env` file
 
 _Remember that your free daily custom search quota allows only up to 100 searches. To increase this limit, you need to assign a billing account to the project to profit from up to 10K daily searches._
-
-### Setting up environment variables
-
-For Windows Users:
-
-```
-setx GOOGLE_API_KEY "YOUR_GOOGLE_API_KEY"
-setx CUSTOM_SEARCH_ENGINE_ID "YOUR_CUSTOM_SEARCH_ENGINE_ID"
-```
-
-For macOS and Linux users:
-
-```
-export GOOGLE_API_KEY="YOUR_GOOGLE_API_KEY"
-export CUSTOM_SEARCH_ENGINE_ID="YOUR_CUSTOM_SEARCH_ENGINE_ID"
-```
diff --git a/docs/configuration/voice.md b/docs/configuration/voice.md
index 8c9ab854a154..fcd487fd72ee 100644
--- a/docs/configuration/voice.md
+++ b/docs/configuration/voice.md
@@ -1,4 +1,4 @@
-## Voice
+# Text to Speech
 
 Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
 
@@ -6,24 +6,32 @@ Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
 python -m autogpt --speak
 ```
 
-Eleven Labs provides voice technologies such as voice design, speech synthesis, and premade voices that Auto-GPT can use for speech.
+Eleven Labs provides voice technologies such as voice design, speech synthesis, and
+premade voices that Auto-GPT can use for speech.
 
-1. Go to [Eleven Labs](https://beta.elevenlabs.io/) and make an account if you don't already have one.
+1. Go to [ElevenLabs](https://beta.elevenlabs.io/) and make an account if you don't
+   already have one.
 2. Choose and setup the `Starter` plan.
 3. Click the top right icon and find "Profile" to locate your API Key.
 
 In the `.env` file set:
+
 - `ELEVENLABS_API_KEY`
 - `ELEVENLABS_VOICE_1_ID` (example: _"premade/Adam"_)
 
-### List of IDs with names from eleven labs. You can use the name or ID:
-
-- Rachel : 21m00Tcm4TlvDq8ikWAM
-- Domi : AZnzlk1XvdvUeBnXmlld
-- Bella : EXAVITQu4vr4xnSDxMaL
-- Antoni : ErXwobaYiN019PkySvjV
-- Elli : MF3mGyEYCl7XYWbV9V6O
-- Josh : TxGEqnHWrfWFTfGW9XjX
-- Arnold : VR6AewLTigWG4xSOukaG
-- Adam : pNInz6obpgDQGcFmaJgB
-- Sam : yoZ06aMxZJJ28mfd3POQ
+### List of available voices
+
+!!! 
note + You can use either the name or the voice ID to configure a voice + +| Name | Voice ID | +| ------ | -------- | +| Rachel | `21m00Tcm4TlvDq8ikWAM` | +| Domi | `AZnzlk1XvdvUeBnXmlld` | +| Bella | `EXAVITQu4vr4xnSDxMaL` | +| Antoni | `ErXwobaYiN019PkySvjV` | +| Elli | `MF3mGyEYCl7XYWbV9V6O` | +| Josh | `TxGEqnHWrfWFTfGW9XjX` | +| Arnold | `VR6AewLTigWG4xSOukaG` | +| Adam | `pNInz6obpgDQGcFmaJgB` | +| Sam | `yoZ06aMxZJJ28mfd3POQ` | diff --git a/docs/installation.md b/docs/installation.md deleted file mode 100644 index 034814d6a8b4..000000000000 --- a/docs/installation.md +++ /dev/null @@ -1,115 +0,0 @@ -# πŸ’Ύ Installation - -## ⚠️ OpenAI API Keys Configuration - -Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys). - -To use OpenAI API key for Auto-GPT, you **NEED** to have billing set up (AKA paid account). - -You can set up paid account at [https://platform.openai.com/account/billing/overview](https://platform.openai.com/account/billing/overview). - -Important: It's highly recommended that you track your usage on [the Usage page](https://platform.openai.com/account/usage). -You can also set limits on how much you spend on [the Usage limits page](https://platform.openai.com/account/billing/limits). - -![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png) - -**PLEASE ENSURE YOU HAVE DONE THIS STEP BEFORE PROCEEDING. OTHERWISE, NOTHING WILL WORK!** - -## General setup - -1. Make sure you have one of the environments listed under [**requirements**](https://github.com/Significant-Gravitas/Auto-GPT#-requirements) set up. - - _To execute the following commands, open a CMD, Bash, or Powershell window by navigating to a folder on your computer and typing `CMD` in the folder path at the top, then press enter. Make sure you have [Git](https://git-scm.com/downloads) installed for your O/S._ - -2. Clone the repository using Git, or download the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest) (`Source code (zip)`, at the bottom of the page). - -``` shell - git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git -``` - -3. Navigate to the directory where you downloaded the repository. - -``` shell - cd Auto-GPT -``` - -5. Configure Auto-GPT: - 1. Find the file named `.env.template` in the main `Auto-GPT` folder. This file may be hidden by default in some operating systems due to the dot prefix. To reveal hidden files, follow the instructions for your specific operating system (e.g., in Windows, click on the "View" tab in File Explorer and check the "Hidden items" box; in macOS, press Cmd + Shift + .). - 2. Create a copy of this file and call it `.env` by removing the `template` extension. The easiest way is to do this in a command prompt/terminal window `cp .env.template .env`. - 3. Open the `.env` file in a text editor. - 4. Find the line that says `OPENAI_API_KEY=`. - 5. After the `"="`, enter your unique OpenAI API Key (without any quotes or spaces). - 6. Enter any other API keys or Tokens for services you would like to use. To activate and adjust a setting, remove the `# ` prefix. - 7. Save and close the `.env` file. - - You have now configured Auto-GPT. - - Notes: - - - See [OpenAI API Keys Configuration](#openai-api-keys-configuration) to get your OpenAI API key. - - Get your ElevenLabs API key from: [ElevenLabs](https://elevenlabs.io). 
You can view your xi-api-key using the "Profile" tab on the website. - - If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and then follow these steps: - - Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section: - - `fast_llm_model_deployment_id` - your gpt-3.5-turbo or gpt-4 deployment ID - - `smart_llm_model_deployment_id` - your gpt-4 deployment ID - - `embedding_model_deployment_id` - your text-embedding-ada-002 v2 deployment ID - -``` shell -# Please specify all of these values as double-quoted strings -# Replace string in angled brackets (<>) to your own ID -azure_model_map: - fast_llm_model_deployment_id: "" - ... -``` -Details can be found here: [https://pypi.org/project/openai/](https://pypi.org/project/openai/) in the `Microsoft Azure Endpoints` section and here: [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line) for the embedding model. -If you're on Windows you may need to install [msvc-170](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170). - -4. Follow the further instructions for running Auto-GPT with [Docker](#run-with-docker) (*recommended*), or [Docker-less](#run-docker-less) - -### Run with Docker - -Easiest is to run with `docker-compose`: -``` shell -docker-compose build auto-gpt -docker-compose run --rm auto-gpt -``` -By default, this will also start and attach a Redis memory backend. -For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup). - -You can also build and run it with "vanilla" docker commands: -``` shell -docker build -t auto-gpt . -docker run -it --env-file=.env -v $PWD:/app auto-gpt -``` - -You can pass extra arguments, for instance, running with `--gpt3only` and `--continuous` mode: -``` shell -docker-compose run --rm auto-gpt --gpt3only --continuous -``` -``` shell -docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous -``` - -Alternatively, you can pull the latest release directly from [Docker Hub](https://hub.docker.com/r/significantgravitas/auto-gpt) and run that: -``` shell -docker run -it --env OPENAI_API_KEY='your-key-here' --rm significantgravitas/auto-gpt -``` - -Or with `ai_settings.yml` presets mounted: -``` shell -docker run -it --env OPENAI_API_KEY='your-key-here' -v $PWD/ai_settings.yaml:/app/ai_settings.yaml --rm significantgravitas/auto-gpt -``` - - -### Run without Docker - -Simply run `./run.sh` (Linux/macOS) or `.\run.bat` (Windows) in your terminal. This will install any necessary Python packages and launch Auto-GPT. - -### Run with Dev Container - -1. Install the [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension in VS Code. - -2. Open command palette and type in Dev Containers: Open Folder in Container. - -3. Run `./run.sh`. 
-
diff --git a/docs/setup.md b/docs/setup.md
new file mode 100644
index 000000000000..a5d0558c3ddf
--- /dev/null
+++ b/docs/setup.md
@@ -0,0 +1,210 @@
+# Setting up Auto-GPT
+
+## πŸ“‹ Requirements
+
+Choose an environment to run Auto-GPT in (pick one):
+
+  - [Docker](https://docs.docker.com/get-docker/) (*recommended*)
+  - Python 3.10 or later (instructions: [for Windows](https://www.tutorialspoint.com/how-to-install-python-in-windows))
+  - [VSCode + devcontainer](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers)
+
+
+## πŸ—οΈ Getting an API key
+
+Get your OpenAI API key from: [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).
+
+!!! attention
+    To use the OpenAI API with Auto-GPT, we strongly recommend **setting up billing**
+    (AKA paid account). Free accounts are [limited][openai/api limits] to 3 API calls per
+    minute, which can cause the application to crash.
+
+    You can set up a paid account at [Manage account > Billing > Overview](https://platform.openai.com/account/billing/overview).
+
+[openai/api limits]: https://platform.openai.com/docs/guides/rate-limits/overview#:~:text=Free%20trial%20users,RPM%0A40%2C000%20TPM
+
+!!! important
+    It's highly recommended that you keep track of your API costs on [the Usage page](https://platform.openai.com/account/usage).
+    You can also set limits on how much you spend on [the Usage limits page](https://platform.openai.com/account/billing/limits).
+
+![For OpenAI API key to work, set up paid account at OpenAI API > Billing](./imgs/openai-api-key-billing-paid-account.png)
+
+
+## Setting up Auto-GPT
+
+### Set up with Docker
+
+1. Make sure you have Docker installed; see [requirements](#requirements)
+2. Pull the latest image from [Docker Hub]
+
+    :::shell
+    docker pull significantgravitas/auto-gpt
+
+3. Create a folder for Auto-GPT
+4. In the folder, create a file called `docker-compose.yml` with the following contents:
+
+    :::yaml
+    version: "3.9"
+    services:
+      auto-gpt:
+        image: significantgravitas/auto-gpt
+        depends_on:
+          - redis
+        env_file:
+          - .env
+        environment:
+          MEMORY_BACKEND: ${MEMORY_BACKEND:-redis}
+          REDIS_HOST: ${REDIS_HOST:-redis}
+        volumes:
+          - ./:/app
+        profiles: ["exclude-from-up"]
+      redis:
+        image: "redis/redis-stack-server:latest"
+
+5. Create the necessary [configuration](#configuration) files. If needed, you can find
+    templates in the [repository].
+6. Continue to [Run with Docker](#run-with-docker)
+
+[Docker Hub]: https://hub.docker.com/r/significantgravitas/auto-gpt
+[repository]: https://github.com/Significant-Gravitas/Auto-GPT
+
+
+### Set up with Git
+
+!!! important
+    Make sure you have [Git](https://git-scm.com/downloads) installed for your OS.
+
+!!! info
+    To execute the given commands, open a CMD, Bash, or PowerShell window.
+    On Windows: press ++win+x++ and pick *Terminal*, or ++win+r++ and enter `cmd`
+
+1. Clone the repository
+
+    :::shell
+    git clone -b stable https://github.com/Significant-Gravitas/Auto-GPT.git
+
+2. Navigate to the directory where you downloaded the repository
+
+    :::shell
+    cd Auto-GPT
+
+
+### Set up without Git/Docker
+
+!!! warning
+    We recommend using Git or Docker to make updating easier.
+
+1. Download `Source code (zip)` from the [latest stable release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest)
+2. Extract the zip file into a folder
+
+
+### Configuration
+
+1. Find the file named `.env.template` in the main `Auto-GPT` folder. This file may
+    be hidden by default in some operating systems due to the dot prefix. To reveal
+    hidden files, follow the instructions for your specific operating system:
+    [Windows][show hidden files/Windows], [macOS][show hidden files/macOS].
+2. Create a copy of `.env.template` and call it `.env`;
+    if you're already in a command prompt/terminal window: `cp .env.template .env`.
+3. Open the `.env` file in a text editor.
+4. Find the line that says `OPENAI_API_KEY=`.
+5. After the `=`, enter your unique OpenAI API Key *without any quotes or spaces*.
+6. Enter any other API keys or tokens for services you would like to use.
+
+    !!! note
+        To activate and adjust a setting, remove the `# ` prefix.
+
+7. Save and close the `.env` file.
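+
+As an illustration, the relevant lines of a configured `.env` could look like the
+following. (The key shown is a made-up placeholder; use your own. The second line
+shows an optional service left disabled.)
+
+``` ini
+OPENAI_API_KEY=sk-0123456789abcdef0123456789abcdef
+# ELEVENLABS_API_KEY=your-elevenlabs-api-key
+```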
+
+!!! info
+    Get your ElevenLabs API key from: [ElevenLabs](https://elevenlabs.io). You can view your xi-api-key using the "Profile" tab on the website.
+
+!!! info "Using a GPT Azure-instance"
+    If you want to use GPT on an Azure instance, set `USE_AZURE` to `True` and
+    make an Azure configuration file:
+
+    - Rename `azure.yaml.template` to `azure.yaml` and provide the relevant `azure_api_base`, `azure_api_version` and all the deployment IDs for the relevant models in the `azure_model_map` section:
+        - `fast_llm_model_deployment_id`: your gpt-3.5-turbo or gpt-4 deployment ID
+        - `smart_llm_model_deployment_id`: your gpt-4 deployment ID
+        - `embedding_model_deployment_id`: your text-embedding-ada-002 v2 deployment ID
+
+    Example:
+
+        :::yaml
+        # Please specify all of these values as double-quoted strings
+        # Replace each string in angled brackets (<>) with your own deployment ID
+        azure_model_map:
+            fast_llm_model_deployment_id: "<your-fast-llm-model-deployment-id>"
+            ...
+
+    Details can be found in the [openai-python docs], and in the [Azure OpenAI docs] for the embedding model.
+    If you're on Windows you may need to install an [MSVC library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
+
+[show hidden files/Windows]: https://support.microsoft.com/en-us/windows/view-hidden-files-and-folders-in-windows-97fbc472-c603-9d90-91d0-1166d1d9f4b5
+[show hidden files/macOS]: https://www.pcmag.com/how-to/how-to-access-your-macs-hidden-files
+[openai-python docs]: https://github.com/openai/openai-python#microsoft-azure-endpoints
+[Azure OpenAI docs]: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/tutorials/embeddings?tabs=command-line
+
+
+## Running Auto-GPT
+
+### Run with Docker
+
+The easiest way is to use `docker-compose`. Run the commands below in your Auto-GPT folder.
+
+1. Build the image. If you have pulled the image from Docker Hub, skip this step.
+
+    :::shell
+    docker-compose build auto-gpt
+
+2. Run Auto-GPT
+
+    :::shell
+    docker-compose run --rm auto-gpt
+
+    By default, this will also start and attach a Redis memory backend. If you do not
+    want this, comment or remove the `depends_on: - redis` and `redis:` sections from
+    `docker-compose.yml`.
+
+    For related settings, see [Memory > Redis setup](./configuration/memory.md#redis-setup).
+
+You can pass extra arguments, e.g. running with `--gpt3only` and `--continuous`:
+``` shell
+docker-compose run --rm auto-gpt --gpt3only --continuous
+```
+
+If you dare, you can also build and run it with "vanilla" docker commands:
+``` shell
+docker build -t auto-gpt .
+docker run -it --env-file=.env -v $PWD:/app auto-gpt
+docker run -it --env-file=.env -v $PWD:/app --rm auto-gpt --gpt3only --continuous
+```
+
+[docker-compose file]: https://github.com/Significant-Gravitas/Auto-GPT/blob/stable/docker-compose.yml
+
+
+### Run with Dev Container
+
+1. Install the [Remote - Containers](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers) extension in VS Code.
+
+2. Open command palette with ++f1++ and type `Dev Containers: Open Folder in Container`.
+
+3. Run `./run.sh`.
+
+
+### Run without Docker
+
+Simply run the startup script in your terminal. This will install any necessary Python
+packages and launch Auto-GPT.
+
+- On Linux/MacOS:
+
+    :::shell
+    ./run.sh
+
+- On Windows:
+
+    :::shell
+    .\run.bat
+
+If this gives errors, make sure you have a compatible Python version installed. See also
+the [requirements](#requirements).
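+
+If you are not sure which Python version is installed, you can check it like this
+(any 3.10+ release should work, per the [requirements](#requirements)):
+
+``` shell
+python --version    # on some systems: python3 --version
+```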
diff --git a/docs/testing.md b/docs/testing.md
index d87c9acd5787..47cbecafcd42 100644
--- a/docs/testing.md
+++ b/docs/testing.md
@@ -1,39 +1,46 @@
-## Run tests
+# Running tests
 
-To run all tests, run the following command:
+To run all tests, use the following command:
+``` shell
+pytest
 ```
-pytest
+
+If `pytest` is not found:
+``` shell
+python -m pytest
 ```
 
-To run just without integration tests:
+### Running specific test suites
 
-```
-pytest --without-integration
-```
+- To run without integration tests:
 
-To run just without slow integration tests:
+    :::shell
+    pytest --without-integration
 
-```
-pytest --without-slow-integration
-```
+- To run without *slow* integration tests:
 
-To run tests and see coverage, run the following command:
+    :::shell
+    pytest --without-slow-integration
 
-```
-pytest --cov=autogpt --without-integration --without-slow-integration
-```
+- To run tests and see coverage:
 
-## Run linter
+    :::shell
+    pytest --cov=autogpt --without-integration --without-slow-integration
 
-This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting. We currently use the following rules: `E303,W293,W291,W292,E305,E231,E302`. See the [flake8 rules](https://www.flake8rules.com/) for more information.
+## Running the linter
 
-To run the linter, run the following command:
+This project uses [flake8](https://flake8.pycqa.org/en/latest/) for linting.
+We currently use the following rules: `E303,W293,W291,W292,E305,E231,E302`.
+See the [flake8 rules](https://www.flake8rules.com/) for more information.
 
-```
-flake8 autogpt/ tests/
+To run the linter:
 
-# Or, if you want to run flake8 with the same configuration as the CI:
+``` shell
+flake8 .
+```
 
-flake8 autogpt/ tests/ --select E303,W293,W291,W292,E305,E231,E302
-```
\ No newline at end of file
+Or:
+``` shell
+python -m flake8 .
+```
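+
+### Selecting individual tests
+
+Plain `pytest` selectors also work alongside the flags above. For example (the module
+path and keyword below are illustrative, not specific to this repository):
+
+``` shell
+# run a single test module
+pytest tests/test_config.py
+
+# run only tests whose names match a keyword expression
+pytest -k "config and not slow"
+```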
diff --git a/docs/usage.md b/docs/usage.md
index 8a1bb63a26b5..80fa7985c509 100644
--- a/docs/usage.md
+++ b/docs/usage.md
@@ -1,67 +1,48 @@
 # Usage
 
-Open a terminal and run the startup script:
-
-- On Linux/MacOS:
-
+## Command Line Arguments
+Running with `--help` lists all the possible command line arguments you can pass:
 ``` shell
-./run.sh
-```
+./run.sh --help # on Linux / macOS
 
-- On Windows:
-
-``` shell
-.\run.bat
+.\run.bat --help # on Windows
 ```
 
-- Using Docker:
+!!! info
+    For use with Docker, replace the script in the examples with
+    `docker-compose run --rm auto-gpt`:
 
-``` shell
-docker-compose run --rm auto-gpt
-```
+    :::shell
+    docker-compose run --rm auto-gpt --help
+    docker-compose run --rm auto-gpt --ai-settings <filename>
 
-Running with `--help` lists all the possible command line arguments you can pass:
+!!! note
+    Replace anything in angled brackets (<>) with a value you want to specify.
 
-``` shell
-./run.sh --help
-
-# or with docker
-docker-compose run --rm auto-gpt --help
-```
+Here are some common arguments you can use when running Auto-GPT:
 
-2. After each response from Auto-GPT, choose from the options to authorize command(s),
-exit the program, or provide feedback to the AI.
-    1. Authorize a single command by entering `y`
-    2. Authorize a series of _N_ continuous commands by entering `y -N`. For example, entering `y -10` would run 10 automatic iterations.
-    3. Enter any free text to give feedback to Auto-GPT.
-    4. Exit the program by entering `n`
+* Run Auto-GPT with a different AI Settings file
 
+    :::shell
+    ./run.sh --ai-settings <filename>
 
-## Command Line Arguments
-Here are some common arguments you can use when running Auto-GPT:
-> Replace anything in angled brackets (<>) to a value you want to specify
+* Specify a memory backend
 
-* View all available command line arguments
-``` shell
-python -m autogpt --help
-```
-* Run Auto-GPT with a different AI Settings file
-``` shell
-python -m autogpt --ai-settings <filename>
-```
-* Specify a memory backend
-``` shell
-python -m autogpt --use-memory <memory-backend>
-```
-> **NOTE**: There are shorthands for some of these flags, for example `-m` for `--use-memory`. Use `python -m autogpt --help` for more information
+    :::shell
+    ./run.sh --use-memory <memory-backend>
 
-### Speak Mode
+!!! note
+    There are shorthands for some of these flags, for example `-m` for `--use-memory`.
+    Use `./run.sh --help` for more information.
+
+### Speak Mode
 
 Enter this command to use TTS _(Text-to-Speech)_ for Auto-GPT
 
-```
-python -m autogpt --speak
+``` shell
+./run.sh --speak
 ```
 
 ### πŸ’€ Continuous Mode ⚠️
@@ -71,34 +52,38 @@ Continuous mode is NOT recommended.
 It is potentially dangerous and may cause your AI to run forever or carry out actions you would not usually authorize.
 Use at your own risk.
 
-1. Run the `autogpt` python module in your terminal:
-
 ``` shell
-python -m autogpt --continuous
+./run.sh --continuous
 ```
-
-2. To exit the program, press Ctrl + C
+To exit the program, press ++ctrl+c++
 
 ### ♻️ Self-Feedback Mode ⚠️
 
 Running Self-Feedback will **INCREASE** token use and thus cost more. This feature enables the agent to provide self-feedback by verifying its own actions and checking if they align with its current goals. If not, it will provide better feedback for the next loop. To enable this feature for the current loop, input `S` into the input field.
 
-### GPT3.5 ONLY Mode
+### GPT-3.5 ONLY Mode
 
-If you don't have access to the GPT4 api, this mode will allow you to use Auto-GPT!
+If you don't have access to GPT-4, this mode allows you to use Auto-GPT!
 
 ``` shell
-python -m autogpt --gpt3only
+./run.sh --gpt3only
 ```
 
-### GPT4 ONLY Mode
+You can achieve the same by setting `SMART_LLM_MODEL` in `.env` to `gpt-3.5-turbo`.
+
+### GPT-4 ONLY Mode
 
-If you do have access to the GPT4 api, this mode will allow you to use Auto-GPT solely using the GPT-4 API for increased intelligence (and cost!)
+If you have access to GPT-4, this mode allows you to use Auto-GPT solely with GPT-4.
+This may give your bot increased intelligence.
 
``` shell -python -m autogpt --gpt4only +./run.sh --gpt4only ``` +!!! warning + Since GPT-4 is more expensive to use, running Auto-GPT in GPT-4-only mode will + increase your API costs. + ## Logs Activity and error logs are located in the `./output/logs` @@ -106,5 +91,5 @@ Activity and error logs are located in the `./output/logs` To print out debug logs: ``` shell -python -m autogpt --debug +./run.sh --debug ``` diff --git a/mkdocs.yml b/mkdocs.yml index 0b743e915e97..856a9d621201 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -1,20 +1,27 @@ site_name: Auto-GPT -site_url: https://github.com/Significant-Gravitas/Auto-GPT +site_url: https://significantgravitas.github.io/Auto-GPT/ repo_url: https://github.com/Significant-Gravitas/Auto-GPT nav: - - Home: index.md - - Installation: installation.md - - Usage: usage.md - - Plugins: plugins.md - - Testing: testing.md - - Configuration: - - Search: configuration/search.md - - Memory: configuration/memory.md - - Voice: configuration/voice.md - - Image Generation: configuration/imagegen.md + - Home: index.md + - Setup: setup.md + - Usage: usage.md + - Plugins: plugins.md + - Configuration: + - Search: configuration/search.md + - Memory: configuration/memory.md + - Voice: configuration/voice.md + - Image Generation: configuration/imagegen.md + - Contributing: + - Contribution guide: contributing.md + - Running tests: testing.md - Code of Conduct: code-of-conduct.md - - Contributing: contributing.md - - License: LICENSE + + - License: https://github.com/Significant-Gravitas/Auto-GPT/blob/master/LICENSE theme: readthedocs + +markdown_extensions: + admonition: + codehilite: + pymdownx.keys: diff --git a/requirements.txt b/requirements.txt index b7df2636551a..98530511314c 100644 --- a/requirements.txt +++ b/requirements.txt @@ -34,6 +34,7 @@ isort gitpython==3.1.31 auto-gpt-plugin-template mkdocs +pymdown-extensions # OpenAI and Generic plugins import