Merge pull request #1132 from Agenta-AI/docs_v2
Update Agenta documentation
mmabrouk authored Jan 2, 2024
2 parents 04db72a + ed7a121 commit 8388225
Showing 151 changed files with 876 additions and 267 deletions.
202 changes: 202 additions & 0 deletions docs/advanced_guides/custom_applications.mdx
@@ -0,0 +1,202 @@
---
title: 'Creating Custom Applications'
description: 'Learn how to use your custom application with Agenta'
---

Agenta comes with several pre-built LLM application templates for common use cases, such as single-prompt and chat applications. However, you can also create your own custom application with Agenta: a RAG application, a custom agent, a chain of prompts, or any custom logic.

This guide will show you how to create a custom application and use it with Agenta.

<Tip> We recommend reading ["How does Agenta work"](/developer_guides/how_does_agenta_work) beforehand to familiarize yourself with the main concepts of Agenta. </Tip>

## How to create a custom application in Agenta?

To add your custom application in Agenta, you need to write the application code using the <Tooltip tip="The SDK is currently only available in Python">Agenta SDK</Tooltip>, then add the application to Agenta using the CLI.

The [Agenta SDK](/developer_guides/sdk/quick_start) takes care of specifying the configuration of your application (prompts, model parameters, chunk size, etc.), and integrates it with Agenta. The [Agenta CLI](/developer_guides/cli/quick-usage) takes care of building the application image, deploying it, and exposing it to Agenta.

## Converting an existing application to Agenta

Writing an application for Agenta involves adding a few lines of code to your existing application.

Let's consider the code of a simple application that calls OpenAI to generate a text for a blog post.

```python myapp.py before adding agenta
from openai import OpenAI

client = OpenAI()

def generate(subject: str):
    prompt = "Write a blog post about {subject}"
    formatted_prompt = prompt.format(subject=subject)
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": formatted_prompt}]
    )
    return chat_completion.choices[0].message.content
```

As you can see, the application is a simple function that takes a blog post subject as input, formats the prompt using `str.format`, calls gpt-3.5-turbo with the formatted prompt, and returns its output.

To use the application in Agenta, we need to add a few lines of code. Here is the end result. We will go over each change in detail in the next sections.

```python myapp.py after adding agenta
import agenta as ag
from openai import OpenAI

ag.init()
ag.config.register_default(prompt=ag.TextParam("Write a blog post about {subject}"))
client = OpenAI()

@ag.entrypoint
def generate(subject: str):
    formatted_prompt = ag.config.prompt.format(subject=subject)
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=[{"role": "user", "content": formatted_prompt}]
    )
    return chat_completion.choices[0].message.content
```

Below are the modifications we made to the code:

### Importing and initializing the Agenta SDK
```python
import agenta as ag
ag.init()
```

We added the `ag.init()` call to initialize the SDK. Note that this call is always needed before using any other Agenta function.

### Specifying the default configuration
```python
ag.config.register_default(prompt=ag.TextParam("Write a blog post about {subject}"))
```

Here, we informed Agenta that the configuration for this application is a single parameter of type text, and its default value is `"Write a blog post about {subject}"`.

This tells Agenta how to render the playground for this application. In this case, the playground will have a single text input with the default value `"Write a blog post about {subject}"`.

### Specifying the entrypoint of the application
```python
@ag.entrypoint
def generate(subject: str):
```

We added the `@ag.entrypoint` decorator to the main function of the application. This decorator informs Agenta that this function is the entry point to the application. It converts it (using FastAPI) into an API endpoint, allowing it to be used from the web interface.

### Using the configuration in the application

```python
formatted_prompt = ag.config.prompt.format(subject=subject)
```

Instead of using the variable `prompt` directly, we now use `ag.config.prompt`. This line tells the application to use the value set in Agenta, which here acts as a management system for the app configuration (a prompt management system). This allows you to change the application's configuration from the web interface without modifying the code.

When you call `ag.config.<var_name>`, the Agenta SDK calls the backend and retrieves the value of the variable for the requested variant.
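
For instance, every parameter registered with `register_default` becomes readable as an attribute of `ag.config`. Here is a minimal sketch reusing only the names introduced above (the backend lookup happens transparently inside the SDK):

```python
import agenta as ag

ag.init()
ag.config.register_default(
    prompt=ag.TextParam("Write a blog post about {subject}"),
)

# Reading the parameter returns the value configured for the requested
# variant, or the default registered above if nothing was changed.
current_prompt = ag.config.prompt
print(current_prompt.format(subject="LLM evaluation"))
```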

## Adding the requirements and environment variables

Before serving the application in Agenta using the CLI, we need to add the application's requirements to the requirements.txt file.

```txt requirements.txt
agenta
openai
```

Additionally, we need to add the .env file with any required environment variables. In this case, we need to add the OpenAI API key.

```bash .env
OPENAI_API_KEY=sk-...
```

The Agenta SDK will automatically load the environment variables from the .env file.

Both these files need to be in the same folder as the application code.

## Serving the application

To serve the application, we first need to initialize the project in Agenta. We run the following command in the folder containing the application code and the rest of the files.

```bash
agenta init
```

This command will prompt you to provide the name of the application, the host for Agenta (Agenta cloud), and whether to start from a blank project (yes in this case since we wrote the code) or to populate the folder with a template application (no in this case).

After running this command, you should see a new config.toml file containing the application's configuration in the folder. Additionally, you should see a new empty project in the Agenta web UI.

Now, we can serve the application by running the following command.

```bash
agenta variant serve myapp.py
```

This command will serve the application in Agenta. The application is now added to the Agenta web interface and can be used from there.

<Info> Under the hood, this command will build an image for the application, deploy a container with the image, and expose a REST API to the application which is used by Agenta to communicate with the application. </Info>
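
Purely as an illustration of what this REST API looks like from a client's perspective, here is a sketch of a request to the entrypoint. The URL and payload shape are assumptions, not the documented contract; use the endpoint printed by the CLI.

```python
import requests

# Hypothetical endpoint URL -- replace it with the one printed by
# `agenta variant serve myapp.py`.
ENDPOINT = "http://localhost/myapp/default/generate"

# The entrypoint's parameters (here `subject`) are sent in the request body.
response = requests.post(ENDPOINT, json={"subject": "open-source LLMOps"}, timeout=60)
response.raise_for_status()
print(response.json())
```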

## Using the application in Agenta

The application should now be visible in Agenta. A new application variant is created under the name `<filename>.default`. Variants are always named in the format `<filename>.<variant_name>`, which lets you determine which source code was used to create the application (`<filename>`). When an application is first created, a 'default' configuration is created for it: the configuration specified in the code (via `register_default`).

## Adding other parameters

We are not limited to one configuration parameter in the playground. We can add as many as we'd like. These parameters can be prompts (TextParam), numbers (FloatParam, IntParam), or dropdowns (MultipleChoiceParam). You can read more about the types of parameters in the [parameters](/developer_guides/sdk/config_datatypes) section.

Here is a modified version of the application that adds a new parameter `temperature` to the playground.


```python
import agenta as ag
from openai import OpenAI

ag.init()
ag.config.register_default(
    prompt=ag.TextParam("Write a blog post about {subject}"),
    temperature=ag.FloatParam(0.2),
)
client = OpenAI()

@ag.entrypoint
def generate(subject: str):
    formatted_prompt = ag.config.prompt.format(subject=subject)
    chat_completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=ag.config.temperature,
        messages=[{"role": "user", "content": formatted_prompt}],
    )
    return chat_completion.choices[0].message.content
```

## Where to go from here?

Agenta gives you the flexibility to add any LLM application to the platform, so that you can collaborate on prompt engineering, evaluation, and the management of the application's entire lifecycle, all from one place.

We've merely touched on what Agenta can do. You're not limited to apps that consist of a single file or function: you can create chains of prompts, or even agents. The SDK also allows you to track costs and log traces of your application.

More information about the SDK can be found in the [SDK section in the developer guide](/developer_guides/sdk/quick_start). You can also explore a growing list of templates and tutorials in the [cookbook section](/cookbook/list_templates).

Finally, our team is always ready to assist you with any custom application. Simply reach out to us on [Slack](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA), or [book a call](https://cal.com/mahmoud-mabrouk-ogzgey/demo) to discuss your use case in detail.




<CardGroup cols={2}>

<Card
title="SDK Reference"
icon="stars"
href="/developer_guides/sdk/quick_start"
color="#33FF57">
Learn how to use the SDK to create custom applications.
</Card>

<Card
title="Cookbook"
icon="code"
color="#337BFF"
href="/cookbook/list_templates">
Explore a growing list of templates and tutorials.
</Card>

</CardGroup>
52 changes: 52 additions & 0 deletions docs/advanced_guides/using_agenta_from_cli.mdx
@@ -0,0 +1,52 @@
---
title: 'Using Agenta from CLI'
description: 'Create, experiment, and evaluate your applications all from the CLI'
---

Agenta was designed for use both from the CLI and from the web interface. This guide explains the basics of using Agenta from the CLI. For more details, refer to the [CLI developer guide](/developer_guides/cli/quick-usage).

## Installation
The Agenta CLI can be installed through pip:
```bash
pip install -U agenta
```

## Creating an application
<Steps>
<Step title="Create a new project">

To create an application in Agenta, first, initialize an empty project. Run the following command in <Tooltip tip="If you would like to start somewhere you can clone https://github.com/Agenta-AI/simple_prompt">the folder containing your application code</Tooltip>:

```bash
agenta init
```

This will prompt you for the project name, the Agenta host, and the API key (if using the cloud or enterprise version).


Running `agenta init` creates a blank project in Agenta and generates a config.toml file in that folder, which contains all the information about your project.

</Step>
<Step title="Serve the first app variant">
With the project created, we need to add the first <Tooltip tip="An app variant is one 'version' of an app. It can be created by modifying the code, or the configuration of an existing code base">app variant</Tooltip> to it.
This can be done by running the following command:
```bash
agenta variant serve <filename.py>
```

This will create a new app variant in Agenta under the name `<filename>.default`. Here, `<filename>` is the name of the codebase containing the app logic, while `default` is a default configuration created for that codebase. Each app variant created from the web interface or from the CLI has a name of the format `<codebase_name>.<configuration_name>`.

Running this command will [create a container for the application](/developer_guides/how_does_agenta_work) with a REST API endpoint. This endpoint is what the Agenta web interface uses to communicate with the application.

The CLI will also display the URL of the endpoint, which can be used to test the application.
</Step>
</Steps>

## Adding a new configuration

In addition to the default configuration specified in the code, you can add more configurations to the application from the CLI.

Configurations are specified in TOML files and always named `<codebase_name>.<configuration_name>`.

<Warning> This section is under construction. Please reach out to us on the [Slack](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA) **#support** channel if you need help adding a new configuration from the CLI.</Warning>

5 changes: 5 additions & 0 deletions docs/advanced_guides/using_custom_evaluators.mdx
@@ -0,0 +1,5 @@
---
title: 'Custom Evaluation'
---

<Warning> This page is under construction. Please reach out to us on the [Slack](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA) **#support** channel, [book a call](https://cal.com/mahmoud-mabrouk-ogzgey/demo), or contact us through [email](mailto:[email protected]) if you need help with using custom evaluation.</Warning>
5 changes: 5 additions & 0 deletions docs/basic_guides/automatic_evaluation.mdx
@@ -0,0 +1,5 @@
---
title: 'Automatic Evaluation'
---

<Warning> This page is under construction. Please reach out to us on the [Slack](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA) **#support** channel, [book a call](https://cal.com/mahmoud-mabrouk-ogzgey/demo), or contact us through [email](mailto:[email protected]) if you need help with using automatic evaluation.</Warning>
42 changes: 42 additions & 0 deletions docs/basic_guides/creating_an_app.mdx
@@ -0,0 +1,42 @@
---
title: 'Creating an LLM App'
---

<Note>You can create applications in Agenta either from the web interface or from code. This guide will focus on creating an application using a template from the UI. You can read more about creating a custom application using code [here](/advanced_guides/custom_applications).</Note>

## Step-by-step Guide

1. **Navigate to the main page**: This is where you can create a new application.

<img height="600" className="dark:hidden" src="/images/basic_guides/00_main_page_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/00_main_page_dark.png" />

2. **Choose a template**: Currently, we offer templates for single prompt applications and chat applications.

<img height="600" className="dark:hidden" src="/images/basic_guides/01_choose_template_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/01_choose_template_dark.png" />

### Single Prompt Application

The single prompt application is useful for single-turn LLM applications such as question answering, text generation, entity extraction, and classification.

This template is based on the OpenAI specification and uses both the system prompt and user prompt:

- **System Prompt**: The system prompt is a message written by the developer to guide the model's behavior. It provides a high-level instruction, such as defining the model's role or setting specific goals.

- **User Prompt**: The user message is the message sent to the model by the user.

While you could write the same application using only the user prompt or only the system prompt, it is best to experiment with both approaches. The system prompt is typically reserved for high-level instructions.
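
To make the distinction concrete, here is a minimal sketch of how the two roles map onto an OpenAI chat completion call. The model, instruction, and question below are placeholder values, not what the template uses internally.

```python
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # System prompt: a high-level instruction written by the developer.
        {"role": "system", "content": "You are a geography tutor. Answer in one short sentence."},
        # User prompt: the message sent by the end user.
        {"role": "user", "content": "What is the capital of Australia?"},
    ],
)
print(completion.choices[0].message.content)
```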

<img height="600" className="dark:hidden" src="/images/basic_guides/02_single_prompt_playground_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/02_single_prompt_playground_dark.png" />

### Chat Application
Like the single prompt application, the chat application is based on the OpenAI specification and uses both the system prompt and user prompt, but it is designed for multi-turn applications like chatbots.

<img height="600" className="dark:hidden" src="/images/basic_guides/03_chat_app_playground_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/03_chat_app_playground_dark.png" />

## Next steps

Now that you've created an application, you can learn how to do [prompt engineering in the playground](/basic_guides/prompt_engineering).
40 changes: 40 additions & 0 deletions docs/basic_guides/deployment.mdx
@@ -0,0 +1,40 @@
---
title: 'Deployment'
description: 'Learn how to integrate Agenta with your application'
---

After using the playground to find a good configuration for your application, it's time to deploy the application.
By deploying the application, you can integrate it into your own code with a short code snippet. You can later change the configuration from the UI without having to update the code.


Agenta provides a way to deploy an application to multiple environments: development, staging, and production. Each environment has its own unique configuration.

## Deploying an application to an environment:

1. Navigate to the playground for the variant you want to deploy.
2. Click on 'Publish'. Make sure the variant is saved before deployment.

<img height="600" className="dark:hidden" src="/images/basic_guides/04_save_before_deployment_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/04_save_before_deployment_dark.png" />

3. Select the environment and click on "Publish".

<img height="600" className="dark:hidden" src="/images/basic_guides/05_select_environment_to_publish_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/05_select_environment_to_publish_dark.png" />

The application is now deployed to the chosen environment and is accessible as an API endpoint.

## Accessing the API endpoint:

1. Go to the 'API endpoint' section.
2. Choose the environment where the application was deployed.

<img height="600" className="dark:hidden" src="/images/basic_guides/06_choose_evironment_to_deploy_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/06_choose_evironment_to_deploy_dark.png" />

3. Here, you'll find the application code in various languages, which can be embedded in your application code.

<img height="600" className="dark:hidden" src="/images/basic_guides/07_select_app_code_light.png" />
<img height="600" className="hidden dark:block" src="/images/basic_guides/07_select_app_code_dark.png" />

Additionally, you can use the configuration in your application code without using Agenta's deployment. This can be achieved with the Python SDK or the REST API. More information can be found on the page 'Integrating Agenta with Your Application'.
5 changes: 5 additions & 0 deletions docs/basic_guides/human_evaluation.mdx
@@ -0,0 +1,5 @@
---
title: 'Human Evaluation'
---

<Warning> This page is under construction. Please reach out to us on the [Slack](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA) **#support** channel, [book a call](https://cal.com/mahmoud-mabrouk-ogzgey/demo), or contact us through [email](mailto:[email protected]) if you need help with using human evaluation.</Warning>