Merge pull request #1026 from Agenta-AI/sdk_docs
Update LLM app descriptions and workflow in Docs
mmabrouk authored Dec 10, 2023
2 parents 8c351ed + c62a3b0 commit 181e121
Showing 15 changed files with 260 additions and 212 deletions.
42 changes: 42 additions & 0 deletions docs/howto/use-a-custom-llm.mdx
@@ -0,0 +1,42 @@
---
title: How to use a Custom LLM in agenta
description: 'Learn how to write an LLM application that uses a custom LLM'
---

Using a custom LLM in Agenta is straightforward. The process involves writing the code for a custom application using the SDK, which then calls the LLM.

Below is the structure of a custom application that calls a [vLLM-hosted model on an API server](https://docs.vllm.ai/en/latest/getting_started/quickstart.html#api-server):

```python
import json

import agenta as ag
import requests

default_prompt = "Please write a joke about {subject}"

url = "https://<api-server-url>/generate"
ag.config.default(prompt=default_prompt,
                  temperature=0.8)


@ag.entrypoint
def generate(subject: str) -> str:
    prompt = ag.config.prompt.format(subject=subject)
    data = {
        "prompt": prompt,
        "temperature": ag.config.temperature
    }
    response = requests.post(url, data=json.dumps(data))
    return response.json()
```

The above code is a simple LLM app that generates jokes about a given subject, using a vLLM hosted model. It is structured as follows:

`ag.config.default` sets the default values for the configuration of the LLM application. In this example, the default prompt is "Please write a joke about {subject}", and the temperature is set to 0.8.

The `@ag.entrypoint` decorator marks the function that will be called. The function `generate` accepts a subject as input and returns a joke as output. It calls the vLLM hosted model using the requests library.

To call any other LLM, you need to set up the configuration for the LLM (prompt, temperature, etc.) and then call the LLM in the main function.
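
As a rough sketch of that separation, the configuration and request-building steps can be factored out as a plain function. The payload fields below mirror the vLLM example above; the names are illustrative, and the agenta decorators are omitted so the snippet stands alone:

```python
import json

# Default configuration, mirroring what ag.config.default would hold.
default_prompt = "Please write a joke about {subject}"
default_temperature = 0.8

def build_request(subject: str) -> str:
    # Format the configured prompt and serialize the JSON body that the
    # app would POST to the model's API server.
    prompt = default_prompt.format(subject=subject)
    return json.dumps({"prompt": prompt,
                       "temperature": default_temperature})
```

Swapping in another provider then comes down to changing the payload shape and the endpoint, while the prompt and parameters stay in the configuration.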

After writing the code, it can be deployed using the CLI, as described in the [command line reference](/cli/quick-usage). This can be done by running `agenta init` followed by `agenta variant serve app.py` in the code folder.

<Warning> Note that if the LLM is hosted on your local machine and is not accessible from the outside, you will need to [self-host agenta locally](self-host/host-locally) to be able to call the LLM from the LLM app. </Warning>
8 changes: 3 additions & 5 deletions docs/learn/llm_app_architectures.mdx
@@ -1,6 +1,6 @@
---
title: 'LLM App Architectures'
description: 'The different types of LLM applications.'
description: 'The different types of LLM applications (that can be used in agenta).'
---


@@ -20,10 +20,8 @@ In agenta you can [create such LLM apps from the UI](/quickstart/getting-started

The chain-of-prompts architecture, as its name suggests, is based on calling an LLM and then injecting its output into a second call, as shown in the figure.

<div style="text-align:center;">
<img className="dark:hidden" width="300" src="/images/learning/chain-of-prompts_light.png"/>
<img className="hidden dark:block" width="300" src="/images/learning/chain-of-prompts_dark.png" />
</div>
<img className="dark:hidden" width="300" src="/images/learning/chain-of-prompts_light.png"/>
<img className="hidden dark:block" width="300" src="/images/learning/chain-of-prompts_dark.png" />
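
In code, this pattern amounts to feeding one completion into the next prompt. A minimal sketch, with a stub standing in for the real model call so the chaining logic is visible on its own (the function names here are illustrative, not part of any SDK):

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real model call (OpenAI, a vLLM server, etc.); it
    # echoes the prompt so the example runs without network access.
    return f"[model output for: {prompt}]"

def chain_of_prompts(topic: str) -> str:
    # First call: draft an outline for the topic.
    outline = call_llm(f"Write an outline about {topic}")
    # Second call: inject the first output into the follow-up prompt.
    return call_llm(f"Expand this outline into a short article:\n{outline}")
```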

## The Retrieval Augment Generation Architecture

4 changes: 3 additions & 1 deletion docs/learn/the_llmops_workflow.mdx
@@ -6,7 +6,9 @@ description: 'How the best teams build robust LLM applications.'

## The Difference between AI Applications and Traditional Software

Building AI applications powered with Large Language Models is very different than building traditional software. Traditional software is deterministic, running the command in python `1+1` will always return 2. However if we were to create a prompt asking about the result of `1+1` the result could be anything from `2`, `the answer is two`, `two`, or `I am an Large Language Model and I cannot answer mathematical question`.
Building AI applications powered by Large Language Models is very different from building traditional software.

Traditional software is deterministic: running `1+1` in Python will always return `2`. However, if we were to write a prompt asking for the result of `1+1`, the answer could be anything from `2`, `the answer is two`, `two`, or `I am a Large Language Model and I cannot answer mathematical questions`.

The main issue is that at the moment of designing the software, we have no idea how the LLM will respond to the question. The only way to know is to test the prompt.

47 changes: 35 additions & 12 deletions docs/mint.json
@@ -8,9 +8,9 @@
},
"favicon": "/logo/favicon.png",
"colors": {
"primary": "#32bf40",
"light": "#00FF19",
"dark": "#00FF19"
"primary": "#34d399",
"light": "#34d399",
"dark": "#34d399"
},
"modeToggle": {
"default": "dark",
@@ -43,7 +43,7 @@
"url": "https://github.com/agenta-ai/agenta"
},
{
"name": "Book a Free Consulation",
"name": "Book a Demo",
"icon": "phone",
"url": "https://cal.com/mahmoud-mabrouk-ogzgey/demo"
}
@@ -73,26 +73,35 @@
{
"group": "How-to Guides",
"pages": [
"howto/use-a-custom-llm",
"howto/creating-multiple-app-variants",
"howto/how-to-debug"
]
},
{
"group": "Conceptual Guides",
"group": "Learn",
"pages": [
"learn/the_llmops_workflow",
"conceptual/evaluating_llm_apps",
"conceptual/concepts",
"conceptual/architecture"
]
},
{
"group": "Self-host agenta",
"group": "Python SDK",
"pages": [
"self-host/host-locally",
"self-host/host-remotely",
"self-host/host-on-aws",
"self-host/host-on-gcp",
"self-host/host-on-kubernetes"
"sdk/quick_start",
{
"group": "Core Functions",
"pages": [
"sdk/init",
"sdk/config_object",
"sdk/config_default",
"sdk/config_pull",
"sdk/config_push",
"sdk/config_datatypes"
]
}
]
},
{
@@ -111,6 +120,21 @@
}
]
},
{
"group": "Self-host agenta",
"pages": [
"self-host/host-locally",
{
"group": "Deploy Remotely",
"pages": [
"self-host/host-remotely",
"self-host/host-on-aws",
"self-host/host-on-gcp",
"self-host/host-on-kubernetes"
]
}
]
},
{
"group": "Contributing",
"pages": [
@@ -122,7 +146,6 @@
{
"group": "Reference",
"pages": [
"reference/sdk",
{
"group": "Backend API",
"pages": [
2 changes: 1 addition & 1 deletion docs/quickstart/getting-started-code.mdx
@@ -15,7 +15,7 @@ Prefer video tutorial? Watch our 4-minute video [here](https://youtu.be/nggaRwDZ

This guide outlines building a basic LLM app with **langchain** and **agenta**. By the end, you'll have a functional LLM app and knowledge of how to use the agenta CLI.

To learn more about creating an LLM app from scratch, please visit our [advanced tutorial](/tutorials/your-first-llm-app).
To learn more about creating an LLM app from scratch, please visit our [advanced tutorial](/tutorials/first-app-with-langchain).

## Step 0: Installation

15 changes: 10 additions & 5 deletions docs/quickstart/getting-started-ui.mdx
@@ -15,17 +15,20 @@ Welcome! This beginner-friendly guide will walk you through building a simple LL

## Prerequisites

Make sure you have installed the Agenta web platform or are using the cloud version. If you haven't done this yet, follow our [installation guide](/installation).
This guide assumes you are using [agenta cloud](https://cloud.agenta.ai). If you would like to self-host the agenta platform, please refer to [the self-hosting guide](/self-host/host-locally) for deployment instructions.

<Accordion title="In case you are self-hosting agenta">

## Step 0: Add your OpenAI API keys

1. Access the Agenta web platform.
2. Select API keys from the left menu.
1. Open agenta in your web browser.
2. Select "Settings" from the left sidebar.
3. Select "LLM Keys" from the top menu.
4. Add your OpenAI API key.

Your OpenAI API keys are saved locally in your browser and are sent to the Agenta server only when you create a new application (as an environment variable in the Docker container).

<img height="600" src="/images/getting-started-ui-screenshots/00_provide_openapi_token.png" />
</Accordion >

## Step 1: Create a New LLM App

@@ -47,7 +50,9 @@ The provided application template already includes a prompt that states the capi
<img height="600" src="/images/getting-started-ui-screenshots/03_playground.png" />

## Step 3: Deploy the application
The application was deployed as an API the moment you created it. You can find the API endpoint in the "Endpoints" menu. Copy and paste the code from the "Endpoints" menu to use it in your software.
To deploy the application as an API, click the "Publish" button in the playground and select an environment.

You can now find the API endpoint in the "Endpoints" menu. Copy and paste the code from the "Endpoints" menu to use it in your software.

<img height="600" src="/images/getting-started-ui-screenshots/06_deployment.png" />

56 changes: 4 additions & 52 deletions docs/quickstart/introduction.mdx
@@ -19,38 +19,11 @@ With agenta, you can:
Agenta focuses on increasing the speed of the development cycle of LLM applications by increasing the speed of iteration.

Agenta integrates with all frameworks and model providers in the ecosystem, such as [Langchain](https://langchain.com), [LlamaIndex](https://www.llamaindex.ai/), [OpenAI](https://openai.com), [Cohere](https://cohere.ai), [Mistral](https://mistral.ai/), [Huggingface](https://huggingface.co/), and self-hosted open-source LLMs such as those served using [vLLM](https://github.com/vllm-project/vllm).
## Overview of agenta

<img className="dark:hidden" height="600" src="/images/agenta_highlevel_whitebg.png" />
<img className="hidden dark:block" height="600" src="/images/agenta_highlevel_blackbg.png" />

## Why use agenta?

- If you need to **collaborate with domain experts** and want their feedback on your LLM apps, as well as their help experimenting with prompts and parameters without having to modify your code.
- If you want the flexibility of **using code for writing LLM app**, without being restricted by libraries, models, or frameworks.
- If you need to **save, version, and compare** different variants of your LLM apps **on your own data**.
- If you need a systematic way to **programmatically evaluate your LLM apps**.
- If you **care about your data privacy** and do not want to be proxied through third-party services.

## Features

- **Parameter Playground:** Define your app's parameters within your code and experiment with them through a user-friendly web platform.
- **Test Sets:** Build test sets using the UI, by uploading CSVs, or by connecting to your own data via our API.
- **Evaluation:** Evaluate your app on your test sets using different strategies (e.g., exact match, AI Critic, human evaluation, etc.).
- **Deployment:** Deploy your app as an API in just one click.
- **Collaboration:** Share your app with collaborators and receive feedback on it.


## Getting Started

<CardGroup cols={2}>
<Card
title="Installation"
icon="screwdriver-wrench"
href="/installation"
color="#FF5733">
Install Agenta on your local machine to get started.
</Card>

<Card
title="Getting Started from the UI"
@@ -67,43 +40,22 @@

Build your first LLM apps from the UI in under 2 minutes.
</Card>
<Card
title="Try for Free"
icon="play"
color="#FF33F6"
href="https://cloud.agenta.ai">
Dive in to explore the platform.
</Card>

</CardGroup>

## Getting Help

If you have questions or need support, here's how you can reach us. We'd also ❤️ your support.

<CardGroup>
<Card title="Schedule an onboarding call" icon="phone" href="https://cal.com/mahmoud-mabrouk-ogzgey/demo">
Book a call with a founder for one-on-one guidance on using agenta.
</Card>

<Card
title="Join our Slack"
icon="slack"
href="https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA"
color="#5865F2">
Use the #support channel on Slack to ask questions and get assistance with Agenta.
</Card>
<Card
title="Follow us on Twitter"
icon="twitter"
href="https://twitter.com/agenta_ai"
color="#1DA1F2">
Follow us on Twitter to get the latest updates and news.
</Card>
<Card title="Schedule a call" icon="phone" href="https://cal.com/mahmoud-mabrouk-ogzgey/demo">
Book a call with a founder for one-on-one guidance on building your first LLM app.
</Card>
<Card
title="Give us a star on GitHub"
icon="star"
href="https://github.com/agenta-ai/agenta"
color="#fbbf24">
Find us on Github at agenta-ai/agenta
</Card>
</CardGroup>