From 76f1f874a8ac9dae0c0dd8000d9d4380ca324ed9 Mon Sep 17 00:00:00 2001 From: Mahmoud Mabrouk Date: Wed, 20 Nov 2024 12:20:13 +0100 Subject: [PATCH 1/9] Added UTM to links --- README.md | 51 +++++++++++++++++++++++++-------------------------- 1 file changed, 25 insertions(+), 26 deletions(-) diff --git a/README.md b/README.md index 592eb69be3..8fccf4c768 100644 --- a/README.md +++ b/README.md @@ -1,18 +1,17 @@ -
- -
+

+ - - + + Shows the logo of agenta -

-

- Home Page | - Slack | - Documentation -

+

+

+ Documentation | + Website | + Slack +

Collaborate on prompts, evaluate, and deploy LLM applications with confidence

The open-source LLM developer platform for prompt-engineering, evaluation, human feedback, and deployment of complex LLM apps. @@ -20,7 +19,7 @@

MIT license. - + Doc @@ -51,7 +50,7 @@
- 
+ 
@@ -91,12 +90,12 @@ Agenta allows developers and product teams to collaborate in building production

### With Agenta, you can:

-- [🧪 **Experiment** and **compare** prompts](https://docs.agenta.ai/prompt_management/prompt_engineering) on [any LLM workflow](https://docs.agenta.ai/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
-- ✍️ Collect and [**annotate golden test sets**](https://docs.agenta.ai/evaluation/test_sets) for evaluation
-- 📈 [**Evaluate** your application](https://docs.agenta.ai/evaluation/automatic_evaluation) with pre-existing or [**custom evaluators**](https://docs.agenta.ai/evaluation/custom_evaluator)
-- [🔍 **Annotate** and **A/B test**](https://docs.agenta.ai/evaluation/human_evaluation) your applications with **human feedback**
-- [🤝 **Collaborate with product teams**](https://docs.agenta.ai/misc/team_management) for prompt engineering and evaluation
-- [🚀 **Deploy your application**](https://docs.agenta.ai/prompt_management/deployment) in one-click in the UI, through CLI, or through github workflows.
+- [🧪 **Experiment** and **compare** prompts](https://docs.agenta.ai/prompt_management/prompt_engineering?utm_source=github&utm_medium=referral&utm_campaign=readme) on [any LLM workflow](https://docs.agenta.ai/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
+- โœ๏ธ Collect and [**annotate golden test sets**](https://docs.agenta.ai/evaluation/test_sets?utm_source=github&utm_medium=referral&utm_campaign=readme) for evaluation +- ๐Ÿ“ˆ [**Evaluate** your application](https://docs.agenta.ai/evaluation/automatic_evaluation?utm_source=github&utm_medium=referral&utm_campaign=readme) with pre-existing or [**custom evaluators**](https://docs.agenta.ai/evaluation/custom_evaluator?utm_source=github&utm_medium=referral&utm_campaign=readme) +- [๐Ÿ” **Annotate** and **A/B test**](https://docs.agenta.ai/evaluation/human_evaluation?utm_source=github&utm_medium=referral&utm_campaign=readme) your applications with **human feedback** +- [๐Ÿค **Collaborate with product teams**](https://docs.agenta.ai/misc/team_management?utm_source=github&utm_medium=referral&utm_campaign=readme) for prompt engineering and evaluation +- [๐Ÿš€ **Deploy your application**](https://docs.agenta.ai/prompt_management/deployment?utm_source=github&utm_medium=referral&utm_campaign=readme) in one-click in the UI, through CLI, or through github workflows. 
### Works with any LLM app workflow @@ -110,17 +109,17 @@ It works with any framework such as [Langchain](https://www.langchain.com/), [Ll # Quick Start -### [Get started for free](https://cloud.agenta.ai?utm_source=github&utm_medium=readme&utm_campaign=github) +### [Get started for free](https://cloud.agenta.ai?utm_source=github&utm_medium=referral&utm_campaign=readme) -### [Explore the Docs](https://docs.agenta.ai/) +### [Explore the Docs](https://docs.agenta.ai?utm_source=github&utm_medium=referral&utm_campaign=readme) -### [Create your first application in one-minute](https://docs.agenta.ai/getting_started/quick-start) +### [Create your first application in one-minute](https://docs.agenta.ai/getting_started/quick-start?utm_source=github&utm_medium=referral&utm_campaign=readme) -### [Create an application using Langchain](https://docs.agenta.ai/guides/tutorials/first-app-with-langchain) +### [Create an application using Langchain](https://docs.agenta.ai/guides/tutorials/first-app-with-langchain?utm_source=github&utm_medium=referral&utm_campaign=readme) -### [Self-host agenta](https://docs.agenta.ai/self-host/host-locally) +### [Self-host agenta](https://docs.agenta.ai/self-host/host-locally?utm_source=github&utm_medium=referral&utm_campaign=readme) -### [Check the Cookbook](https://docs.agenta.ai/guides/cookbooks/evaluations_with_sdk) +### [Check the Cookbook](https://docs.agenta.ai/guides/cookbooks/evaluations_with_sdk?utm_source=github&utm_medium=referral&utm_campaign=readme) # Features @@ -156,7 +155,7 @@ We warmly welcome contributions to Agenta. Feel free to submit issues, fork the We are usually hanging in our Slack. Feel free to [join our Slack and ask us anything](https://join.slack.com/t/agenta-hq/shared_invite/zt-1zsafop5i-Y7~ZySbhRZvKVPV5DO_7IA) -Check out our [Contributing Guide](https://docs.agenta.ai/misc/contributing/getting-started) for more information. 
+Check out our [Contributing Guide](https://docs.agenta.ai/misc/contributing/getting-started?utm_source=github&utm_medium=referral&utm_campaign=readme) for more information.

## Contributors ✨

From 0fec23463bd83e15f7ffbba6c5234b5f7ecab4ba Mon Sep 17 00:00:00 2001
From: Mahmoud Mabrouk
Date: Wed, 20 Nov 2024 12:21:04 +0100
Subject: [PATCH 2/9] Remove job

---
 README.md | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/README.md b/README.md
index 8fccf4c768..423c9fc1ef 100644
--- a/README.md
+++ b/README.md
@@ -145,9 +145,6 @@ To disable anonymized telemetry, follow these steps:
 
 After making this change, restart Agenta Compose.
 
-# ⭐️ Join Our Team
-
-- [Founding Product Engineer Frontend](https://agentaai.notion.site/Founding-Product-Engineer-Frontend-b6d26a3e9b254be6b6c2bfffbf0b53c5)
 
 # Contributing

From 5de12d2ce092f600f68e36c5122584dcf5924dad Mon Sep 17 00:00:00 2001
From: Mahmoud Mabrouk
Date: Wed, 20 Nov 2024 13:54:32 +0100
Subject: [PATCH 3/9] Fixed button and header

---
 README.md | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/README.md b/README.md
index 423c9fc1ef..958dbbb826 100644
--- a/README.md
+++ b/README.md
@@ -13,8 +13,8 @@ Slack

-

Collaborate on prompts, evaluate, and deploy LLM applications with confidence

- The open-source LLM developer platform for prompt-engineering, evaluation, human feedback, and deployment of complex LLM apps. +

The Open-source LLMOps Platform

+ Prompt playground, prompt management, evaluation, and observability

@@ -48,21 +48,25 @@

-
- - - - - - - -

-
-
+

+ - Glamour Shot + + + Try Agenta Live Demo + +

+ +
+
+
+ + + Glamour Shot + +

From 6d6483b63cf96635d226c36c0008636be25cefc2 Mon Sep 17 00:00:00 2001
From: Mahmoud Mabrouk
Date: Wed, 20 Nov 2024 14:30:21 +0100
Subject: [PATCH 4/9] Checkpoint

---
 README.md | 49 +++++++++++++++++++++++++++++++++++++------------
 1 file changed, 37 insertions(+), 12 deletions(-)

diff --git a/README.md b/README.md
index 958dbbb826..012dd01558 100644
--- a/README.md
+++ b/README.md
@@ -86,12 +86,46 @@
 
 ---
 
-# ⭐️ Why Agenta?
+# What is Agenta?
 
-Agenta is an end-to-end LLM developer platform. It provides the tools for **prompt engineering and management**, ⚖️ **evaluation**, **human annotation**, and :rocket: **deployment**. All without imposing any restrictions on your choice of framework, library, or model.
+Agenta is an LLM developer platform that helps teams quickly build and refine reliable LLM applications.
 
-Agenta allows developers and product teams to collaborate in building production-grade LLM-powered applications in less time.
+Agenta is end-to-end, it provides all the tools around the LLMOps workflow: From building (**LLM playground**, **automatic and human evaluation**) to deploying (**prompt and configuration management**) up to monitoring (**LLM Observability and tracing**)
 
+# Features
+- Prompt Playground
+- Custom Workflows
+- LLM evaluation
+- Human evaluation
+- Prompt Management
+- LLM Tracing
+- LLM Monitoring
+
+
+# Why choose Agenta?
+- Strong focus on enabling collaboration between developers and subject matter experts. Subject matter experts are first class citizens. This means a strong playground for prompt engineering
+- Strong focus on the prompt engineering workflow: we are working to enable the best playground to iterate quickly on the prompts
+- Strong focus on evaluation: Our evaluation workflow is best in class. It comes with many evaluators out of the box and strong comparison views
+- Open-telemetry native Observability SDK: means ...
+ +# Getting Started +## Agenta Cloud: +The easiest way to get started is through Agenta Cloud. It is free to signup, does not require credit card, and comes with a generous free-tier. + + + + + + Get Started with Agenta Cloud + + + +## Self-host: +``` +mkdir agenta && cd agenta +curl -L https://raw.githubusercontent.com/agenta-ai/agenta/main/docker-compose.gh.yml -o docker-compose.gh.yml +docker compose -f docker-compose.gh.yml up -d +``` ### With Agenta, you can: - [๐Ÿงช **Experiment** and **compare** prompts](https://docs.agenta.ai/prompt_management/prompt_engineering?utm_source=github&utm_medium=referral&utm_campaign=readme) on [any LLM workflow](https://docs.agenta.ai/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...) @@ -101,15 +135,6 @@ Agenta allows developers and product teams to collaborate in building production - [๐Ÿค **Collaborate with product teams**](https://docs.agenta.ai/misc/team_management?utm_source=github&utm_medium=referral&utm_campaign=readme) for prompt engineering and evaluation - [๐Ÿš€ **Deploy your application**](https://docs.agenta.ai/prompt_management/deployment?utm_source=github&utm_medium=referral&utm_campaign=readme) in one-click in the UI, through CLI, or through github workflows. -### Works with any LLM app workflow - -Agenta enables prompt engineering and evaluation on any LLM app architecture: - -- Chain of prompts -- RAG -- Agents - -It works with any framework such as [Langchain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/) and any LLM provider (openAI, Cohere, Mistral). 
# Quick Start From aaf73329801ada5475cd5bd6e494e9ed787931e4 Mon Sep 17 00:00:00 2001 From: Mahmoud Mabrouk Date: Wed, 20 Nov 2024 17:14:20 +0100 Subject: [PATCH 5/9] Update README.md --- README.md | 83 ++++++++++++------------------------------------------- 1 file changed, 18 insertions(+), 65 deletions(-) diff --git a/README.md b/README.md index 012dd01558..374f54cf58 100644 --- a/README.md +++ b/README.md @@ -64,7 +64,7 @@ @@ -75,42 +75,35 @@ ---

- Quick Start • - Features • - Documentation • - Enterprise • - Roadmap • - Join Our Slack • - Contributing + Documentation • + Changelog • + Website • + Agenta Cloud +

---

# What is Agenta?

-Agenta is an LLM developer platform that helps teams quickly build and refine reliable LLM applications.
+Agenta is a platform for building production-grade LLM applications. It helps **engineering and product teams** create reliable LLM apps faster. 
+

-Agenta is end-to-end, it provides all the tools around the LLMOps workflow: From building (**LLM playground**, **automatic and human evaluation**) to deploying (**prompt and configuration management**) up to monitoring (**LLM Observability and tracing**)
+Agenta provides end-to-end tools for the entire LLMOps workflow: building (**LLM playground**, **evaluation**), deploying (**prompt and configuration management**), and monitoring (**LLM observability and tracing**).

# Features
-- Prompt Playground
-- Custom Workflows
-- LLM evaluation
-- Human evaluation
-- Prompt Management
-- LLM Tracing
-- LLM Monitoring
+- **Prompt Playground**: Experiment, iterate on prompts, and compare outputs from over 50 LLM models side by side ([documentation](https://docs.agenta.ai/prompt-management/using-the-playground?utm_source=github&utm_medium=referral&utm_campaign=readme))
+- **Custom Workflows**: Build a playground for any custom LLM workflow, such as RAG or agents. Enable all the team to easily iterate on its parameters and evaluate it from the web UI.
+- **LLM evaluation**: Run evaluation suite from the webUI using predefined evaluators like LLM-as-a-judge, RAG evaluators, or custom code evaluators. ([documentation](https://docs.agenta.ai/evaluation/overview?utm_source=github&utm_medium=referral&utm_campaign=readme))
+- **Human evaluation**: Collaborate with subject matter experts for human annotation evaluation, including A/B testing and annotating golden test sets.
+- **Prompt Management**: Version your prompts and manage them across different environments ([Documentation](https://docs.agenta.ai/prompt-management/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [Quick start](https://docs.agenta.ai/prompt-management/quick-start?utm_source=github&utm_medium=referral&utm_campaign=readme))
+- **LLM Tracing**: Observe and debug your apps with integrations to most provider and frameworks ([Documentation](https://docs.agenta.ai/observability/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [Quick start](https://docs.agenta.ai/observability/quickstart?utm_source=github&utm_medium=referral&utm_campaign=readme))
+- **LLM Monitoring**: Track cost and latency and compare different deployments.

-# Why choose Agenta?
-- Strong focus on enabling collaboration between developers and subject matter experts. Subject matter experts are first class citizens. This means a strong playground for prompt engineering
-- Strong focus on the prompt engineering workflow: we are working to enable the best playground to iterate quickly on the prompts
-- Strong focus on evaluation: Our evaluation workflow is best in class. It comes with many evaluators out of the box and strong comparison views
-- Open-telemetry native Observability SDK: means ...
-
 # Getting Started
 ## Agenta Cloud:
-The easiest way to get started is through Agenta Cloud. It is free to signup, does not require credit card, and comes with a generous free-tier.
+The easiest way to get started is through Agenta Cloud. It is free to sign up and comes with a generous free tier.
@@ -126,54 +119,14 @@ mkdir agenta && cd agenta curl -L https://raw.githubusercontent.com/agenta-ai/agenta/main/docker-compose.gh.yml -o docker-compose.gh.yml docker compose -f docker-compose.gh.yml up -d ``` -### With Agenta, you can: - -- [๐Ÿงช **Experiment** and **compare** prompts](https://docs.agenta.ai/prompt_management/prompt_engineering?utm_source=github&utm_medium=referral&utm_campaign=readme) on [any LLM workflow](https://docs.agenta.ai/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...) -- โœ๏ธ Collect and [**annotate golden test sets**](https://docs.agenta.ai/evaluation/test_sets?utm_source=github&utm_medium=referral&utm_campaign=readme) for evaluation -- ๐Ÿ“ˆ [**Evaluate** your application](https://docs.agenta.ai/evaluation/automatic_evaluation?utm_source=github&utm_medium=referral&utm_campaign=readme) with pre-existing or [**custom evaluators**](https://docs.agenta.ai/evaluation/custom_evaluator?utm_source=github&utm_medium=referral&utm_campaign=readme) -- [๐Ÿ” **Annotate** and **A/B test**](https://docs.agenta.ai/evaluation/human_evaluation?utm_source=github&utm_medium=referral&utm_campaign=readme) your applications with **human feedback** -- [๐Ÿค **Collaborate with product teams**](https://docs.agenta.ai/misc/team_management?utm_source=github&utm_medium=referral&utm_campaign=readme) for prompt engineering and evaluation -- [๐Ÿš€ **Deploy your application**](https://docs.agenta.ai/prompt_management/deployment?utm_source=github&utm_medium=referral&utm_campaign=readme) in one-click in the UI, through CLI, or through github workflows. 
- - -# Quick Start - -### [Get started for free](https://cloud.agenta.ai?utm_source=github&utm_medium=referral&utm_campaign=readme) - -### [Explore the Docs](https://docs.agenta.ai?utm_source=github&utm_medium=referral&utm_campaign=readme) - -### [Create your first application in one-minute](https://docs.agenta.ai/getting_started/quick-start?utm_source=github&utm_medium=referral&utm_campaign=readme) - -### [Create an application using Langchain](https://docs.agenta.ai/guides/tutorials/first-app-with-langchain?utm_source=github&utm_medium=referral&utm_campaign=readme) - -### [Self-host agenta](https://docs.agenta.ai/self-host/host-locally?utm_source=github&utm_medium=referral&utm_campaign=readme) - -### [Check the Cookbook](https://docs.agenta.ai/guides/cookbooks/evaluations_with_sdk?utm_source=github&utm_medium=referral&utm_campaign=readme) - -# Features - -| Playground | Evaluation | -| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Compare and version prompts for any LLM app, from single prompt to agents.
Book us # Disabling Anonymized Tracking -By default, Agenta automatically reports anonymized basic usage statistics. This helps us understand how Agenta is used and track its overall usage and growth. This data does not include any sensitive information. - -To disable anonymized telemetry, follow these steps: +By default, Agenta automatically reports anonymized basic usage statistics. This helps us understand how Agenta is used and track its overall usage and growth. This data does not include any sensitive information. To disable anonymized telemetry, follow these steps: - For web: Set `TELEMETRY_TRACKING_ENABLED` to `false` in your `agenta-web/.env` file. - For CLI: Set `telemetry_tracking_enabled` to `false` in your `~/.agenta/config.toml` file. -After making this change, restart Agenta Compose. - # Contributing From bcebedb531bcfc98a090da88253bfd60a92037e4 Mon Sep 17 00:00:00 2001 From: Mahmoud Mabrouk Date: Wed, 20 Nov 2024 17:15:27 +0100 Subject: [PATCH 6/9] Update README.md --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 374f54cf58..e9d5afc65b 100644 --- a/README.md +++ b/README.md @@ -94,10 +94,10 @@ Agenta provides end-to-end tools for the entire LLMOps workflow: building (**LL # Features - **Prompt Playground**: Experiment, iterate on prompts, and compare outputs from over 50 LLM models side by side ([documentation](https://docs.agenta.ai/prompt-management/using-the-playground?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **Custom Workflows**: Build a playground for any custom LLM workflow, such as RAG or agents. Enable all the team to easily iterate on its parameters and evaluate it from the web UI. -- **LLM evaluation**: Run evaluation suite from the webUI using predefined evaluators like LLM-as-a-judge, RAG evaluators, or custom code evaluators. 
([documentation](https://docs.agenta.ai/evaluation/overview?utm_source=github&utm_medium=referral&utm_campaign=readme)) +- **LLM evaluation**: Run evaluation suite from the webUI using predefined evaluators like LLM-as-a-judge, RAG evaluators, or custom code evaluators. ([docs](https://docs.agenta.ai/evaluation/overview?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **Human evaluation**: Collaborate with subject matter experts for human annotation evaluation, including A/B testing and annotating golden test sets. -- **Prompt Management**: Version your prompts and manage them across different environments ([Documentation](https://docs.agenta.ai/prompt-management/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [Quick start](https://docs.agenta.ai/prompt-management/quick-start?utm_source=github&utm_medium=referral&utm_campaign=readme)) -- **LLM Tracing**: Observe and debug your apps with integrations to most provider and frameworks ([Documentation](https://docs.agenta.ai/observability/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [Quick start](https://docs.agenta.ai/observability/quickstart?utm_source=github&utm_medium=referral&utm_campaign=readme)) +- **Prompt Management**: Version your prompts and manage them across different environments ([docs](https://docs.agenta.ai/prompt-management/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [quick start](https://docs.agenta.ai/prompt-management/quick-start?utm_source=github&utm_medium=referral&utm_campaign=readme)) +- **LLM Tracing**: Observe and debug your apps with integrations to most provider and frameworks ([docs](https://docs.agenta.ai/observability/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [quick start](https://docs.agenta.ai/observability/quickstart?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **LLM Monitoring**: Track cost and latency and compare different deployments. 
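For reference, the telemetry opt-out that the earlier patches keep (and patch 5 condenses into a single paragraph) amounts to one line per interface. An illustrative fragment, with the file paths exactly as the README documents them and the documented default flipped to `false`:

```
# agenta-web/.env  (web UI)
TELEMETRY_TRACKING_ENABLED=false

# ~/.agenta/config.toml  (CLI)
telemetry_tracking_enabled = false
```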
From fb575c1e44fc45f4def8fe2844ac6f54cf84a198 Mon Sep 17 00:00:00 2001 From: Mahmoud Mabrouk Date: Wed, 20 Nov 2024 17:16:09 +0100 Subject: [PATCH 7/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index e9d5afc65b..00d019233e 100644 --- a/README.md +++ b/README.md @@ -92,7 +92,7 @@ Agenta is a platform for building production-grade LLM applications. It helps ** Agenta provides end-to-end tools for the entire LLMOps workflow: building (**LLM playground**, **evaluation**), deploying (**prompt and configuration management**), and monitoring (**LLM observability and tracing**). # Features -- **Prompt Playground**: Experiment, iterate on prompts, and compare outputs from over 50 LLM models side by side ([documentation](https://docs.agenta.ai/prompt-management/using-the-playground?utm_source=github&utm_medium=referral&utm_campaign=readme)) +- **Prompt Playground**: Experiment, iterate on prompts, and compare outputs from over 50 LLM models side by side ([docs](https://docs.agenta.ai/prompt-management/using-the-playground?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **Custom Workflows**: Build a playground for any custom LLM workflow, such as RAG or agents. Enable all the team to easily iterate on its parameters and evaluate it from the web UI. - **LLM evaluation**: Run evaluation suite from the webUI using predefined evaluators like LLM-as-a-judge, RAG evaluators, or custom code evaluators. ([docs](https://docs.agenta.ai/evaluation/overview?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **Human evaluation**: Collaborate with subject matter experts for human annotation evaluation, including A/B testing and annotating golden test sets. 
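The human-evaluation feature described in the list above centers on A/B testing with subject-matter-expert feedback. At its simplest, that workflow reduces to tallying per-comparison annotator preferences between two variants; a minimal stand-alone sketch (the vote data and labels here are invented for illustration, not Agenta's format):

```python
from collections import Counter

# Invented sample: each entry is one annotator's preference between
# prompt variant "A" and variant "B" on the same input, or a tie.
votes = ["A", "B", "A", "A", "tie", "B", "A", "A"]

tally = Counter(votes)
decisive = tally["A"] + tally["B"]  # ties carry no preference signal
win_rate_a = tally["A"] / decisive

print(f"A: {tally['A']}, B: {tally['B']}, ties: {tally['tie']}")
print(f"Variant A preferred in {win_rate_a:.0%} of decisive comparisons")
```

A real evaluation would also track which test-set row each vote belongs to and which annotator cast it, but the aggregate win rate is the headline number an A/B comparison view reports.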
From b535713b00c485b1356d6628f2e8d665ef0af8c4 Mon Sep 17 00:00:00 2001 From: Mahmoud Mabrouk Date: Wed, 20 Nov 2024 17:28:32 +0100 Subject: [PATCH 8/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 00d019233e..77281f96cf 100644 --- a/README.md +++ b/README.md @@ -97,7 +97,7 @@ Agenta provides end-to-end tools for the entire LLMOps workflow: building (**LL - **LLM evaluation**: Run evaluation suite from the webUI using predefined evaluators like LLM-as-a-judge, RAG evaluators, or custom code evaluators. ([docs](https://docs.agenta.ai/evaluation/overview?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **Human evaluation**: Collaborate with subject matter experts for human annotation evaluation, including A/B testing and annotating golden test sets. - **Prompt Management**: Version your prompts and manage them across different environments ([docs](https://docs.agenta.ai/prompt-management/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [quick start](https://docs.agenta.ai/prompt-management/quick-start?utm_source=github&utm_medium=referral&utm_campaign=readme)) -- **LLM Tracing**: Observe and debug your apps with integrations to most provider and frameworks ([docs](https://docs.agenta.ai/observability/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [quick start](https://docs.agenta.ai/observability/quickstart?utm_source=github&utm_medium=referral&utm_campaign=readme)) +- **LLM Tracing**: Observe and debug your apps with integrations to most providers and frameworks ([docs](https://docs.agenta.ai/observability/overview?utm_source=github&utm_medium=referral&utm_campaign=readme), [quick start](https://docs.agenta.ai/observability/quickstart?utm_source=github&utm_medium=referral&utm_campaign=readme)) - **LLM Monitoring**: Track cost and latency and compare different deployments. 
From 035ae60308afff40c61f7aee5c8e5c2a262ca84d Mon Sep 17 00:00:00 2001 From: Mahmoud Mabrouk Date: Wed, 20 Nov 2024 17:36:04 +0100 Subject: [PATCH 9/9] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 77281f96cf..b32e8f506e 100644 --- a/README.md +++ b/README.md @@ -13,7 +13,7 @@ Slack

-

The Open-source LLMOps Platform

+

The Open source LLMOps Platform

Prompt playground, prompt management, evaluation, and observability