diff --git a/README.md b/README.md
index 592eb69be3..b32e8f506e 100644
--- a/README.md
+++ b/README.md
@@ -1,26 +1,25 @@
-[removed: centered logo image; markup not recoverable]
+[logo image, alt text: "Shows the logo of agenta"]

- Home Page | Slack | Documentation
+ Documentation | Website | Slack

Collaborate on prompts, evaluate, and deploy LLM applications with confidence

- The open-source LLM developer platform for prompt-engineering, evaluation, human feedback, and deployment of complex LLM apps.
+ The Open-Source LLMOps Platform
+ Prompt playground, prompt management, evaluation, and observability

[badges: MIT license, Doc]
@@ -49,21 +48,25 @@

-[removed: grid of screenshot images; markup not recoverable]
+[demo image, alt text: "Glamour Shot"]
+[link: Try Agenta Live Demo]
+[screenshot image, alt text: "Screenshot Agenta"]

@@ -72,83 +75,58 @@
 ---

- Quick Start • Features • Documentation • Enterprise • Roadmap • Join Our Slack • Contributing
+ Documentation • Changelog • Website • Agenta Cloud

 ---

-# ⭐️ Why Agenta?
-
-Agenta is an end-to-end LLM developer platform. It provides the tools for **prompt engineering and management**, ⚖️ **evaluation**, **human annotation**, and :rocket: **deployment**. All without imposing any restrictions on your choice of framework, library, or model.
-
-Agenta allows developers and product teams to collaborate in building production-grade LLM-powered applications in less time.
-
-### With Agenta, you can:
-
-- [🧪 **Experiment** and **compare** prompts](https://docs.agenta.ai/prompt_management/prompt_engineering) on [any LLM workflow](https://docs.agenta.ai/prompt_management/setting_up/custom_applications) (chain-of-prompts, Retrieval Augmented Generation (RAG), LLM agents...)
-- ✍️ Collect and [**annotate golden test sets**](https://docs.agenta.ai/evaluation/test_sets) for evaluation
-- 📈 [**Evaluate** your application](https://docs.agenta.ai/evaluation/automatic_evaluation) with pre-existing or [**custom evaluators**](https://docs.agenta.ai/evaluation/custom_evaluator)
-- [🔍 **Annotate** and **A/B test**](https://docs.agenta.ai/evaluation/human_evaluation) your applications with **human feedback**
-- [🤝 **Collaborate with product teams**](https://docs.agenta.ai/misc/team_management) for prompt engineering and evaluation
-- [🚀 **Deploy your application**](https://docs.agenta.ai/prompt_management/deployment) in one-click in the UI, through CLI, or through github workflows.
+# What is Agenta?

-### Works with any LLM app workflow
+Agenta is a platform for building production-grade LLM applications. It helps **engineering and product teams** create reliable LLM apps faster.

-Agenta enables prompt engineering and evaluation on any LLM app architecture:
-- Chain of prompts
-- RAG
-- Agents
-
-It works with any framework such as [Langchain](https://www.langchain.com/), [LlamaIndex](https://www.llamaindex.ai/) and any LLM provider (openAI, Cohere, Mistral).
-
-# Quick Start
-
-### [Get started for free](https://cloud.agenta.ai?utm_source=github&utm_medium=readme&utm_campaign=github)
-
-### [Explore the Docs](https://docs.agenta.ai/)
-
-### [Create your first application in one-minute](https://docs.agenta.ai/getting_started/quick-start)
-
-### [Create an application using Langchain](https://docs.agenta.ai/guides/tutorials/first-app-with-langchain)
-
-### [Self-host agenta](https://docs.agenta.ai/self-host/host-locally)
-
-### [Check the Cookbook](https://docs.agenta.ai/guides/cookbooks/evaluations_with_sdk)
+Agenta provides end-to-end tools for the entire LLMOps workflow: building (**LLM playground**, **evaluation**), deploying (**prompt and configuration management**), and monitoring (**LLM observability and tracing**).

 # Features
-
-| Playground | Evaluation |
-| --- | --- |
-| Compare and version prompts for any LLM app, from single prompt to agents.
- -# Quick Start - -### [Get started for free](https://cloud.agenta.ai?utm_source=github&utm_medium=readme&utm_campaign=github) - -### [Explore the Docs](https://docs.agenta.ai/) - -### [Create your first application in one-minute](https://docs.agenta.ai/getting_started/quick-start) - -### [Create an application using Langchain](https://docs.agenta.ai/guides/tutorials/first-app-with-langchain) - -### [Self-host agenta](https://docs.agenta.ai/self-host/host-locally) - -### [Check the Cookbook](https://docs.agenta.ai/guides/cookbooks/evaluations_with_sdk) +Agenta provides end-to-end tools for the entire LLMOps workflow: building (**LLM playground**, **evaluation**), deploying (**prompt and configuration management**), and monitoring (**LLM observability and tracing**). # Features - -| Playground | Evaluation | -| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Compare and version prompts for any LLM app, from single prompt to agents.