Commit

Update main.mdx
mmabrouk authored Apr 24, 2024
1 parent f2d450e commit 92c8100
Showing 1 changed file with 26 additions and 9 deletions.
docs/changelog/main.mdx (35 changes: 26 additions & 9 deletions)
@@ -2,23 +2,40 @@
title: "Changelog"
---

## v0.13.1-5 - Evaluation Speed Increase and Numerous Quality of Life Improvements
*23rd April 2024*

- We've made evaluations 3x faster by batching calls asynchronously (a sketch of the pattern follows this list).
- We've added Groq as a new provider to our playground, along with Llama3.
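
The speedup comes from dispatching evaluation calls concurrently in batches rather than one at a time. Below is a minimal, illustrative sketch of that pattern using plain `asyncio`; `evaluate_row` is a hypothetical stand-in for a single evaluation call and none of this reflects Agenta's actual internals.

```python
import asyncio


async def evaluate_row(row: dict) -> dict:
    """Hypothetical stand-in for one evaluation call (e.g. a single LLM request)."""
    await asyncio.sleep(0.1)  # simulate network latency
    return {"input": row, "score": 1.0}


async def evaluate_testset(rows: list[dict], batch_size: int = 10) -> list[dict]:
    """Run evaluations in concurrent batches instead of one call at a time."""
    results: list[dict] = []
    for i in range(0, len(rows), batch_size):
        batch = rows[i : i + batch_size]
        # All calls in the batch run concurrently; the batch size caps the
        # number of in-flight requests so rate limits stay manageable.
        results.extend(await asyncio.gather(*(evaluate_row(r) for r in batch)))
    return results


if __name__ == "__main__":
    rows = [{"country": c} for c in ["France", "Japan", "Brazil"]]
    print(asyncio.run(evaluate_testset(rows)))
```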

**Bug Fixes**

- Resolved a UI rendering bug in the test set view.
- Fixed incorrect URLs displayed when running the 'agenta variant serve' command.
- Corrected timestamps in the configuration.
- Resolved errors when using the chat template with empty input.
- Fixed the latency format in the evaluation view.
- Added a spinner to the Human Evaluation results table.
- Resolved an issue where the .gitignore file was overwritten when running 'agenta init'.


## v0.13.0 - Observability (beta)
*14th April 2024*

You can now monitor your application usage in production. We've added a new observability feature (currently in beta), which allows you to:

- Monitor cost, latency, and the number of calls to your applications in real-time.
- View the logs of your LLM calls, including inputs, outputs, and the configurations used. You can also add any interesting logs to your test set.
- Trace your more complex LLM applications to understand and debug their internal logic.

All newly created applications now include observability by default. We are working towards a GA version in the coming weeks that will be more scalable and better integrated with your applications, along with tutorials and documentation.

<img height="600" className="dark:hidden" src="/images/changelog/observability_beta_light.png" />
<img height="600" className="hidden dark:block" src="/images/changelog/observability_beta_dark.png" />

Find examples of LLM apps created from code with observability <a href="https://github.com/Agenta-AI/agenta/tree/main/examples/app_with_observability" target="_blank">here</a>.

**Improvements**
- Improved batch processing performance when running evaluations.

## v0.12.6 - Compare latency and costs
*1st April 2024*
