Migration: README Steps Improvement #1245

Closed
wants to merge 13 commits into from
Changes from 3 commits
6 changes: 0 additions & 6 deletions agenta-backend/Dockerfile
@@ -18,12 +18,6 @@ RUN touch /app/agenta_backend/__init__.py
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-ansi

# Install git and clone the necessary repository
RUN apt-get update -y \
&& apt-get install -y git \
&& git clone https://github.com/mmabrouk/beanie \
&& cd beanie && pip install .

# remove dummy module
RUN rm -r /app/agenta_backend
EXPOSE 8000
12 changes: 10 additions & 2 deletions agenta-backend/agenta_backend/migrations/README.md
@@ -18,7 +18,7 @@ To access the backend Docker container:

2. **Identify the `agenta-backend` Container ID**: Note down the container ID from the output. Example output:

```
```bash
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ae0c56933636 agenta-backend "uvicorn agenta_back…" 3 hours ago Up 3 hours 8000/tcp agenta-backend-1
e35f6c8b7fcb agenta-agenta-web "docker-entrypoint.s…" 3 hours ago Up 3 hours 0.0.0.0:3000->3000/tcp agenta-agenta-web-1
@@ -30,6 +30,12 @@ To access the backend Docker container:
docker exec -it CONTAINER_ID bash
```

4. **Install the Required Beanie Version (temporary step)**: Run the following command:

```bash
sh install_forked_beanie.sh
```
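As an aside, the container ID from step 2 can also be picked out programmatically instead of being copied by hand. The sketch below is not part of this PR; the helper name and parsing approach are assumptions. It relies on NAMES being the last column of `docker ps` output:

```python
# Hypothetical helper: extract the agenta-backend container ID from
# `docker ps` output. The sample mirrors the table shown in step 2.
def find_container_id(ps_output: str, name_fragment: str) -> str:
    """Return the ID of the first container whose NAMES column contains name_fragment."""
    for line in ps_output.splitlines()[1:]:  # skip the header row
        columns = line.split()
        if columns and name_fragment in columns[-1]:  # NAMES is the last column
            return columns[0]  # CONTAINER ID is the first column
    raise ValueError(f"no container matching {name_fragment!r}")

sample = """CONTAINER ID   IMAGE               COMMAND   NAMES
ae0c56933636   agenta-backend      uvicorn   agenta-backend-1
e35f6c8b7fcb   agenta-agenta-web   entry     agenta-agenta-web-1"""

print(find_container_id(sample, "agenta-backend"))  # ae0c56933636
```

The real output could then be piped in with `docker ps | python find_id.py`, or the whole lookup replaced by `docker ps --filter name=agenta-backend --format '{{.ID}}'`.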

### Performing the Migration

To perform the database migration:
@@ -39,13 +45,15 @@ To perform the database migration:
```sh
cd agenta_backend/migrations/{migration_name}
```

Replace `{migration_name}` with the actual migration name, e.g., `17_01_24_pydantic_and_evaluations`.

2. **Run Beanie Migration**: Execute the migration command:

```sh
beanie migrate --no-use-transaction -uri 'mongodb://username:password@mongo' -db 'agenta_v2' -p .
```

   Be sure to replace `username`, `password`, and the other placeholders with actual values.
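One detail worth noting when substituting real credentials: characters such as `@` or `:` in the username or password must be percent-encoded before being embedded in the connection string. A minimal sketch, assuming such characters may occur (the helper name is hypothetical):

```python
# Build a MongoDB connection string with percent-encoded credentials,
# suitable for passing to `beanie migrate -uri ...`.
from urllib.parse import quote_plus

def mongo_uri(username: str, password: str, host: str = "mongo") -> str:
    return f"mongodb://{quote_plus(username)}:{quote_plus(password)}@{host}"

print(mongo_uri("admin", "p@ss:word"))  # mongodb://admin:p%40ss%3Aword@mongo
```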

Follow these steps for a successful database migration in your Agenta backend system.
@@ -0,0 +1,7 @@
#!/bin/sh

# Install git and clone the necessary repository
apt-get update -y \
&& apt-get install -y git \
&& git clone https://github.com/mmabrouk/beanie /app/beanie \
&& cd /app/beanie && pip install .
@@ -322,8 +322,6 @@ def auto_ai_critique_evaluator_config():
"settings_values": {
"open_ai_key": OPEN_AI_KEY,
"temperature": 0.9,
"evaluation_prompt_template": "We have an LLM App that we want to evaluate its outputs. Based on the prompt and the parameters provided below evaluate the output based on the evaluation strategy below: Evaluation strategy: 0 to 10 0 is very bad and 10 is very good. Prompt: {llm_app_prompt_template} Inputs: country: {country} Correct Answer:{correct_answer} Evaluate this: {variant_output} Answer ONLY with one of the given grading or evaluation options.",
"llm_app_prompt_template": "",
"llm_app_inputs": [{"input_name": "country", "input_value": "tunisia"}],
"prompt_template": "We have an LLM App that we want to evaluate its outputs. Based on the prompt and the parameters provided below evaluate the output based on the evaluation strategy below: Evaluation strategy: 0 to 10 0 is very bad and 10 is very good. Prompt: {llm_app_prompt_template} Inputs: country: {country} Correct Answer:{correct_answer} Evaluate this: {variant_output} Answer ONLY with one of the given grading or evaluation options.",
},
}
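For context, the `prompt_template` kept in the config above uses standard `str.format` placeholders (`{country}`, `{correct_answer}`, `{variant_output}`, ...). The following is an illustrative sketch of how such a template might be rendered at evaluation time; the values are invented and this is not the backend's actual rendering code:

```python
# Render an evaluation prompt template by filling its placeholders.
# A shortened template is used here for readability.
template = (
    "Prompt: {llm_app_prompt_template} Inputs: country: {country} "
    "Correct Answer:{correct_answer} Evaluate this: {variant_output}"
)
rendered = template.format(
    llm_app_prompt_template="What is the capital of {country}?",
    country="tunisia",
    correct_answer="Tunis",
    variant_output="The capital of Tunisia is Tunis.",
)
print(rendered)
```

Note that `str.format` performs a single substitution pass, so braces inside a substituted value (the `{country}` inside the app prompt) remain literal rather than being re-expanded.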
@@ -6,7 +6,6 @@
from agenta_backend.models.api.evaluation_model import EvaluationStatusEnum
from agenta_backend.models.db_models import (
AppDB,
ConfigDB,
TestSetDB,
AppVariantDB,
EvaluationDB,
@@ -23,6 +22,7 @@
# Set global variables
APP_NAME = "evaluation_in_backend"
ENVIRONMENT = os.environ.get("ENVIRONMENT")
OPEN_AI_KEY = os.environ.get("OPENAI_API_KEY")
if ENVIRONMENT == "development":
BACKEND_API_HOST = "http://host.docker.internal/api"
elif ENVIRONMENT == "github":
@@ -178,6 +178,7 @@ async def test_create_evaluation():
"variant_ids": [str(app_variant.id)],
"evaluators_configs": [],
"testset_id": str(testset.id),
"lm_providers_keys": {"openai": OPEN_AI_KEY},
"rate_limit": {
"batch_size": 10,
"max_retries": 3,