This project implements a data processing pipeline that analyzes customer feedback with a Large Language Model (LLM), evaluates the results, and presents the findings on a simple web page. It involves a Python script for data processing, crafting effective prompts for LLM integration, a sentiment analysis evaluation, and a user-friendly HTML presentation of the results. By leveraging Temporal, the application gets retries, state management, and task orchestration out of the box, which makes the pipeline more efficient and resilient.
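The pipeline stages can be sketched as plain Python functions. This is a minimal illustration, not the repository's actual code: `classify_sentiment` stands in for the real LLM call (replaced here by a trivial keyword heuristic so the example is self-contained), and the label set positive/negative/neutral is an assumption.

```python
from collections import Counter

def classify_sentiment(feedback_text: str) -> str:
    """Stand-in for the LLM call: a trivial keyword heuristic.

    The real pipeline would send `feedback_text` to an LLM with a
    prompt asking for one of: positive / negative / neutral.
    """
    text = feedback_text.lower()
    if any(w in text for w in ("great", "love", "excellent")):
        return "positive"
    if any(w in text for w in ("bad", "terrible", "hate")):
        return "negative"
    return "neutral"

def evaluate(rows: list[dict]) -> dict:
    """Tally sentiment labels across all feedback rows into fractions."""
    counts = Counter(classify_sentiment(r["feedback_text"]) for r in rows)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

rows = [
    {"feedback_id": 1, "feedback_text": "Great service, love it"},
    {"feedback_id": 2, "feedback_text": "Terrible wait times"},
    {"feedback_id": 3, "feedback_text": "It was fine"},
]
print(evaluate(rows))  # fraction of feedback per sentiment label
```

In the real application each stage would run as a Temporal activity, so a failed LLM call is retried without re-running the whole pipeline.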
- Python 3.x
- Docker (optional)
- Docker Compose (optional)
- Clone the repository:

  ```shell
  git clone https://github.com/Danieloni1/langgraph-feedback-ingestor.git
  cd langgraph-feedback-ingestor/src
  ```

- Install the required libraries using pip:

  ```shell
  pip install -r requirements.txt
  ```
- Start the Temporal server:

  ```shell
  temporal server start-dev
  ```

- Run the worker:

  ```shell
  python worker.py
  ```

- Run the main application:

  ```shell
  python app.py
  ```
- Prepare your CSV file with the following columns:
  - `feedback_id` (integer)
  - `customer_name` (string)
  - `feedback_text` (string)
  - `submission_date` (date)
- After processing, the results will be displayed, a graph image will be saved as `graph.png`, and the evaluation will be saved to `evaluation/evaluation.txt`.
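As a sketch of how such a file could be loaded and validated, here is a small helper using only the standard library. It is illustrative rather than the repository's actual loader, and it assumes `submission_date` uses ISO format (`YYYY-MM-DD`):

```python
import csv
import io
from datetime import date

REQUIRED_COLUMNS = {"feedback_id", "customer_name", "feedback_text", "submission_date"}

def load_feedback(fileobj) -> list[dict]:
    """Read feedback rows, checking the header and coercing column types."""
    reader = csv.DictReader(fileobj)
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"CSV is missing columns: {sorted(missing)}")
    rows = []
    for row in reader:
        rows.append({
            "feedback_id": int(row["feedback_id"]),
            "customer_name": row["customer_name"],
            "feedback_text": row["feedback_text"],
            # assumes ISO-formatted dates, e.g. 2024-01-15
            "submission_date": date.fromisoformat(row["submission_date"]),
        })
    return rows

sample = io.StringIO(
    "feedback_id,customer_name,feedback_text,submission_date\n"
    "1,Alice,Great service,2024-01-15\n"
)
print(load_feedback(sample))
```

Validating the header up front gives a clear error before any rows reach the LLM stage.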
You can also run the application containerized:

- Build and run the application with Docker Compose:

  ```shell
  docker compose up --build
  ```

Then visit `localhost:5001` for the app and `localhost:8080` for the Temporal dashboard.