This application is designed to provide a comprehensive analysis of a given stock ticker. The workflow involves checking the current price of the ticker, searching for relevant articles, filtering for the most relevant articles, and generating a summary report with sentiment analysis. The application is built using FastAPI and Ngrok to run and expose the API endpoint.
llm-stock-demo.mp4
This service runs a self-hosted LLM locally using Ollama. The model used is llama3.
Prerequisite: Ollama installed on your machine
ollama run llama3
Step 1: Install dependencies
cd llm_service
pip install -r requirements.txt
Step 2: Run the script to start the FastAPI server:
Linux
./script.sh
Windows
./script.bat
- Server will run on localhost:8001
This is the main API for the stock analysis, utilizing the Tavily search engine and our own LLM API from the LLM service.
Configure the environment variables listed in .env.example:
TAVILY_API_KEY=""
STOCK_API_KEY=""
LLM_SERVICE_URL=http://127.0.0.1:8001
Visit Twelve Data for the STOCK_API_KEY.
Visit Tavily for the TAVILY_API_KEY.
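As a sketch of how the master agent might read this configuration, the snippet below loads the three variables from the environment with `os.getenv` (the variable names come from .env.example above; the `validate_config` helper is hypothetical, not part of the project):

```python
import os

# Names match the keys listed in .env.example.
TAVILY_API_KEY = os.getenv("TAVILY_API_KEY", "")
STOCK_API_KEY = os.getenv("STOCK_API_KEY", "")
LLM_SERVICE_URL = os.getenv("LLM_SERVICE_URL", "http://127.0.0.1:8001")

def validate_config() -> list:
    """Return the names of required API keys that are still unset."""
    missing = []
    if not TAVILY_API_KEY:
        missing.append("TAVILY_API_KEY")
    if not STOCK_API_KEY:
        missing.append("STOCK_API_KEY")
    return missing
```

Failing fast when `validate_config()` returns a non-empty list avoids confusing errors later in the workflow.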
Step 1: Install dependencies
cd master_agent
pip install -r requirements.txt
Step 2: Run the script to start the FastAPI server:
Linux
./script.sh
Windows
./script.bat
- Server will run on localhost:8000
I use Streamlit for a quick prototype of this application.
Step 1: Install dependencies
cd frontend
pip install -r requirements.txt
Step 2: Run the script to start the Streamlit app:
Linux
./script.sh
Windows
./script.bat
- Price Agent: Uses the Twelve Data API to retrieve the current stock price
- Search Agent: Searches for relevant articles for analysis
- Filter Agent: Filters the articles by relevance score from the search results
- LLM Agent: Uses a RAG technique with the self-hosted llama3 model for sentiment analysis
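The Filter Agent's scoring step can be sketched as a plain function. Tavily search results include a relevance `score` per result; the threshold and `top_k` values here are hypothetical choices, not the project's actual settings:

```python
def filter_articles(results, min_score=0.5, top_k=3):
    """Keep the highest-scoring articles above a threshold (illustrative values)."""
    relevant = [r for r in results if r.get("score", 0) >= min_score]
    return sorted(relevant, key=lambda r: r["score"], reverse=True)[:top_k]

# Hypothetical Tavily-style results for illustration.
sample = [
    {"title": "AAPL earnings beat", "score": 0.92},
    {"title": "Unrelated piece", "score": 0.21},
    {"title": "Apple supply chain", "score": 0.77},
]
top = filter_articles(sample)  # drops the low-score result, keeps the rest sorted
```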
The Text Analysis API allows users to send a text message and receive a generated analysis based on the llama3 model from Ollama.
curl -X POST "http://127.0.0.1:8001/generate" -H "Content-Type: application/json" -d '{"content": "Analyze this text"}'
import requests

url = "http://127.0.0.1:8001/generate"
res = requests.post(url, json={"content": "Hi, how are you today?"})
print(res.json())
Run the workflow to retrieve the stock analysis utilizing the llama3 service and RAG technique
curl -X 'GET' \
  'http://127.0.0.1:8000/ticker?ticker=AAPL' \
  -H 'accept: application/json'
import requests
url = 'http://127.0.0.1:8000/ticker'
params = {'ticker': 'AAPL'}
headers = {'accept': 'application/json'}
response = requests.get(url, params=params, headers=headers)
data = response.json()
print(data)
- User Input:
  - The user provides a stock ticker.
- Price Agent:
  - The application checks the current price of the ticker. If the ticker does not exist, the application returns an error and ends the process.
- Search Agent:
  - If the ticker exists, the application searches for articles and sources related to the stock ticker.
- Article Filtering:
  - The articles are filtered based on the relevance score from the Tavily search tool.
- Summary Report:
  - The most relevant article is then passed to the LLM to write a summary report and sentiment analysis.
  - The summary report is returned to the user.
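The workflow above can be sketched end to end with stubbed agents. The function names and stub data are illustrative only; the real agents call Twelve Data, Tavily, and the llama3 service as described earlier:

```python
def price_agent(ticker):
    # Stub: the real agent queries the Twelve Data API.
    prices = {"AAPL": 185.0}
    return prices.get(ticker)

def search_agent(ticker):
    # Stub: the real agent queries Tavily and returns scored results.
    return [{"title": f"{ticker} news", "score": 0.9}]

def llm_agent(article):
    # Stub: the real agent sends the article to the self-hosted llama3 model.
    return f"Summary of '{article['title']}' with sentiment: positive"

def analyze(ticker):
    price = price_agent(ticker)
    if price is None:  # unknown ticker ends the workflow early with an error
        return {"error": f"Ticker {ticker} not found"}
    articles = search_agent(ticker)
    best = max(articles, key=lambda a: a["score"])  # most relevant article
    return {"ticker": ticker, "price": price, "report": llm_agent(best)}
```

The early return on an unknown ticker mirrors the Price Agent step: no search or LLM work is done for an invalid symbol.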
This project is licensed under the MIT License.