Open-source AI assistant based on RAG (Retrieval-Augmented Generation) that helps users resolve their SpeedRunEthereum questions and lets them "chat" with the Scaffold-ETH docs.
This assistant MVP uses the LangChain framework with these providers (a wiring sketch follows the list):
- GroqCloud: to interact with the LLM via API. Different models can easily be plugged in, such as:
  - llama3-70b-8192 (current model)
  - llama3-8b-8192
  - mixtral-8x7b-32768
  - gemma-7b-it
- Google: to create the embeddings
- FAISS: as the vector store, used to search for embeddings similar to the user's prompt
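
A minimal sketch of how these providers could be wired together with LangChain (package names, the embedding model, and the sample document are assumptions; adjust to your own setup and API keys):

```python
# Sketch: wire the GroqCloud LLM + Google embeddings + FAISS retriever with LangChain.
# Assumes the langchain-groq, langchain-google-genai and langchain-community packages
# are installed and GROQ_API_KEY / GOOGLE_API_KEY are set in the environment.
from langchain_groq import ChatGroq
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain

llm = ChatGroq(model_name="llama3-70b-8192")  # swap in any of the models listed above
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")

# Tiny stand-in corpus; in the real app these are the challenge documents
docs = [Document(page_content="Scaffold-ETH is a toolkit for building dapps on Ethereum.")]
vectorstore = FAISS.from_documents(docs, embeddings)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question using only this context:\n{context}\n\nQuestion: {input}"
)
doc_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, doc_chain)

answer = rag_chain.invoke({"input": "What is Scaffold-ETH?"})["answer"]
print(answer)
```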
Everything is wrapped in Streamlit to transform the Python script into a web app.
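
As an illustration, a stripped-down Streamlit wrapper might look like the sketch below; `build_rag_chain` is a hypothetical helper standing in for the loading/chain code above, not the actual script:

```python
# Sketch: minimal Streamlit chat UI around the RAG chain.
# `build_rag_chain` is a hypothetical helper that loads the selected challenge's
# documents and returns a retrieval chain like the one sketched above.
import streamlit as st

st.title("SpeedRunEthereum Assistant")

challenge = st.selectbox(
    "Select a Challenge",
    ["Challenge 0", "Challenge 1", "Challenge 2"],
    index=0,  # Challenge 0 docs are loaded by default
)

rag_chain = build_rag_chain(challenge)  # hypothetical: rebuilds the index for this challenge

question = st.chat_input("Ask about Scaffold-ETH or this challenge")
if question:
    with st.chat_message("user"):
        st.write(question)
    with st.chat_message("assistant"):
        st.write(rag_chain.invoke({"input": question})["answer"])
```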
Each Challenge has its own set of documents in a [Challenge #] folder, loaded when the user selects that Challenge in the dropdown menu (see the loading sketch after the document list below). By default, the [Challenge 0] docs are loaded.
Document list for each challenge:
- Scaffold-ETH docs
- Challenge readme
- Telegram Q/A extracted from the Challenge chat group
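
One possible way to load a challenge folder into the vector store (the folder layout, file glob, and chunking parameters are assumptions, not the exact script):

```python
# Sketch: load one challenge's documents and build its FAISS index.
# Assumes each [Challenge N] folder holds markdown/text files (Scaffold-ETH docs,
# the challenge readme, exported Telegram Q/A) under a docs/ directory.
from langchain_community.document_loaders import DirectoryLoader, TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_google_genai import GoogleGenerativeAIEmbeddings
from langchain_community.vectorstores import FAISS

def load_challenge_index(challenge: str = "Challenge 0") -> FAISS:
    loader = DirectoryLoader(f"docs/{challenge}", glob="**/*.md", loader_cls=TextLoader)
    documents = loader.load()

    # Split long documents into overlapping chunks so retrieval stays focused
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    chunks = splitter.split_documents(documents)

    embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
    return FAISS.from_documents(chunks, embeddings)
```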