Week 6: UI Improvements and RAG performance evaluation
======================================================

.. post:: July 27 2024
   :author: Robin Roy
   :tags: google
   :category: gsoc

Hi, I'm `Robin <https://github.com/robinroy03>`_ and this is my blog about week 6.

This week, I worked on some UI improvements and on evaluating the RAG performance.

Things I did in week 6
----------------------

1) **Line number references**

Earlier, the bot referenced the Python file directly, which made it difficult to find the particular function/class: we had to go and search manually. I modified the code to include a link with line numbers, so the references section now gives a link that wraps around the function/class. To do this, I had to re-index the whole library using the new parser code. The present model points to the latest stable release of FURY.

I also tried to compress it all into one Discord message, reducing one extra ping :)
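
Here's a minimal sketch of the idea (names like ``blob_url`` are illustrative; the real parser code differs):

.. code-block:: python

    # Find where a function/class sits in a file and build a GitHub link
    # that wraps around it. `blob_url` would be something like
    # https://github.com/fury-gl/fury/blob/<stable-tag> (illustrative).
    import ast

    def line_range(source: str, name: str) -> tuple[int, int]:
        """Return the (start, end) line numbers of a function/class definition."""
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef,
                                 ast.ClassDef)) and node.name == name:
                return node.lineno, node.end_lineno
        raise ValueError(f"{name} not found")

    def reference_link(blob_url: str, path: str, source: str, name: str) -> str:
        """Build a link like <blob_url>/<path>#L10-L42 for the references section."""
        start, end = line_range(source, name)
        return f"{blob_url}/{path}#L{start}-L{end}"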

2) **RAG Performance Evaluation**

I added a new benchmark to measure RAG performance. It essentially checks whether certain key information was retrieved from the database. There are situations where the model fetches data irrelevant to the question, and this benchmark could help in fixing that.

The RAG benchmark dataset consists of a prompt to the LLM and the expected references to be fetched from the database. I'll give a score based on the percentage of correct fetches.
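
A rough sketch of the scoring logic (the function name is mine, not the final benchmark code):

.. code-block:: python

    # The RAG score: the percentage of expected references that actually
    # came back from the database for a benchmark prompt.
    def retrieval_score(expected: list[str], retrieved: list[str]) -> float:
        if not expected:
            return 100.0
        hits = sum(1 for ref in expected if ref in set(retrieved))
        return 100.0 * hits / len(expected)

    # Example: one of the two expected references was fetched -> 50.0
    print(retrieval_score(
        expected=["fury.actor.sphere", "fury.window.ShowManager"],
        retrieved=["fury.actor.sphere", "fury.actor.axes"],
    ))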

3) **Fine-tuning feasibility study**

It was time to start thinking about fine-tuning. Gemini has a generous free tier, and it is possible to fine-tune Gemini-1.0-Pro. I looked into it and started collecting data. For fine-tuning Gemini, I had to format the data as input/output pairs. Most of the data will be collected from Discord and GitHub.

I also checked into fine-tuning models like Phi-3 and Llama 7B. The fine-tuning can be done on Google Colab/Kaggle: we could use a small quantized model and fine-tune it without much performance loss. A rough sketch of that recipe is below.
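
This is only a sketch of the Colab-friendly approach using ``transformers`` and ``peft``; the model id and hyperparameters are placeholders, not settled choices:

.. code-block:: python

    # Load a 4-bit quantized model and attach LoRA adapters, so only a small
    # set of extra weights is trained. Hyperparameters are illustrative.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model

    model_id = "microsoft/Phi-3-mini-4k-instruct"
    bnb = BitsAndBytesConfig(load_in_4bit=True,
                             bnb_4bit_compute_dtype=torch.bfloat16)

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb)

    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["qkv_proj", "o_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # a tiny fraction of the full model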

What is coming up next week?
----------------------------

I'll be taking a break next week due to my semester final examinations. I'll study model fine-tuning and keep brainstorming interesting trajectories for FURY.

Did you get stuck anywhere?
---------------------------

No, I did not get stuck anywhere.

Thank you for reading!

Week 7: Surviving final examinations
====================================

.. post:: July 27 2024
   :author: Robin Roy
   :tags: google
   :category: gsoc

Hi, I'm `Robin <https://github.com/robinroy03>`_ and this is my blog about week 7.

I mostly took this week off due to my semester final examinations :) They were fun. The major topics were x86, ARM, and 8051. I had not written much assembly apart from school work, so I took the week to experiment with it. The course leaned more toward hardware architecture than programming. I now have enough knowledge to read a given piece of ASM code with a wiki to look up mnemonics (and Gemini/Claude to help). I'm not fast at writing ASM (yet); one day I'll find a project to dive into, or maybe some reverse engineering and CTFs. GPU instruction sets are also interesting.

**Discord data collection**

I collected some Q&A questions from the FURY Discord server. I did it manually because the volume wasn't high and I wanted the data to be correct. I also had to cross-check with GitHub to verify that the answers/code mentioned still stand. The format I used was [User question, Answer]; if a question or answer is spread across multiple conversations, I adjust it to fit this format, as in the sketch below.
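
The rows below are made-up samples to show the shape, not actual collected data:

.. code-block:: python

    # One (question, answer) pair per entry, with multi-message threads
    # merged into a single pair by hand. These rows are illustrative only.
    qa_pairs = [
        ["How do I change the background color?",
         "Call scene.background((r, g, b)) before rendering."],
        ["Why does my window close immediately?",
         "Use window.ShowManager and call start() to keep the event loop running."],
    ]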

What is coming up next week?
----------------------------

- Gemini fine-tuning
- Collect more Discord data

Did you get stuck anywhere?
---------------------------

Not really, apart from some silly ASM bugs.

Thank you for reading!

Week 8: Gemini Finetuning
=========================

.. post:: July 27 2024
   :author: Robin Roy
   :tags: google
   :category: gsoc

Hi, I'm `Robin <https://github.com/robinroy03>`_ and this is my blog about week 8.

This week, I worked on finalizing the Discord chat QnA data collection and using it to fine-tune the Gemini-1.0-Pro model.

Things I did in Week 8
----------------------

1) **Discord Data Collection**

I finished collecting data from all the channels in the Discord server, cross-verifying that the answers still work. I also added some questions from the FURY bot testing server. These QnA pairs were later converted to a CSV of input/output pairs and fed to Gemini for fine-tuning, roughly as sketched below.
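
A minimal sketch of that conversion (the ``text_input``/``output`` column names follow the Gemini tuning convention; the file name and sample row are mine):

.. code-block:: python

    # Write the collected pairs as a two-column CSV for Gemini tuning.
    import csv

    qa_pairs = [
        ["How do I change the background color?",
         "Call scene.background((r, g, b)) before rendering."],
    ]

    with open("fury_qna.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text_input", "output"])
        writer.writerows(qa_pairs)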

2) **Gemini Finetuning**

Finetuning is essentially training the model on input/output pairs, while RAG gives the model context and asks it to form an answer using that. Finetuning updates the model weights according to the input/output pairs. As per some reports, Gemini uses `Parameter-Efficient Fine-Tuning <https://huggingface.co/blog/peft>`_ in AI Studio. That makes sense, because the tuning only takes minutes, and PEFT is a good strategy to prevent issues like `catastrophic forgetting <https://arxiv.org/abs/1312.6211>`_.
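
For reference, here's a hedged sketch of kicking off a tuning job with the ``google-generativeai`` SDK, which is what AI Studio does behind its UI; the id and hyperparameters are placeholders, not the values I settled on:

.. code-block:: python

    # Start a Gemini tuning job from the collected input/output pairs.
    # The id, epochs, batch size, and learning rate are illustrative.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    operation = genai.create_tuned_model(
        source_model="models/gemini-1.0-pro-001",
        training_data=[
            {"text_input": "How do I change the background color?",
             "output": "Call scene.background((r, g, b)) before rendering."},
        ],
        id="fury-qna-tuned",
        epoch_count=5,
        batch_size=4,
        learning_rate=0.001,
    )
    tuned_model = operation.result()  # blocks until tuning finishes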

Finetuning and RAG are complementary to each other. The difference between them can be summarized as follows:

RAG is like giving an LLM with no prior knowledge of FURY access to some important functions/classes based on the user prompt. It uses this given context, plus its knowledge of graphics libraries from pretraining, to form an answer.

Finetuning is used to make the model follow a certain style or behaviour; it is a form of mimicking the input/output. This should help increase the model's performance. An interesting consequence is that I had to train the model 1) with RAG and 2) without RAG.

For finetuning, the input must be in the same format in which the LLM will receive the question from the user. When you ask the FURY bot a question, the bot does not get your question directly; we process it to add additional information. Therefore, I had to run all the collected data through the RAG pipeline as well, roughly as below.
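
A simplified sketch of that formatting step; the template wording and the hard-coded reference are illustrative, and in reality the references come from the database query:

.. code-block:: python

    # Rewrite a collected question into the prompt shape the bot actually
    # sends, so tuning inputs match inference time.
    PROMPT_TEMPLATE = (
        "You are the FURY assistant. Answer using the references below.\n"
        "References:\n{references}\n\n"
        "Question: {question}"
    )

    def to_training_input(question: str, references: list[str]) -> str:
        return PROMPT_TEMPLATE.format(references="\n".join(references),
                                      question=question)

    print(to_training_input(
        "How do I change the background color?",
        ["fury.window.Scene.background"],
    ))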

This is an interesting direction, and I have a lot of cool things to try out here. I'll spend the next few weeks trying different ideas.

What is coming up next week?
----------------------------

- Fine-tuning strategies.
- Hosting the model behind an API.

Did you get stuck anywhere?
---------------------------

No, I did not get stuck anywhere.

LINKS:

- `Parameter-Efficient Fine-Tuning <https://huggingface.co/blog/peft>`_
- `catastrophic forgetting <https://arxiv.org/abs/1312.6211>`_

Thank you for reading!