Welcome to the QLora project, an instruction-tuned language model for biomedical language processing. The project is inspired by the research paper "Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing".
Large Language Models (LLMs), particularly those similar to ChatGPT, have significantly influenced the field of Natural Language Processing (NLP). While these models excel in general language tasks, their performance in domain-specific downstream tasks such as biomedical and clinical Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI) is still evolving.
The model is fine-tuned on the nlpie/Llama2-MedTuned-Instructions dataset, an instruction-tuning corpus built around biomedical NLP tasks such as those listed above.
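As a minimal sketch, the dataset can be pulled from the Hugging Face Hub with the `datasets` library; the split and field names shown in the comments are assumptions about the dataset schema, not values confirmed by this repository:

```python
from datasets import load_dataset

# Download the instruction-tuning corpus from the Hugging Face Hub.
dataset = load_dataset("nlpie/Llama2-MedTuned-Instructions")

# Inspect the splits and one example. The presence of a "train" split
# is an assumption here; adjust to whatever splits the dataset exposes.
print(dataset)
print(dataset["train"][0])
```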
The project was implemented on the Kaggle platform, which provides a remote computational environment for data analysis and machine learning. This allows the model to be trained and tested remotely while maintaining efficiency and speed.
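For illustration, a QLoRA-style setup with `transformers`, `bitsandbytes`, and `peft` might look like the sketch below: the base model is loaded in 4-bit precision and low-rank adapters are attached for training. The base model name, LoRA hyperparameters, and target modules are assumptions for the example, not values taken from this repository.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization config (the core of QLoRA's memory savings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Base model is an assumption; substitute the checkpoint used in practice.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters; only these small matrices are trained.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Keeping the frozen base weights in 4-bit precision while training only the adapter parameters is what makes this kind of fine-tuning feasible on the single-GPU instances available on Kaggle.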