We are researchers from NVIDIA and MIT working on GPU-accelerated generative AI.
🚀 Introduction
The Efficient Large Model Team is a collaboration between researchers from NVIDIA and MIT dedicated to developing and optimizing GPU-accelerated, efficient AI computing. We focus on pushing the boundaries of generative AI by designing models that are not only powerful but also efficient in their use of computational resources. We are committed to advancing the field of AI by making state-of-the-art models deployable, scalable, and accessible.
🌈 Contribution Guidelines
We welcome contributions from the community to help us further improve and expand our research efforts. Whether you're an experienced researcher, a student eager to learn, or a developer passionate about efficiency in AI, there are several ways to get involved:
- Contribute Code: Help us develop and optimize efficient large models by contributing code to our GitHub repositories.
- Report Issues: If you encounter any bugs or have suggestions for improvement, please open an issue on the respective repository.
- Provide Feedback: Share your insights and ideas through discussions on our GitHub repositories or join our community forums.
- Spread the Word: Let others know about our work and encourage them to join our community.
- Internship: We have internship openings at both MIT and NVIDIA for excellent contributors with a proven track record.
🍿 Fun Facts
Our team comprises researchers from diverse backgrounds, bringing together expertise from both industry and academia. We're passionate about optimizing AI models not just for performance but also for efficiency and sustainability. In our spare time, we love experimenting with new algorithms and techniques to enhance the efficiency of our models, and skiing at the speed of a GPU. Join us on this exciting journey of building the next generation of efficient large models! 🌟
👩‍💻 Useful Resources
MIT HAN Lab: https://hanlab.mit.edu
NVIDIA Research: https://www.nvidia.com/en-us/research
NVIDIA TensorRT-LLM: https://github.com/NVIDIA/TensorRT-LLM