Chinese LLaMA & Alpaca large language models with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
The official GitHub page for the survey paper "A Survey of Large Language Models".
An Open-Source Framework for Prompt-Learning.
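As a rough sketch of what a prompt-learning framework automates, here is manual prompt-based classification with plain HuggingFace Transformers: a template wraps the input around a [MASK] slot, and a verbalizer maps label words to classes. The model name, template, and label words are illustrative assumptions, not taken from the framework above.

```python
# Minimal manual prompt-learning sketch for sentiment classification,
# assuming a BERT-style masked language model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

text = "The movie was a complete waste of time."
# Template wraps the input; the verbalizer maps label words to classes.
prompt = f"{text} It was {tokenizer.mask_token}."
label_words = {"negative": "bad", "positive": "good"}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]

# Score each class by the logit of its label word at the [MASK] position.
scores = {label: logits[tokenizer.convert_tokens_to_ids(word)].item()
          for label, word in label_words.items()}
print(max(scores, key=scores.get))
```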
Must-read papers on prompt-based tuning for pre-trained language models.
Top2Vec learns jointly embedded topic, document and word vectors.
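A minimal usage sketch of Top2Vec, following its documented pattern (the corpus choice and parameters here are illustrative):

```python
# Top2Vec jointly embeds documents, words, and topics in one vector space,
# then treats dense clusters of documents as topics.
from sklearn.datasets import fetch_20newsgroups
from top2vec import Top2Vec

# A reasonably large real corpus: Top2Vec needs many documents to learn
# joint document/word embeddings and find topic clusters.
newsgroups = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
model = Top2Vec(documents=newsgroups.data, speed="learn", workers=4)

print(model.get_num_topics())
topic_words, word_scores, topic_nums = model.get_topics(5)
print(topic_words[0])  # highest-scoring words of the first topic
```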
Chinese pre-trained RoBERTa models: RoBERTa for Chinese
An open-source knowledgeable large language model framework.
A curated list of NLP resources focused on Transformer networks, attention mechanism, GPT, BERT, ChatGPT, LLMs, and transfer learning.
Must-read Papers on Knowledge Editing for Large Language Models.
A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".
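The core idea of such continuous-prompt (P-tuning-style) methods is to prepend trainable embedding vectors to the input instead of hand-written prompt tokens, and to train only those vectors while the pre-trained model stays frozen. A minimal PyTorch sketch under assumed choices (model, prompt length, learning rate):

```python
# Sketch of continuous prompt tuning: learnable "virtual token" embeddings
# are prepended to the frozen model's input embeddings and trained alone.
import torch
import torch.nn as nn
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
for p in model.parameters():  # freeze the pre-trained weights
    p.requires_grad = False

n_prompt = 20
prompt_embeds = nn.Parameter(torch.randn(n_prompt, model.config.hidden_size) * 0.02)

def forward_with_prompt(texts, labels):
    enc = tokenizer(texts, return_tensors="pt", padding=True)
    tok_embeds = model.get_input_embeddings()(enc.input_ids)
    batch = tok_embeds.size(0)
    # Prepend the virtual prompt tokens to every sequence.
    inputs_embeds = torch.cat(
        [prompt_embeds.unsqueeze(0).expand(batch, -1, -1), tok_embeds], dim=1)
    attn = torch.cat(
        [torch.ones(batch, n_prompt, dtype=enc.attention_mask.dtype),
         enc.attention_mask], dim=1)
    return model(inputs_embeds=inputs_embeds, attention_mask=attn, labels=labels)

optimizer = torch.optim.Adam([prompt_embeds], lr=1e-3)  # only the prompt is trained
out = forward_with_prompt(["great film", "terrible film"], torch.tensor([1, 0]))
out.loss.backward()
optimizer.step()
```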
Awesome papers on Language-Model-as-a-Service (LMaaS)
Keyphrase or Keyword Extraction: a Chinese keyphrase extraction method based on pre-trained language models (a Chinese-language version of the code for the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model")
A PyTorch-based model pruning toolkit for pre-trained language models
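For the idea behind such a toolkit (not its actual API), magnitude pruning of a pre-trained model's linear layers can be sketched with PyTorch's built-in pruning utilities; the model name and the 30% sparsity level are illustrative:

```python
# L1 magnitude pruning of a transformer's linear layers with
# torch.nn.utils.prune (a sketch, not the toolkit's own interface).
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 30% of weights with the smallest absolute value.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Report the resulting sparsity of the linear layers.
total = zeroed = 0
for module in model.modules():
    if isinstance(module, nn.Linear):
        total += module.weight.numel()
        zeroed += (module.weight == 0).sum().item()
print(f"linear-layer sparsity: {zeroed / total:.1%}")
```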
HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Happy hugging for NLP now! 😊 HugNLP will be released to @HugAILab.
[ICLR 2024] Domain-Agnostic Molecular Generation with Chemical Feedback
[ICLR 2022] Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
The code of our paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model"
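SIFRank's central ranking step scores each candidate phrase by the similarity of its embedding to the whole document's embedding. A simplified sketch of that idea, substituting mean-pooled BERT embeddings for the paper's SIF-weighted ELMo setup (the document, candidates, and model here are illustrative):

```python
# Simplified SIFRank-style ranking: embed the document and each candidate
# phrase with a pre-trained LM, then rank candidates by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state  # (1, seq_len, hidden)
    return hidden.mean(dim=1).squeeze(0)         # mean pooling over tokens

document = ("Unsupervised keyphrase extraction ranks candidate phrases "
            "by their similarity to an embedding of the whole document.")
candidates = ["keyphrase extraction", "candidate phrases", "pre-trained language model"]

doc_vec = embed(document)
scores = {c: torch.cosine_similarity(embed(c), doc_vec, dim=0).item() for c in candidates}
for phrase, score in sorted(scores.items(), key=lambda x: -x[1]):
    print(f"{score:.3f}  {phrase}")
```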
Must-read papers on improving efficiency for pre-trained language models.
[ICLR 2023] Multimodal Analogical Reasoning over Knowledge Graphs
We tackle a company-name recognition task with small-scale, low-quality training data, then apply techniques to improve training speed and prediction performance with minimal manual effort. The methods involve lite pre-trained models such as Albert-small or Electra-small on a financial corpus, knowledge distillation an…
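As a worked example of the knowledge-distillation step mentioned above (the temperature, mixing weight, and toy logits are illustrative), the student is trained to match the teacher's temperature-softened output distribution alongside the usual hard-label loss:

```python
# Knowledge-distillation loss sketch: mix hard-label cross-entropy with
# KL divergence to the teacher's temperature-softened logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients keep their magnitude (Hinton et al.)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage with random logits for a 3-class task.
student_logits = torch.randn(4, 3, requires_grad=True)
teacher_logits = torch.randn(4, 3)
labels = torch.tensor([0, 2, 1, 0])
distillation_loss(student_logits, teacher_logits, labels).backward()
```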