Update ctx_len and max_tokens #1035

Merged · 2 commits · Dec 16, 2023
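This PR doubles the default context window (ctx_len) and per-response generation cap (max_tokens) from 2048 to 4096 in every bundled model.json. As a minimal sketch of how such a bulk update could be scripted — a hypothetical helper, not part of this PR, assuming the models/<name>/model.json layout shown in the diffs below:

import json
from pathlib import Path

# Hypothetical bulk-update script (not part of this PR): raise the context
# window and generation cap in every bundled model config from 2048 to 4096.
for config_path in Path("models").glob("*/model.json"):
    config = json.loads(config_path.read_text())
    config["settings"]["ctx_len"] = 4096
    config["parameters"]["max_tokens"] = 4096
    config_path.write_text(json.dumps(config, indent=2) + "\n")

Note that max_tokens bounds a single response, while ctx_len bounds the total prompt-plus-response token budget the inference engine allocates.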
models/capybara-34b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Nous Capybara 34B, a variant of the Yi-34B model, is the first Nous model with a 200K context length, trained for three epochs on the innovative Capybara dataset.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "USER:\n{prompt}\nASSISTANT:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "NousResearch, The Bloke",
models/deepseek-coder-1.3b/model.json (2 additions, 2 deletions)
@@ -8,11 +8,11 @@
   "description": "Deepseek Coder trained on 2T tokens (87% code, 13% English/Chinese), excelling in project-level code completion with advanced capabilities across multiple programming languages.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### Instruction:\n{prompt}\n### Response:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Deepseek, The Bloke",
models/deepseek-coder-34b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Deepseek Coder trained on 2T tokens (87% code, 13% English/Chinese), excelling in project-level code completion with advanced capabilities across multiple programming languages.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### Instruction:\n{prompt}\n### Response:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Deepseek, The Bloke",
models/llama2-chat-70b-q4/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "This is a 4-bit quantized version of Meta AI's Llama 2 Chat 70b model.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "[INST] <<SYS>>\n{system_message}<</SYS>>\n{prompt}[/INST]"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "MetaAI, The Bloke",
models/llama2-chat-7b-q4/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "This is a 4-bit quantized iteration of Meta AI's Llama 2 Chat 7b model, specifically designed for a comprehensive understanding through training on extensive internet data.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "[INST] <<SYS>>\n{system_message}<</SYS>>\n{prompt}[/INST]"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "MetaAI, The Bloke",
models/lzlv-70b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "lzlv_70B is a sophisticated AI model designed for roleplaying and creative tasks. This merge aims to combine intelligence with creativity, seemingly outperforming its individual components in complex scenarios and creative outputs.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "USER:\n{prompt}\nASSISTANT:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Lizpreciatior, The Bloke",
models/mistral-ins-7b-q4/model.json (2 additions, 2 deletions)
@@ -7,14 +7,14 @@
   "description": "This is a 4-bit quantized iteration of MistralAI's Mistral Instruct 7b model, specifically designed for a comprehensive understanding through training on extensive internet data.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "system_prompt": "",
     "user_prompt": "<s>[INST]",
     "ai_prompt": "[/INST]",
     "prompt_template": "<s>[INST]{prompt}\n[/INST]"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "MistralAI, The Bloke",
models/mixtral-8x7b-instruct/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "[INST] {prompt} [/INST]"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "MistralAI, TheBloke",
models/noromaid-20b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "The Noromaid 20b model is designed for role-playing and general use, featuring a unique touch with the no_robots dataset that enhances human-like behavior.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### Instruction:{prompt}\n### Response:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "NeverSleep, The Bloke",
models/openhermes-neural-7b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "OpenHermes Neural is a merged model using the TIES method.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Intel, Jan",
models/pandora-10.7b-v1/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Pandora, our research model, employs the Passthrough merging technique to merge 2x7B models into 1.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Jan",
models/phind-34b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Phind-CodeLlama-34B-v2 is an AI model fine-tuned on 1.5B tokens of high-quality programming data. It's a SOTA open-source model in coding. This multi-lingual model excels in various programming languages, including Python, C/C++, TypeScript, Java, and is designed to be steerable and user-friendly.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### System Prompt\n{system_message}\n### User Message\n{prompt}\n### Assistant"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Phind, The Bloke",
models/solar-10.7b-instruct/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "SOLAR-10.7B model built on the Llama2 architecture with Depth Up-Scaling and integrated Mistral 7B weights. Its robustness and adaptability make it ideal for fine-tuning applications, significantly enhancing performance with simple instruction-based techniques.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### User: {prompt}\n### Assistant:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Upstage, Jan",
models/solar-10.7b-slerp/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "This model uses the Slerp merge method from SOLAR Instruct and Pandora-v1",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### User: {prompt}\n### Assistant:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Jan",
models/starling-7b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Starling-RM-7B-alpha is a language model finetuned with Reinforcement Learning from AI Feedback from Openchat 3.5. It stands out for its impressive performance using GPT-4 as a judge, making it one of the top-performing models in its category.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "GPT4 User: {prompt}<|end_of_turn|>GPT4 Assistant:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Berkeley-nest, The Bloke",
models/trinity-v1-7b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Trinity is an experimental model merge of GreenNodeLM & LeoScorpius using the Slerp method. Recommended for daily assistance purposes.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "Jan",
models/wizardcoder-13b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "WizardCoder-Python-13B is a Python coding model comparable to major models like ChatGPT-3.5. This model, based on the Llama2 architecture, demonstrates high proficiency in specific domains like coding and mathematics.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "### Instruction:\n{prompt}\n### Response:"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "WizardLM, The Bloke",
models/yi-34b/model.json (2 additions, 2 deletions)
@@ -7,11 +7,11 @@
   "description": "Yi-34B, a specialized chat model, is known for its diverse and creative responses and excels across various NLP tasks and benchmarks.",
   "format": "gguf",
   "settings": {
-    "ctx_len": 2048,
+    "ctx_len": 4096,
     "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
   },
   "parameters": {
-    "max_tokens": 2048
+    "max_tokens": 4096
   },
   "metadata": {
     "author": "01-ai, The Bloke",