bugFix(C3):image and some details
little1d committed Aug 14, 2024
1 parent c72d128 commit 2130107
Showing 5 changed files with 8 additions and 47 deletions.
2 changes: 1 addition & 1 deletion docs/C3/1. 自定义导入模型.md
@@ -113,7 +113,7 @@ llama.cpp is an open-source project for GGUF, providing CLI and server functionality.
### 3.1 从HuggingFace下载Model

The most intuitive way to download is via `git clone` or a direct link, but because each part of an LLM weighs in at gigabytes, to avoid an `OOM Error (Out of memory)` we can instead write a simple download.py in Python.
First, you should get your personal `ACCESS_TOKEN` from hf; open huggingface
First, you should get your personal `ACCESS_TOKEN` from hf; open the huggingface personal settings page

![alt text](../images/C3-3-1.png)
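The download.py mentioned above can be sketched as follows. This is a minimal sketch, not the tutorial's actual script: the repo id and file name are hypothetical placeholders, and `ACCESS_TOKEN` is assumed to be set in the environment. It streams the file to disk in fixed-size chunks, so the multi-GB weights never sit fully in memory:

```python
# download.py -- minimal sketch: stream a file from HuggingFace to disk.
# The repo id and file name below are hypothetical placeholders.
import os
import urllib.request


def build_request(url: str, token: str) -> urllib.request.Request:
    """Attach the personal ACCESS_TOKEN as a Bearer authorization header."""
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {token}"})


def download_file(url: str, token: str, dest: str,
                  chunk_size: int = 8 * 1024 * 1024) -> None:
    """Download in 8 MB chunks so the multi-GB file never sits fully in memory."""
    with urllib.request.urlopen(build_request(url, token)) as resp, \
         open(dest, "wb") as f:
        while chunk := resp.read(chunk_size):
            f.write(chunk)


if __name__ == "__main__":
    repo = "some-org/some-model-GGUF"       # placeholder repo id
    fname = "some-model-q4_0.gguf"          # placeholder file name
    url = f"https://huggingface.co/{repo}/resolve/main/{fname}"
    download_file(url, os.environ["ACCESS_TOKEN"], fname)
```

Streaming with a fixed chunk size keeps peak memory at a few megabytes regardless of file size, which is the point of writing a script instead of cloning the whole repository.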

Binary file modified docs/images/C3-3-1.png
Binary file modified docs/images/C3-3-2.png
Binary file modified docs/images/C3-3-3.png
53 changes: 7 additions & 46 deletions notebook/C3/1.从GGUF直接导入/main.ipynb
@@ -2,71 +2,32 @@
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"transferring model data \n",
"using existing layer sha256:99f013fc74fcfaab19a9cb36de4ebb8c50e9a15048ef88da2387ee7a0c0cffcb \n",
"using autodetected template chatml \n",
"using existing layer sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216 \n",
"creating new layer sha256:fb05d8484292a164d767deb5cd552e96b9fb4968bbcb855b0ae43cf7beb4e516 \n",
"writing manifest \n",
"success \n",
"transferring model data \n",
"using existing layer sha256:99f013fc74fcfaab19a9cb36de4ebb8c50e9a15048ef88da2387ee7a0c0cffcb \n",
"using autodetected template chatml \n",
"using existing layer sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216 \n",
"creating new layer sha256:fb05d8484292a164d767deb5cd552e96b9fb4968bbcb855b0ae43cf7beb4e516 \n",
"writing manifest \n",
"success \n"
]
}
],
"outputs": [],
"source": [
"# 1. Create the model\n",
"!ollama create mymodel -f Modelfile"
]
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"NAME \tID \tSIZE \tMODIFIED \n",
"mymodel:latest \tb82a6e9999d2\t355 MB\t15 seconds ago\t\n",
"llama3:latest \t365c0bd3c000\t4.7 GB\t13 hours ago \t\n",
"llama3.1:latest\t75382d0899df\t4.7 GB\t15 hours ago \t\n"
]
}
],
"outputs": [],
"source": [
"# 2. List models\n",
"!ollama list"
]
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": null,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"^C\n"
]
}
],
"outputs": [],
"source": [
"# 3. Run the model with the command below in a terminal\n",
"# ollama run mymodel"
"ollama run mymodel"
]
}
],
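The `ollama create mymodel -f Modelfile` cell in the notebook above assumes a Modelfile sitting next to the GGUF weights. A minimal sketch (the GGUF file name is an assumption, not from this commit):

```
FROM ./model.gguf
```

A bare `FROM` pointing at the local GGUF file is enough here: as the cell's output shows ("using autodetected template chatml"), ollama detects the chat template itself, so no explicit `TEMPLATE` instruction is needed.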
