- The first message is a system message (prompt)
- By default, the 4 most recently created messages are included as context
- The first registered user becomes the administrator
- Default rate limit: 100 ChatGPT calls / 10 minutes (OPENAI_RATELIMIT=100)
- Generate a shareable static page from a conversation (like ShareGPT); the shared conversation can also be continued
- Conversation snapshot directory (collections) with full-text search (English), for organizing and searching conversation history
- Supports OpenAI and Claude models
- Supports Ollama-hosted models; see #396 for configuration
- Supports uploading text files
- Supports multimedia files (requires model support)
- Prompt management; prompt shortcut key '/'
- git clone
- golang dev
cd chat; cd api
go install github.com/cosmtrek/air@latest
go mod tidy
# export env vars; adjust for your environment
export PG_HOST=192.168.0.135
export PG_DB=hwu
export PG_USER=hwu
export PG_PASS=pass
export PG_PORT=5432
# export DATABASE_URL=postgres://user:[email protected]:5432/db?sslmode=disable
# export OPENAI_API_KEY=sk-xxx, not required if you use `debug` model
# export OPENAI_RATELIMIT=100
#
make serve
- node env
cd ..; cd web
npm install
npm run dev
- e2e test
cd ..; cd e2e
# export env vars; adjust for your environment
export PG_HOST=192.168.0.135
export PG_DB=hwu
export PG_USER=hwu
export PG_PASS=pass
export PG_PORT=5432
npm install
npx playwright test # --ui
These instructions may not be accurate; ask in an issue or discussion if anything is unclear.
Refer to docker-compose.yaml
Then configure the environment variables:
PORT=8080
OPENAI_RATELIMIT=0
Fill in the other two API keys if you have them.
After deployment, register a user; the first user is the administrator. Then go to https://$hostname/#/admin/user to set rate limits. For public deployments, only raise the rate limit for trusted emails; that way, even if a stranger registers, the service remains unusable for them.
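As a sketch of what such a deployment can look like (the image name here is hypothetical; the repo's own docker-compose.yaml is authoritative):

```yaml
services:
  chat:
    image: example/chat:latest      # hypothetical image name; use the one from the repo
    ports:
      - "8080:8080"
    environment:
      PORT: 8080
      OPENAI_RATELIMIT: 0
      # OPENAI_API_KEY: sk-xxx      # fill in the API keys you have
```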
- Install Ollama and pull a model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull mistral
On Linux, the default systemd unit restricts Ollama to local access; change the HOST setting to allow remote access. If Ollama and chat run on the same host, this is not an issue.
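On systemd distros this is typically done with a unit override setting `OLLAMA_HOST` (as described in the Ollama docs): run `sudo systemctl edit ollama.service`, add the lines below, then `sudo systemctl daemon-reload && sudo systemctl restart ollama`:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```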
- Configure the Ollama model in the chat admin page:
id: ollama-{modelName} # modelName must match the pulled Ollama model, e.g. mistral, llama3, llama2
name: does not matter; name it however you like
baseUrl: http://hostname:11434/api/chat
The other fields are irrelevant; only id and baseUrl need to be correct.
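The `ollama-{modelName}` id convention suggests the backend strips the `ollama-` prefix to obtain the model name it sends to the Ollama API. A small illustrative sketch of that mapping (an assumption about the implementation, not a quote from it):

```shell
# Hypothetical mapping from the chat-admin model id to the Ollama model name.
id="ollama-mistral"
model="${id#ollama-}"   # strip the "ollama-" prefix
echo "$model"           # mistral
```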
enjoy!
- web: copied from ChatGPT-Web (https://github.com/Chanzhaoyu/chatgpt-web).
- api: based on the Node version from Kerwin1202's Chanzhaoyu/chatgpt-web#589, written with the help of ChatGPT.