Add Ollama backends
Signed-off-by: Kay Yan <[email protected]>
yankay committed Apr 14, 2024
1 parent fa5a6cc commit 58ea49d
Showing 1 changed file with 22 additions and 0 deletions.
22 changes: 22 additions & 0 deletions docs/reference/providers/backend.md
@@ -11,6 +11,7 @@ Currently, we have a total of 8 backends available:
- [Azure OpenAI](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service)
- [Google Gemini](https://ai.google.dev/docs/gemini_api_overview)
- [LocalAI](https://github.com/go-skynet/LocalAI)
- [Ollama](https://github.com/ollama/ollama)
- FakeAI

## OpenAI
@@ -132,6 +133,27 @@ LocalAI is a local model, which is an OpenAI compatible API. It uses llama.cpp a
k8sgpt analyze --explain --backend localai
```

## Ollama

Ollama lets you get up and running with large language models locally. It can run Llama 2, Code Llama, and other models.

- Start the Ollama server, following the instructions in [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#start-ollama):
```bash
ollama serve
```
  Ollama can also run as a Docker image; follow the instructions in the [Ollama blog](https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image):
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

- Authenticate K8sGPT with Ollama:
```bash
k8sgpt auth add --backend ollama --model llama2 --baseurl http://localhost:11434/v1
```
- Analyze with the Ollama backend (make sure the model has been pulled first, e.g. `ollama pull llama2`):
```bash
k8sgpt analyze --explain --backend ollama
```

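Ollama exposes an OpenAI-compatible API, which is why the `--baseurl` above ends in `/v1`. As an illustrative sketch (not part of K8sGPT itself; `build_chat_request` is a hypothetical helper, and the model name and port are taken from the examples above), this is the shape of request that endpoint accepts:

```python
import json
from urllib import request

# Base URL matches the --baseurl passed to `k8sgpt auth add`
OLLAMA_BASE_URL = "http://localhost:11434/v1"

def build_chat_request(model: str, prompt: str):
    """Build the URL and JSON body for an OpenAI-style chat completion call."""
    url = f"{OLLAMA_BASE_URL}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return url, body

url, body = build_chat_request("llama2", "Why might a pod be in CrashLoopBackOff?")
print(url)  # prints http://localhost:11434/v1/chat/completions
# To actually send it, a running server and a pulled model are required:
# resp = request.urlopen(request.Request(
#     url, data=body, headers={"Content-Type": "application/json"}))
```

Because the API is OpenAI-compatible, any OpenAI-style client pointed at this base URL should work the same way.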
## FakeAI

FakeAI or the NoOpAiProvider might be useful in situations where you need to test a new feature or simulate the behaviour of an AI based-system without actually invoking it. It can help you with local development, testing and troubleshooting.
