diff --git a/docs/reference/providers/backend.md b/docs/reference/providers/backend.md
index 5db6512..a800cc5 100644
--- a/docs/reference/providers/backend.md
+++ b/docs/reference/providers/backend.md
@@ -14,6 +14,7 @@ Currently, we have a total of 11 backends available:
 - [Hugging Face](https://huggingface.co)
 - [IBM watsonx.ai](https://www.ibm.com/products/watsonx-ai)
 - [LocalAI](https://github.com/go-skynet/LocalAI)
+- [Ollama](https://github.com/ollama/ollama)
 - FakeAI
 
 ## OpenAI
@@ -208,6 +209,27 @@ Ollama is a local model, which has an OpenAI compatible API. It supports the mod
 k8sgpt analyze --explain --backend localai
 ```
 
+## Ollama
+
+Ollama lets you get up and running locally with large language models. It runs Llama 2, Code Llama, and other models.
+
+- To start the Ollama server, follow the instructions in the [Ollama README](https://github.com/ollama/ollama?tab=readme-ov-file#start-ollama).
+  ```bash
+  ollama serve
+  ```
+  It can also run as a Docker image; follow the instructions in the [Ollama blog](https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image).
+  ```bash
+  docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
+  ```
+
+- Authenticate K8sGPT with Ollama:
+  ```bash
+  k8sgpt auth add --backend ollama --model llama2 --baseurl http://localhost:11434/v1
+  ```
+- Analyze with the Ollama backend:
+  ```bash
+  k8sgpt analyze --explain --backend ollama
+  ```
 ## FakeAI
 
 FakeAI or the NoOpAiProvider might be useful in situations where you need to test a new feature or simulate the behaviour of an AI based-system without actually invoking it. It can help you with local development, testing and troubleshooting.
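
Before K8sGPT can use the backend, the model named in `--model` has to be available to the Ollama server. A minimal sanity check, assuming the default port 11434 and the `llama2` tag used in the commands above:

```bash
# Pull the model referenced by --model so the server can load it
ollama pull llama2

# Optional: exercise Ollama's OpenAI-compatible endpoint directly before wiring up K8sGPT
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama2", "messages": [{"role": "user", "content": "Say hello"}]}'
```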
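
To double-check that the backend is registered and to keep a first analysis small, something along these lines should work; `k8sgpt auth list` and the `--filter`/`--namespace` flags are assumed to be available in your k8sgpt version, and `Pod`/`default` are placeholder values:

```bash
# List configured AI backends; ollama should appear in the output
k8sgpt auth list

# Run a scoped analysis against the Ollama backend
k8sgpt analyze --explain --backend ollama --filter Pod --namespace default
```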