Merge pull request #107 from jinsongo/main

feat: add watsonx ai provider document

AlexsJones authored Jul 15, 2024
2 parents 9ffb9d8 + cebaae2 commit 93e635e

docs/reference/providers/backend.md (22 additions, 1 deletion)

A Backend (also called Provider) is a service that provides access to the AI language model. There are many different backends available for K8sGPT. Each backend has its own strengths and weaknesses, so it is important to choose the one that is right for your needs.

Currently, we have a total of 11 backends available:

- [OpenAI](https://openai.com/)
- [Cohere](https://cohere.com/)
- [Google Gemini](https://ai.google.dev/docs/gemini_api_overview)
- [Google Vertex AI](https://cloud.google.com/vertex-ai)
- [Hugging Face](https://huggingface.co)
- [IBM watsonx.ai](https://www.ibm.com/products/watsonx-ai)
- [LocalAI](https://github.com/go-skynet/LocalAI)
- FakeAI

## Hugging Face

Hugging Face is a versatile backend for K8sGPT, offering access to a wide range of models.

```bash
k8sgpt analyze --explain --backend huggingface
```

## IBM watsonx.ai

IBM® watsonx.ai™ AI studio is part of the IBM watsonx™ AI and data platform, bringing together new generative AI (gen AI) capabilities powered by foundation models and traditional machine learning (ML) into a powerful studio spanning the AI lifecycle. Tune and guide models with your enterprise data to meet your needs with easy-to-use tools for building and refining performant prompts. With watsonx.ai, you can build AI applications in a fraction of the time and with a fraction of the data.

- To use [IBM watsonx.ai](https://dataplatform.cloud.ibm.com/login?context=wx), you'll need a watsonx API key and project ID for authentication.
- You will need to set the following local environment variables:
```bash
export WATSONX_API_KEY=<your-api-key>
export WATSONX_PROJECT_ID=<your-project-id>
```
- To configure the backend in K8sGPT, use the `auth` command:
```bash
k8sgpt auth add --backend watsonxai --model ibm/granite-13b-chat-v2
```
- Run the following command to analyze issues within your cluster using IBM watsonx.ai:
```bash
k8sgpt analyze --explain --backend watsonxai
```
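Because a missing credential only surfaces as an authentication error at analysis time, it can help to verify the two environment variables first. A minimal bash sketch; the `check_watsonx_env` helper is hypothetical and not part of the k8sgpt CLI:

```bash
#!/usr/bin/env bash
# Hypothetical helper (not part of k8sgpt): fail fast when the watsonx
# credentials are not exported, so authentication errors surface early.
check_watsonx_env() {
  local missing=0
  for v in WATSONX_API_KEY WATSONX_PROJECT_ID; do
    if [ -z "${!v}" ]; then   # ${!v} is bash indirect expansion: the value of the variable named in $v
      echo "missing: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Only run the analysis when both variables are present and the CLI is installed.
if check_watsonx_env && command -v k8sgpt >/dev/null 2>&1; then
  k8sgpt analyze --explain --backend watsonxai
fi
```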

## LocalAI

LocalAI is a locally hosted backend that exposes an OpenAI-compatible API. It uses llama.cpp and ggml to run inference on consumer-grade hardware. Models supported by LocalAI include Vicuna, Alpaca, LLaMA, Cerebras, GPT4ALL, GPT4ALL-J, and koala.