-
I need to use Agenta completely on a local server with the local LLM platform Ollama. How can I do that?
-
Hey @kmx2! I'm here to help you out with any issues or questions you have. Let's see how we can integrate Agenta with the local LLM platform Ollama. To integrate Agenta with the Ollama LLM platform and use it completely on a local server, follow these steps:
By following these steps, you can integrate Agenta with the Ollama LLM platform and use it entirely on a local server [1][2][3].
-
Hey @kmx2, this should be simple to do. You need to create a custom application that calls your local Ollama endpoint. You can see an example of how to create such a custom application here: https://docs.agenta.ai/guides/tutorials/deploy-mistral-model. In that example we call Mistral on Hugging Face, but you can do exactly the same by calling Ollama locally.
-
Hey @mmabrouk,