Is your feature request related to a problem? Please describe:
Currently, bolt.diy does not directly support the GitHub Models API available at https://github.com/marketplace/models/ with the endpoint https://models.inference.ai.azure.com. This prevents integrating models hosted on that platform, such as o1-preview and other GitHub-hosted models; supporting it would extend the range of models the application has access to.
Describe the solution you'd like:
I would like bolt.diy to include a new provider configuration and associated logic to interact with the Azure AI Inference API at https://models.inference.ai.azure.com. This would involve:
Adding a new provider called "GitHub Models" or similar.
Supporting authentication with API keys.
Adding the available models to the MODEL_LIST.
Adding the https://models.inference.ai.azure.com endpoint to the list of valid providers.
Supporting passing a model name to the API via client.chat.completions.create.
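To illustrate the last point, here is a minimal sketch of what the provider's request logic could look like. It assumes the GitHub Models endpoint follows the standard OpenAI-compatible /chat/completions contract (which the client.chat.completions.create usage implies); the function names and the use of a GitHub token as the API key are illustrative, not taken from bolt.diy's codebase:

```python
import json
import urllib.request

# Assumption: the GitHub Models endpoint speaks the OpenAI-compatible
# chat-completions protocol at this base URL.
BASE_URL = "https://models.inference.ai.azure.com"


def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build a POST request for the OpenAI-style /chat/completions route.

    The model name is passed straight through in the JSON body, mirroring
    what client.chat.completions.create(model=...) would send.
    """
    payload = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Assumption: a GitHub token serves as the bearer API key.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )


def chat_completion(api_key: str, model: str, messages: list) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_chat_request(api_key, model, messages)) as resp:
        return json.load(resp)
```

A provider implementation in bolt.diy would more likely reuse an existing OpenAI-compatible client with a custom base URL, but the request shape above is what that client produces under the hood.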
Describe alternatives you've considered:
As a workaround, we could attempt to use the "OpenAILike" provider. However, this is not ideal: it might not fully support features specific to the Azure AI Inference API, and it would not provide an ideal experience for users.
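For anyone wanting to try the workaround today, the OpenAILike provider is configured via environment variables along these lines (the exact variable names should be checked against bolt.diy's .env.example, and the GitHub-token-as-API-key assumption is mine):

```
# Point the OpenAILike provider at the GitHub Models endpoint
OPENAI_LIKE_API_BASE_URL=https://models.inference.ai.azure.com
# Assumption: a GitHub personal access token acts as the API key
OPENAI_LIKE_API_KEY=<your GitHub token>
```

This routes requests to the endpoint but still relies on the generic OpenAI-compatible code path, which is why a dedicated provider would be preferable.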
Additional context:
The GitHub Models API at https://models.inference.ai.azure.com offers access to models such as o1-preview and Llama-3.3-70B-Instruct. Direct support would broaden the array of models available through bolt.diy and improve the experience for anyone looking to use models hosted on Azure.