Do I have to deploy a model before doing inference? #670
-
Hi all, I have trained a model successfully in my Azure Machine Learning workspace, and now I am working out how to run inference with that model. I followed this link: https://github.com/microsoft/InnerEye-Inference, but I don't know where to get the two parameters CUSTOMCONNSTR_AZUREML_SERVICE_PRINCIPAL_SECRET and CUSTOMCONNSTR_API_AUTH_SECRET.
I guess I need to deploy my model in Azure first. Is there any guidance on how to do that? There seem to be a lot of options to proceed.
-
There is no need to deploy the model manually. If you have run training successfully in AzureML, the job overview page ("Details") will have an entry for "Registered model" - click on that link.
About the two environment variables:
CUSTOMCONNSTR_AZUREML_SERVICE_PRINCIPAL_SECRET
is the password for an Azure Service Principal (think of it as a machine account) that can talk to your AzureML workspace. You need to create this service principal first and give it access to your AzureML workspace.
CUSTOMCONNSTR_API_AUTH_SECRET
is a random string that you can choose; you can use a GUID, for example. This is the secret that you put into the API call to authenticate yourself to the inference service.
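As a minimal sketch of the second variable: generating a GUID to use as the auth secret and attaching it to a request could look like the snippet below. The header name `API_AUTH_SECRET` is an assumption for illustration, not taken from this thread; check the InnerEye-Inference README for the exact field the service expects.

```python
import os
import uuid

# Choose a random GUID as the API auth secret.
# Assumption: any hard-to-guess string works; a GUID is just a convenient choice.
api_auth_secret = str(uuid.uuid4())

# The inference service reads this value from its environment / app settings.
os.environ["CUSTOMCONNSTR_API_AUTH_SECRET"] = api_auth_secret

# Callers send the same secret with each API request so the service can
# authenticate them. The header name below is a hypothetical example.
headers = {"API_AUTH_SECRET": os.environ["CUSTOMCONNSTR_API_AUTH_SECRET"]}
```

The same GUID must be configured on the service side and supplied by every caller; anyone without it gets rejected.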