The trained model is saved as a zip file, 'vita-model-1.zip', which we can ship with our web application either as an embedded resource or simply as a file included in the deployment.
To use the model behind an HTTP endpoint:
- Include the zip file in your deployment (as an embedded resource, as content, read from blob storage, etc.)
- Initialise the model as a singleton during application start-up, from either a file path or a stream
- Call the model with PredictionModel.Predict(), passing in the data from which to predict
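The start-up wiring described above might look like the following in an ASP.NET Core application. This is a sketch, not the project's actual code: the ChargeData and ChargePrediction schema classes, their fields, and the model path are assumptions based on this series, and the API shown is the early ML.NET PredictionModel API that this post targets.

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.ML; // early (pre-1.0) ML.NET API, as used in this series

// Hypothetical input/output schemas for the model.
public class ChargeData
{
    public string Description;
}

public class ChargePrediction
{
    public string PredictedChargeId;
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Load the trained model once at start-up and register it as a
        // singleton, so every request reuses the same in-memory model.
        var model = PredictionModel
            .ReadAsync<ChargeData, ChargePrediction>("vita-model-1.zip")
            .Result;
        services.AddSingleton(model);

        services.AddMvc();
    }
}
```

Loading the zip once at start-up matters because deserialising the model is relatively expensive, while calling Predict on the loaded model is cheap.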
The sample below logs each prediction to Logz.io.
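A controller along these lines would serve the predictions. This is a hedged reconstruction rather than the original sample: the route, the ChargeData/ChargePrediction schema classes, and the assumption that Logz.io is wired up through a standard ILogger provider (for example a Serilog Logz.io sink configured at start-up) are all illustrative.

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;
using Microsoft.ML; // early (pre-1.0) ML.NET API

[Route("api/[controller]")]
public class PredictController : Controller
{
    private readonly PredictionModel<ChargeData, ChargePrediction> _model;

    // ILogger is assumed to be configured to ship to Logz.io
    // (e.g. via a Serilog sink registered at start-up).
    private readonly ILogger<PredictController> _logger;

    public PredictController(
        PredictionModel<ChargeData, ChargePrediction> model,
        ILogger<PredictController> logger)
    {
        _model = model;
        _logger = logger;
    }

    [HttpPost]
    public ChargePrediction Post([FromBody] ChargeData data)
    {
        // Run the singleton model against the posted input.
        var prediction = _model.Predict(data);

        _logger.LogInformation(
            "Predicted {ChargeId} for input {Description}",
            prediction.PredictedChargeId, data.Description);

        return prediction;
    }
}
```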
We hosted the endpoint in an Azure App Service with Swagger enabled, which let us submit test inputs through the Swagger UI and inspect the predictions that came back.
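Enabling the Swagger test page typically takes only a few lines with Swashbuckle. A minimal sketch, assuming the Swashbuckle.AspNetCore package (the title and version strings are placeholders):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;

public partial class Startup
{
    public void ConfigureSwagger(IServiceCollection services)
    {
        // Generate an OpenAPI document describing the prediction endpoint.
        services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "Charge Id API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated document and the interactive test UI.
        app.UseSwagger();
        app.UseSwaggerUI(c =>
        {
            c.SwaggerEndpoint("/swagger/v1/swagger.json", "Charge Id API v1");
        });

        app.UseMvc();
    }
}
```

The UI at /swagger is what we used to post sample inputs and read back the model's predictions.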
We are hoping to move this to a Function App once this issue is resolved: https://github.com/dotnet/machinelearning/issues/569
Posts in this series
Charge Id – scratching the tech itch [ part 1 ]
Charge Id – lean canvas [ part 2 ]
Charge Id – solution overview [ part 3 ]
Charge Id – analysing the data [ part 4 ]
Charge Id – the prediction model [ part 5 ]
Charge Id – deploying a ML.Net Model to Azure [ part 6 ]