The inference results of the MLflow model and the ONNX model created with Azure Machine Learning's AutoML feature are different

ONO,Naoki/小野 尚輝 0 Reputation points
2024-11-19T06:23:16.85+00:00

I built an instance segmentation model using the AutoML feature of Azure Machine Learning. The training job generated both an MLflow model and an ONNX model. When I run inference with these two models, the results differ significantly. Could you explain the cause?

If possible, I would like to create an ONNX model that produces the same inference results as the MLflow model and deploy it locally.
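For reference, one way to quantify how far apart the two models' predictions are is to compare predicted instance masks by intersection-over-union (IoU). This is a minimal sketch with hypothetical mask arrays standing in for the real model outputs; the masks and shapes are illustrative, not taken from either model:

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Hypothetical binarized masks from the two models on the same image
mlflow_mask = np.zeros((4, 4), dtype=bool)
mlflow_mask[1:3, 1:3] = True   # 2x2 region
onnx_mask = np.zeros((4, 4), dtype=bool)
onnx_mask[1:3, 1:4] = True     # 2x3 region, slightly wider

print(mask_iou(mlflow_mask, onnx_mask))  # → 0.666...
```

An IoU well below 1.0 on many instances (rather than small floating-point jitter) suggests a genuine pipeline difference, e.g. in preprocessing, rather than numerical noise.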

Azure Machine Learning

1 answer

  1. romungi-MSFT 48,541 Reputation points Microsoft Employee
    2024-11-19T12:31:53.83+00:00

    @ONO,Naoki/小野 尚輝 I think you need to set up explanations and interpretability to understand the models' behavior better, especially the feature importance of the models that were generated. For the ONNX model, you can check how it can be used locally, but it cannot be tuned in comparison with other model types or architectures. I hope this helps!

    If this answers your query, please click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.
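    To run the exported ONNX model locally, ONNX Runtime is the usual choice. Below is a minimal sketch; the `model.onnx` path, the input name lookup, and the float32 NCHW batch shape are assumptions — inspect `session.get_inputs()` for your model's actual input names and shapes, and make sure your preprocessing (resize, normalization) matches what the MLflow wrapper applies:

```python
import os
import numpy as np

MODEL_PATH = "model.onnx"  # hypothetical path to the AutoML-exported ONNX file

def run_onnx(image_batch):
    """Run one inference pass and return the raw outputs
    (e.g. boxes, labels, scores, masks for instance segmentation)."""
    import onnxruntime as ort
    session = ort.InferenceSession(MODEL_PATH)
    input_name = session.get_inputs()[0].name
    return session.run(None, {input_name: image_batch})

if os.path.exists(MODEL_PATH):
    # Assumed input layout: float32 NCHW batch; verify against the model.
    dummy = np.random.rand(1, 3, 600, 800).astype(np.float32)
    outputs = run_onnx(dummy)
    print([o.shape for o in outputs])
else:
    outputs = None
    print("model.onnx not found; nothing to run")
```

    If the locally-run ONNX outputs diverge from the MLflow model's, the first thing to check is whether the preprocessing you apply before `run_onnx` matches the preprocessing baked into the MLflow model's predict pipeline.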

