Issue with Fine-Tuning Microsoft Healthcare Multi-Model MedImageInsight for Classification

Kireeti Magalanadu 0 Reputation points
2025-02-27T14:39:29.3166667+00:00

I have been working with the Microsoft Healthcare Multi-Model MedImageInsight and successfully deployed the endpoint online. I was able to consume it using both the default method provided in the code and a custom approach.

However, I now want to fine-tune this model for a classification task. The GitHub instructions mention achieving zero-shot classification by installing a directory called package in Python, which I followed along with all the other steps.

Despite this, my code is failing at a specific point and is unable to consume the model. When I traced the methods in the failing cell, they led back to classes within the installed package, but I am unable to pinpoint what I might have missed.

I would appreciate any guidance or suggestions on resolving this issue. Has anyone encountered a similar problem, or is there a common mistake I might be overlooking?

[Attached screenshot: error traceback from the failing cell]


1 answer

  1. Prashanth Veeragoni 640 Reputation points Microsoft Vendor
    2025-02-28T03:36:58.5733333+00:00

    Hi Kireeti Magalanadu,

    Welcome to Microsoft Q&A forum. Thank you for posting your query.

    I understand that you are working with the Microsoft Healthcare Multi-Model MedImageInsight and have successfully deployed the endpoint online. Initially, you were able to consume the model using both the default method and a custom approach. However, when attempting to fine-tune the model for a classification task, your code fails at a specific point, preventing you from consuming the model.

    Looking closely at the error traceback, the failure occurs within the installed package's internal methods, specifically in the function responsible for returning the model inference result. The error message indicates that the endpoint being accessed does not exist or is unreachable.

    In other words, when your code sends an inference request to the endpoint, the request fails because the endpoint cannot be found.

    Here are the possible causes of this issue:

    Incorrect Endpoint URL:

    The URL being used for inference might be incorrect or outdated.

    This could happen if the endpoint name changed after redeployment.

    If you copied the URL manually, there might be formatting errors.

    Endpoint is Not Running or Deployed Correctly:

    The model deployment could have failed due to a configuration issue.

    If the endpoint is stopped or not in an active state, it will not be accessible.

    Missing or Incorrect Authentication:

    If the endpoint requires an API key or token for authentication, missing this information could cause a request failure.

    If you are using Azure’s authentication system, the credentials might not be set up correctly.

    Network Connectivity Issues:

    If you are running the request from an Azure Virtual Machine (VM) or a local machine, network restrictions could prevent access to the endpoint.

    Firewall rules or private network settings might block requests to the inference service.

    Deployment Expired or Deleted:

    If the deployment was temporary, it might have been deleted or expired.

    Azure periodically cleans up inactive resources, which could result in a missing endpoint.

    To Resolve the Issue:

    Verify the Endpoint URL:

    Go to Azure Machine Learning Studio → Endpoints section.

    Locate your deployed model and check the endpoint URL.

    Ensure that the URL in your code matches the one displayed in the Azure portal.

    If the URLs do not match, update your code with the correct URL.
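    If you prefer to check this from code rather than the portal, here is a minimal sketch using the Azure ML Python SDK v2 (azure-ai-ml); the subscription, resource group, workspace, and endpoint names are placeholders you would replace with your own values:

    ```python
    # Minimal sketch (not the MedImageInsight package code): look up the endpoint's
    # scoring URI with the Azure ML SDK v2 and compare it with the URL in your code.
    from azure.ai.ml import MLClient
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",      # placeholder
        resource_group_name="<resource-group>",   # placeholder
        workspace_name="<workspace-name>",        # placeholder
    )

    endpoint = ml_client.online_endpoints.get(name="<endpoint-name>")  # placeholder name
    print("Scoring URI:", endpoint.scoring_uri)
    ```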

    Check If the Endpoint Is Active:

    In Azure ML Studio, check the status of the deployment.

    If the status is "Stopped" or "Failed", restart or redeploy the endpoint.

    If it is running but still inaccessible, check the logs for any errors during deployment.
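    Assuming the same ml_client as in the previous sketch, you can also check the endpoint state and pull the deployment logs programmatically; the deployment name below is a placeholder:

    ```python
    # Sketch: inspect the endpoint's state, traffic split, and recent deployment logs.
    endpoint = ml_client.online_endpoints.get(name="<endpoint-name>")
    print("Provisioning state:", endpoint.provisioning_state)  # expect "Succeeded"
    print("Traffic split:", endpoint.traffic)                  # a deployment should receive traffic

    # Pull recent container logs from the deployment to look for startup or scoring errors.
    logs = ml_client.online_deployments.get_logs(
        name="<deployment-name>",         # placeholder
        endpoint_name="<endpoint-name>",  # placeholder
        lines=100,
    )
    print(logs)
    ```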

    Ensure Proper Authentication:

    If your endpoint requires an API key, retrieve it from Azure ML Studio under Keys & Tokens.

    Ensure that your request includes the correct authentication method.

    If using Azure’s identity-based authentication, confirm that your credentials are properly set up and have access permissions.
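    For key-based authentication, the key can also be retrieved with the SDK. The sketch below assumes key auth; for token-based or Microsoft Entra ID authentication the returned object and header are different:

    ```python
    # Sketch (key auth assumed): fetch the endpoint's primary key and build request headers.
    keys = ml_client.online_endpoints.get_keys(name="<endpoint-name>")  # placeholder name
    api_key = keys.primary_key  # token-based auth exposes an access token instead

    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    ```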

    Test Network Connectivity:

    Try accessing the endpoint directly through a browser or a simple command-line tool like cURL.

    If accessing from an Azure Virtual Machine, ensure that the VM has the necessary permissions to communicate with Azure Machine Learning services.

    Check if your firewall or proxy settings are blocking requests.
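    If you want to stay in Python instead of cURL, a simple reachability check with the requests library looks like the sketch below. The empty JSON body is not a valid MedImageInsight payload, but any HTTP status code (even 4xx) proves the endpoint is reachable, while a connection error points to the URL, firewall, or network configuration:

    ```python
    # Sketch: reachability check against the scoring URI using a placeholder payload.
    import requests

    scoring_uri = endpoint.scoring_uri  # from the earlier lookup
    try:
        response = requests.post(scoring_uri, headers=headers, json={}, timeout=30)
        print("HTTP status:", response.status_code)  # a 4xx still means the endpoint exists
        print(response.text[:500])
    except requests.exceptions.ConnectionError as err:
        print("Endpoint unreachable (wrong URL, firewall, or private network?):", err)
    ```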

    Redeploy the Model if Needed:

    If none of the above steps resolve the issue, consider redeploying the model.

    Delete the existing endpoint and create a new one with the same configuration.

    After redeployment, verify the new endpoint URL and update your code accordingly.
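    If you do redeploy, the broken endpoint can be removed with the SDK before recreating it from the MedImageInsight deployment notebook; this is only a sketch, and deletion cannot be undone:

    ```python
    # Sketch: delete the broken endpoint before redeploying (irreversible).
    poller = ml_client.online_endpoints.begin_delete(name="<endpoint-name>")  # placeholder name
    poller.wait()

    # Recreate the endpoint and deployment following the healthcareai-examples notebooks,
    # then re-run the scoring-URI lookup above and update the URL and key in your code.
    ```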

    An advanced call example notebook is available on GitHub: https://github.com/microsoft/healthcareai-examples/blob/main/azureml/medimageinsight/advanced-call-example.ipynb

    Hope this helps. Do let us know if you have any further queries.

    Thank you.

