Assistance Needed: "numpy.core.multiarray failed to import" Error in the "Model Performance Metrics Computation" Step of a Monitoring Job in an Azure ML Workspace

Vivek Kumar 5 Reputation points
2025-02-11T14:43:14.9733333+00:00

I am encountering an issue while running a monitoring pipeline job in Azure Machine Learning. During the "Model Performance - Compute Metrics" step, I receive the following error:

"numpy.core.multiarray failed to import"

I do not have control over the packages the job uses while it runs.

I am using this schema to create the YAML file for setting up the monitoring job:

$schema: https://azuremlschemas.azureedge.net/latest/monitorSchedule.schema.json

I have also not been able to find anything in the documentation about whether a custom environment can be set up there. The monitoring job runs its steps automatically; I only provide the metrics it needs to calculate and the data it requires. Has anyone else experienced a similar problem, and if so, how did you address it?

Azure Machine Learning
An Azure machine learning service for building and deploying models.

1 answer

  1. Manas Mohanty 745 Reputation points Microsoft Vendor
    2025-02-12T09:56:17.4033333+00:00

    Hi Vivek Kumar!

    Sorry for the delay in response. I checked with our team internally.

    You can create a custom signal component and include it in the model monitoring YAML; the component lets you control the environment used on the Spark compute.

    # custom-monitoring.yaml
    $schema: http://azureml/sdk-2-0/Schedule.json
    name: my-custom-signal
    trigger:
      type: recurrence
      frequency: day # can be minute, hour, day, week, month
      interval: 7 # every 7 days
    create_monitor:
      compute:
        instance_type: "standard_e4s_v3"
        runtime_version: "3.3"
      monitoring_signals:
        customSignal:
          type: custom
          component_id: azureml:my_custom_signal:1.0.0 # your custom component
          input_data:
            production_data:
              input_data:
                type: uri_folder
                path: azureml:my_production_data:1
              data_context: test
              data_window:
                lookback_window_size: P30D
                lookback_window_offset: P7D
              pre_processing_component: azureml:custom_preprocessor:1.0.0
          metric_thresholds:
            - metric_name: std_deviation
              threshold: 2
      alert_notification:
        emails:
          - abc@example.com
    
    az ml schedule create -f ./custom-monitoring.yaml
    

    Reference: Custom signal to update Spark computes with dependencies. I hope this fixes your issue.
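
    If the numpy error originates from the packages available to the monitoring step, the custom component approach above lets you pin compatible versions yourself. Below is a sketch of what such a component spec might look like; the file names, the conda dependencies, and the exact field layout are illustrative assumptions on my part, not the documented fix, so please verify them against the AzureML Spark component schema before use.

    ```yaml
    # custom_signal_component.yaml -- illustrative sketch only;
    # verify field names against the AzureML Spark component schema
    $schema: https://azuremlschemas.azureedge.net/latest/sparkComponent.schema.json
    name: my_custom_signal
    version: 1.0.0
    type: spark
    code: ./src
    entry:
      file: custom_signal.py   # your signal computation script (hypothetical name)
    conf:
      spark.driver.cores: 1
      spark.driver.memory: 2g
      spark.executor.cores: 2
      spark.executor.memory: 2g
      spark.executor.instances: 2
    environment:
      conda_file: ./conda.yaml # environment with pinned dependencies

    # conda.yaml -- pin numpy/pandas to mutually compatible versions to
    # avoid the "numpy.core.multiarray failed to import" ABI mismatch
    # (version numbers below are examples, not a tested combination)
    name: monitor-env
    channels:
      - conda-forge
    dependencies:
      - python=3.10
      - pip
      - pip:
          - numpy==1.26.4
          - pandas==2.1.4
    ```

    Register the component before referencing it from the monitoring YAML, for example with `az ml component create -f ./custom_signal_component.yaml`; the `component_id` in the schedule file then resolves to this registered version.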

    Thank you.

