ModuleStep Class
Creates an Azure Machine Learning pipeline step to run a specific version of a Module.
Module objects define reusable computations, such as scripts or executables, that can be used in different machine learning scenarios and by different users. To use a specific version of a Module in a pipeline, create a ModuleStep. A ModuleStep is a step in a pipeline that uses an existing ModuleVersion.
For an example of using ModuleStep, see the notebook https://aka.ms/pl-modulestep.
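For example, an existing Module can be retrieved from the workspace by name and then passed to a ModuleStep, either directly or through one of its ModuleVersion objects. A minimal sketch, assuming a workspace config file is present and a module named "AddTwoNumbers" was published earlier (both are assumptions for illustration):

    from azureml.core import Workspace
    from azureml.pipeline.core import Module

    # Assumed: config.json is available locally and a module named
    # "AddTwoNumbers" (hypothetical) has already been published.
    ws = Workspace.from_config()
    sum_module = Module.get(ws, name="AddTwoNumbers")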
- Inheritance
- ModuleStep
Constructor
ModuleStep(module=None, version=None, module_version=None, inputs_map=None, outputs_map=None, compute_target=None, runconfig=None, runconfig_pipeline_params=None, arguments=None, params=None, name=None, _workflow_provider=None)
Parameters
| Name | Description |
|---|---|
| module | The module used in the step. Provide either the module or the module_version parameter. Default value: None |
| version | The version of the module used in the step. Default value: None |
| module_version | A ModuleVersion of the module used in the step. Provide either the module or the module_version parameter. Default value: None |
| inputs_map | dict[str, Union[InputPortBinding, DataReference, PortDataReference, PipelineData, PipelineOutputAbstractDataset, DatasetConsumptionConfig]]. A dictionary that maps the names of port definitions of the ModuleVersion to the step's inputs. Default value: None |
| outputs_map | dict[str, Union[OutputPortBinding, DataReference, PortDataReference, PipelineData, PipelineOutputAbstractDataset]]. A dictionary that maps the names of port definitions of the ModuleVersion to the step's outputs. Default value: None |
| compute_target | The compute target to use. If unspecified, the target from the runconfig is used. May be a compute target object or the string name of a compute target on the workspace. If the compute target is not available at pipeline creation time, you may specify a tuple of ('compute target name', 'compute target type') to avoid fetching the compute target object (the AmlCompute type is 'AmlCompute' and the RemoteCompute type is 'VirtualMachine'); see the sketch after this table. Default value: None |
| runconfig | An optional RunConfiguration to use. A RunConfiguration can be used to specify additional requirements for the run, such as conda dependencies and a Docker image. Default value: None |
| runconfig_pipeline_params | An override of runconfig properties at runtime using key-value pairs, each with the name of the runconfig property and the PipelineParameter for that property. Supported values: 'NodeCount', 'MpiProcessCountPerNode', 'TensorflowWorkerCount', 'TensorflowParameterServerCount'. See the sketch after this table. Default value: None |
| arguments | A list of command-line arguments for the Python script file. The arguments are delivered to the compute target via the arguments parameter in RunConfiguration. For details on how to handle arguments such as special symbols, see the arguments parameter in RunConfiguration. Default value: None |
| params | A dictionary of name-value pairs. Default value: None |
| name | The name of the step. Default value: None |
| _workflow_provider | (Internal use only.) The workflow provider (azureml.pipeline.core._aeva_provider._AevaWorkflowProvider). Default value: None |
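As a sketch of how these parameters fit together, the snippet below wires a hypothetical module to one input and one output, uses the ('compute target name', 'compute target type') tuple form of compute_target, and overrides 'NodeCount' at runtime through runconfig_pipeline_params. The module name, port names, and compute target name are assumptions for illustration, not part of the SDK:

    from azureml.core import Workspace
    from azureml.core.runconfig import RunConfiguration
    from azureml.data.data_reference import DataReference
    from azureml.pipeline.core import Module, PipelineData, PipelineParameter
    from azureml.pipeline.steps import ModuleStep

    ws = Workspace.from_config()
    datastore = ws.get_default_datastore()

    # Hypothetical module published earlier; the port names "input_data" and
    # "output_data" are assumptions about its ModuleVersion definition.
    module = Module.get(ws, name="ProcessData")

    input_data = DataReference(datastore=datastore, data_reference_name="raw_data",
                               path_on_datastore="raw/")
    output_data = PipelineData("processed", datastore=datastore)

    step = ModuleStep(
        module=module,
        version="2",                                   # pin a specific module version
        inputs_map={"input_data": input_data},         # ModuleVersion port -> step input
        outputs_map={"output_data": output_data},      # ModuleVersion port -> step output
        compute_target=("cpu-cluster", "AmlCompute"),  # tuple form; avoids fetching the target object
        runconfig=RunConfiguration(),
        runconfig_pipeline_params={
            "NodeCount": PipelineParameter(name="node_count", default_value=1)
        },
        name="process_data_step")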
Remarks
A Module is used to create and manage a reusable computational unit of an Azure Machine Learning pipeline. ModuleStep is the built-in step in Azure Machine Learning used to consume a Module. You can either specify exactly which ModuleVersion to use or let Azure Machine Learning resolve the ModuleVersion following the resolution process described in the remarks section of the Module class. To define which ModuleVersion is used in a submitted pipeline, provide one of the following when creating a ModuleStep:
- A ModuleVersion object.
- A Module object and a version value.
- A Module object without a version value. In this case, version resolution may vary across submissions.
You must define the mapping between the ModuleStep's inputs and outputs and the ModuleVersion's inputs and outputs.
The following example shows how to create a ModuleStep as part of a pipeline with multiple ModuleStep objects:

    middle_step = ModuleStep(module=module,
                             inputs_map=middle_step_input_wiring,
                             outputs_map=middle_step_output_wiring,
                             runconfig=RunConfiguration(),
                             compute_target=aml_compute,
                             arguments=["--file_num1", first_sum, "--file_num2", first_prod,
                                        "--output_sum", middle_sum, "--output_product", middle_prod])
The full sample is available at https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-how-to-use-modulestep.ipynb.
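The wiring dictionaries and intermediate PipelineData objects used in the example above are defined in the linked notebook. A hedged sketch of what such wiring might look like, with the port names 'in1', 'in2', 'out_sum', and 'out_prod' assumed for illustration (the real names are defined by the ModuleVersion):

    from azureml.core import Workspace
    from azureml.pipeline.core import PipelineData

    ws = Workspace.from_config()              # assumed: workspace config is present
    datastore = ws.get_default_datastore()

    # Outputs of an upstream step, consumed here as inputs.
    first_sum = PipelineData("first_sum", datastore=datastore)
    first_prod = PipelineData("first_prod", datastore=datastore)

    # Outputs produced by middle_step.
    middle_sum = PipelineData("middle_sum", datastore=datastore)
    middle_prod = PipelineData("middle_prod", datastore=datastore)

    # Keys are port names defined on the ModuleVersion (assumed here);
    # values are the pipeline objects bound to those ports for this step.
    middle_step_input_wiring = {"in1": first_sum, "in2": first_prod}
    middle_step_output_wiring = {"out_sum": middle_sum, "out_prod": middle_prod}

The resulting steps can then be assembled with Pipeline(workspace=ws, steps=[...]) and submitted, as shown in the notebook.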
Methods
| Name | Description |
|---|---|
| create_node | Create a node from the ModuleStep step and add it to the specified graph. This method is not intended to be used directly. When a pipeline is instantiated with this step, Azure ML automatically passes the parameters required through this method so that the step can be added to a pipeline graph that represents the workflow. |
create_node
Create a node from the ModuleStep step and add it to the specified graph.
This method is not intended to be used directly. When a pipeline is instantiated with this step, Azure ML automatically passes the parameters required through this method so that the step can be added to a pipeline graph that represents the workflow.
create_node(graph, default_datastore, context)
Parameters
| Name | Description |
|---|---|
| graph | Required. The graph object to add the node to. |
| default_datastore | Required. The default datastore. |
| context | Required. The graph context (azureml.pipeline.core._GraphContext). |
Returns
| Type | Description |
|---|---|
| Node | The node object. |