SweepJob Class
Sweep job for hyperparameter tuning.
Note
For sweep jobs, inputs, outputs, and parameters are accessible as environment variables using the prefix
AZUREML_SWEEP_. For example, if you have a parameter named "learning_rate", you can access it as
AZUREML_SWEEP_learning_rate.
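For instance, a trial script might read a swept parameter back out of the environment. A minimal sketch, where the parameter name "learning_rate" is illustrative and the variable is set manually here to simulate what the sweep service does:

```python
import os

# The sweep service sets one AZUREML_SWEEP_-prefixed variable per sampled
# hyperparameter. Simulate that for a parameter named "learning_rate":
os.environ["AZUREML_SWEEP_learning_rate"] = "0.01"

# Inside the trial script, read the sampled value back:
learning_rate = float(os.environ["AZUREML_SWEEP_learning_rate"])
```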
- Inheritance
  - azure.ai.ml.entities._job.job.Job → SweepJob
  - azure.ai.ml.entities._job.sweep.parameterized_sweep.ParameterizedSweep → SweepJob
  - azure.ai.ml.entities._job.job_io_mixin.JobIOMixin → SweepJob
Constructor
SweepJob(*, name: str | None = None, description: str | None = None, tags: Dict | None = None, display_name: str | None = None, experiment_name: str | None = None, identity: ManagedIdentityConfiguration | AmlTokenConfiguration | UserIdentityConfiguration | None = None, inputs: Dict[str, Input | str | bool | int | float] | None = None, outputs: Dict | None = None, compute: str | None = None, limits: SweepJobLimits | None = None, sampling_algorithm: str | SamplingAlgorithm | None = None, search_space: Dict[str, Choice | LogNormal | LogUniform | Normal | QLogNormal | QLogUniform | QNormal | QUniform | Randint | Uniform] | None = None, objective: Objective | None = None, trial: CommandJob | CommandComponent | None = None, early_termination: EarlyTerminationPolicy | BanditPolicy | MedianStoppingPolicy | TruncationSelectionPolicy | None = None, queue_settings: QueueSettings | None = None, resources: dict | JobResourceConfiguration | None = None, **kwargs: Any)
Keyword-Only Parameters
Name | Description
---|---
name | Name of the job.
display_name | Display name of the job.
description | Description of the job.
tags | Tag dictionary. Tags can be added, removed, and updated.
properties | The asset property dictionary.
experiment_name | Name of the experiment the job will be created under. If None is provided, the job will be created under the experiment 'Default'.
identity | Identity that the training job will use while running on compute. One of ManagedIdentityConfiguration, AmlTokenConfiguration, or UserIdentityConfiguration.
inputs | Inputs to the command.
outputs | Mapping of output data bindings used in the job.
sampling_algorithm | The hyperparameter sampling algorithm to use over the search_space. Defaults to "random".
search_space | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression.
objective | Metric to optimize for.
compute | The compute target the job runs on.
trial | The job configuration for each trial. Each trial is provided with a different combination of hyperparameter values that the system samples from the search_space.
early_termination | The early termination policy to use. A trial job is canceled when the criteria of the specified policy are met. If omitted, no early termination policy is applied.
limits | Limits for the sweep job (SweepJobLimits).
queue_settings | Queue settings for the job.
resources | Compute resource configuration for the job.
kwargs | A dictionary of additional configuration parameters.
Examples
Creating a SweepJob
```python
from azure.ai.ml.entities import CommandJob
from azure.ai.ml.sweep import BayesianSamplingAlgorithm, Choice, Objective, SweepJob, SweepJobLimits

# cpu_cluster and job_env are assumed to be an existing compute target
# and a registered environment, respectively.
command_job = CommandJob(
    inputs=dict(kernel="linear", penalty=1.0),
    compute=cpu_cluster,
    environment=f"{job_env.name}:{job_env.version}",
    code="./scripts",
    command="python scripts/train.py --kernel $kernel --penalty $penalty",
    experiment_name="sklearn-iris-flowers",
)

sweep = SweepJob(
    sampling_algorithm=BayesianSamplingAlgorithm(),
    trial=command_job,
    search_space={"ss": Choice(type="choice", values=[{"space1": True}, {"space2": True}])},
    inputs={"input1": {"file": "top_level.csv", "mode": "ro_mount"}},
    compute="top_level",
    limits=SweepJobLimits(trial_timeout=600),
    objective=Objective(goal="maximize", primary_metric="accuracy"),
)
```
Methods
Method | Description
---|---
dump | Dumps the job content into a file in YAML format.
set_limits | Set limits for Sweep node. Leave parameters as None if you don't want to update corresponding values.
set_objective | Set the sweep objective. Leave parameters as None if you don't want to update corresponding values.
set_resources | Set resources for Sweep.
dump
Dumps the job content into a file in YAML format.
dump(dest: str | PathLike | IO, **kwargs: Any) -> None
Parameters
Name | Description
---|---
dest (Required) | The local path or file stream to write the YAML content to. If dest is a file path, a new file will be created. If dest is an open file, the file will be written to directly.
Exceptions
Type | Description
---|---
 | Raised if dest is a file path and the file already exists.
 | Raised if dest is an open file and the file is not writable.
set_limits
Set limits for Sweep node. Leave parameters as None if you don't want to update corresponding values.
set_limits(*, max_concurrent_trials: int | None = None, max_total_trials: int | None = None, timeout: int | None = None, trial_timeout: int | None = None) -> None
Keyword-Only Parameters
Name | Description
---|---
max_concurrent_trials | Maximum number of trials that may run concurrently.
max_total_trials | Maximum total number of trials.
timeout | Total timeout in seconds for the sweep node.
trial_timeout | Timeout in seconds for each trial.
set_objective
Set the sweep objective. Leave parameters as None if you don't want to update corresponding values.
set_objective(*, goal: str | None = None, primary_metric: str | None = None) -> None
Keyword-Only Parameters
Name | Description
---|---
goal | Defines supported metric goals for hyperparameter tuning. Acceptable values are: "minimize" and "maximize".
primary_metric | Name of the metric to optimize.
set_resources
Set resources for Sweep.
set_resources(*, instance_type: str | List[str] | None = None, instance_count: int | None = None, locations: List[str] | None = None, properties: Dict | None = None, docker_args: str | None = None, shm_size: str | None = None) -> None
Keyword-Only Parameters
Name | Description
---|---
instance_type | The instance type to use for the job.
instance_count | The number of instances to use for the job.
locations | The locations to use for the job.
properties | The properties for the job.
docker_args | The Docker arguments for the job.
shm_size | The shared memory size for the job.
Attributes
base_path
creation_context
The creation context of the resource.
Returns
Type | Description
---|---
 | The creation metadata for the resource.
early_termination
Early termination policy for sweep job.
Returns
Type | Description
---|---
EarlyTerminationPolicy | Early termination policy for sweep job.
id
The resource ID.
Returns
Type | Description
---|---
 | The global ID of the resource, an Azure Resource Manager (ARM) ID.
inputs
limits
log_files
Job output files.
Returns
Type | Description
---|---
 | The dictionary of log names and URLs.
outputs
resources
sampling_algorithm
Sampling algorithm for sweep job.
Returns
Type | Description
---|---
 | Sampling algorithm for sweep job.
status
The status of the job.
Common values returned include "Running", "Completed", and "Failed". All possible values are:

- NotStarted - This is a temporary state that client-side Run objects are in before cloud submission.
- Starting - The Run has started being processed in the cloud. The caller has a run ID at this point.
- Provisioning - On-demand compute is being created for a given job submission.
- Preparing - The run environment is being prepared and is in one of two stages:
  - Docker image build
  - conda environment setup
- Queued - The job is queued on the compute target. For example, in BatchAI, the job is in a queued state while waiting for all the requested nodes to be ready.
- Running - The job has started to run on the compute target.
- Finalizing - User code execution has completed, and the run is in post-processing stages.
- CancelRequested - Cancellation has been requested for the job.
- Completed - The run has completed successfully. This includes both the user code execution and run post-processing stages.
- Failed - The run failed. Usually the Error property on a run will provide details as to why.
- Canceled - Follows a cancellation request and indicates that the run is now successfully cancelled.
- NotResponding - For runs that have Heartbeats enabled, no heartbeat has been recently sent.
Returns
Type | Description
---|---
 | Status of the job.
studio_url
type