SynapseSparkCompute Class
Note
This is an experimental class, and may change at any time. Please see https://aka.ms/azuremlexperimental for more information.
SynapseSpark Compute resource.
Constructor
SynapseSparkCompute(*, name: str, description: str | None = None, tags: Dict[str, str] | None = None, node_count: int | None = None, node_family: str | None = None, node_size: str | None = None, spark_version: str | None = None, identity: IdentityConfiguration | None = None, scale_settings: AutoScaleSettings | None = None, auto_pause_settings: AutoPauseSettings | None = None, **kwargs: Any)
Keyword-Only Parameters
| Name | Description |
|---|---|
| name | The name of the compute. |
| description | The description of the resource. Defaults to None. |
| tags | The set of resource tags defined as key/value pairs. Defaults to None. |
| node_count | The number of nodes in the compute. |
| node_family | The node family of the compute. |
| node_size | The size of the node. |
| spark_version | The version of Spark to use. |
| identity | The configuration of identities that are associated with the compute cluster. |
| scale_settings | The scale settings for the compute. |
| auto_pause_settings | The auto pause settings for the compute. |
| kwargs | Additional keyword arguments passed to the parent class. |
Examples
Creating Synapse Spark compute.
from azure.ai.ml.entities import (
    AutoPauseSettings,
    AutoScaleSettings,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
    SynapseSparkCompute,
)

synapse_compute = SynapseSparkCompute(
    name="synapse_name",
    resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.Synapse/workspaces/workspace/bigDataPools/pool",
    identity=IdentityConfiguration(
        type="UserAssigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(
                resource_id="/subscriptions/subscription/resourceGroups/group/providers/Microsoft.ManagedIdentity/userAssignedIdentities/identity"
            )
        ],
    ),
    scale_settings=AutoScaleSettings(min_node_count=1, max_node_count=3, enabled=True),
    auto_pause_settings=AutoPauseSettings(delay_in_minutes=10, enabled=True),
)
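Once constructed, the compute object can be attached to an Azure Machine Learning workspace through the compute operations of MLClient. The following is a minimal sketch, not part of this page's reference content: it assumes the azure-identity package is installed, that DefaultAzureCredential can authenticate, and uses placeholder subscription, resource group, and workspace values.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder workspace coordinates; substitute real values.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Attach the Synapse Spark pool defined above. begin_create_or_update
# starts a long-running operation and returns a poller.
poller = ml_client.compute.begin_create_or_update(synapse_compute)
attached_compute = poller.result()
print(attached_compute.provisioning_state)
```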
Methods

| Name | Description |
|---|---|
| dump | Dump the compute content into a file in YAML format. |
dump

Dump the compute content into a file in YAML format.

dump(dest: str | PathLike | IO, **kwargs: Any) -> None

Parameters

| Name | Description |
|---|---|
| dest (Required) | The destination to receive this compute's content. Must be either a path to a local file or an already-open file stream. If dest is a file path, a new file will be created, and an exception is raised if the file already exists. If dest is an open file, the file will be written to directly, and an exception will be raised if the file is not writable. |
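As an illustration of the two accepted destination types, here is a brief sketch; the file name is a placeholder, and the file-path call raises if that file already exists.

```python
import io

# Dump to a new local YAML file (raises if the file already exists).
synapse_compute.dump("synapse-compute.yml")

# Or dump to an already-open, writable stream.
buffer = io.StringIO()
synapse_compute.dump(buffer)
print(buffer.getvalue())
```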
Attributes
base_path
created_on
creation_context
The creation context of the resource.
Returns
| Type | Description |
|---|---|
| | The creation metadata for the resource. |
id
provisioning_errors
provisioning_state
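These attributes are read-only and populated by the service once the compute exists in a workspace. A short sketch of inspecting them, reusing the hypothetical ml_client from the attach example above:

```python
# Fetch the attached compute and read the service-populated attributes.
retrieved = ml_client.compute.get("synapse_name")
print(retrieved.id)
print(retrieved.created_on)
print(retrieved.provisioning_state)
print(retrieved.provisioning_errors)
```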