Clusters CLI (legacy)
Important
This documentation has been retired and might not be updated.
This information applies to legacy Databricks CLI versions 0.18 and below. Databricks recommends that you use newer Databricks CLI version 0.205 or above instead. See What is the Databricks CLI?. To find your version of the Databricks CLI, run databricks -v.
To migrate from Databricks CLI version 0.18 or below to Databricks CLI version 0.205 or above, see Databricks CLI migration.
You run Databricks Clusters CLI subcommands by appending them to databricks clusters. These subcommands call the Clusters API.
databricks clusters -h
Usage: databricks clusters [OPTIONS] COMMAND [ARGS]...
Utility to interact with Databricks clusters.
Options:
-v, --version [VERSION]
-h, --help Show this message and exit.
Commands:
create Creates a Databricks cluster.
Options:
--json-file PATH File containing JSON request to POST to /api/2.0/clusters/create.
--json JSON JSON string to POST to /api/2.0/clusters/create.
delete Removes a Databricks cluster.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
edit Edits a Databricks cluster.
Options:
--json-file PATH File containing JSON request to POST to /api/2.0/clusters/edit.
--json JSON JSON string to POST to /api/2.0/clusters/edit.
events Gets events for a Spark cluster.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/#/setting/clusters/$CLUSTER_ID/configuration. [required]
--start-time TEXT The start time in epoch milliseconds. If
unprovided, returns events starting from the
beginning of time.
--end-time TEXT The end time in epoch milliseconds. If unprovided,
returns events up to the current time
--order TEXT The order to list events in; either ASC or DESC.
Defaults to DESC (most recent first).
--event-type TEXT An event types to filter on (specify multiple event
types by passing the --event-type option multiple
times). If empty, all event types are returned.
--offset TEXT The offset in the result set. Defaults to 0 (no
offset). When an offset is specified and the
results are requested in descending order, the
end_time field is required.
--limit TEXT The maximum number of events to include in a page
of events. Defaults to 50, and maximum allowed
value is 500.
--output FORMAT can be "JSON" or "TABLE". Set to TABLE by default.
get Retrieves metadata about a cluster.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
list Lists active and recently terminated clusters.
Options:
--output FORMAT JSON or TABLE. Set to TABLE by default.
list-node-types Lists node types for a cluster.
list-zones Lists zones where clusters can be created.
permanent-delete Permanently deletes a cluster.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
resize Resizes a Databricks cluster given its ID.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
--num-workers INTEGER Number of workers. [required]
restart Restarts a Databricks cluster.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
spark-versions Lists possible Databricks Runtime versions.
start Starts a terminated Databricks cluster.
Options:
--cluster-id CLUSTER_ID Can be found in the URL at https://<databricks-instance>/?o=<16-digit-number>#/setting/clusters/$CLUSTER_ID/configuration.
Create a cluster
To display usage documentation, run databricks clusters create --help.
databricks clusters create --json-file create-cluster.json
create-cluster.json:
{
"cluster_name": "my-cluster",
"spark_version": "7.3.x-scala2.12",
"node_type_id": "Standard_D3_v2",
"spark_conf": {
"spark.speculation": true
},
"num_workers": 25
}
{
"cluster_id": "1234-567890-batch123"
}
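As the command help above shows, create also accepts the request inline through --json instead of --json-file. A minimal sketch that passes the same placeholder settings as an inline JSON string:
databricks clusters create --json '{
  "cluster_name": "my-cluster",
  "spark_version": "7.3.x-scala2.12",
  "node_type_id": "Standard_D3_v2",
  "spark_conf": { "spark.speculation": true },
  "num_workers": 25
}'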
Delete a cluster
To display usage documentation, run databricks clusters delete --help.
databricks clusters delete --cluster-id 1234-567890-batch123
If successful, no output is displayed.
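If you do not have the cluster ID at hand, one way to look it up by name is to combine the list subcommand (described later in this article) with jq. A sketch, assuming jq is installed and the cluster is named my-cluster:
databricks clusters list --output JSON | jq -r '.clusters[] | select(.cluster_name == "my-cluster") | .cluster_id'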
Change a cluster's configuration
To display usage documentation, run databricks clusters edit --help.
databricks clusters edit --json-file edit-cluster.json
edit-cluster.json:
{
"cluster_id": "1234-567890-batch123",
"num_workers": 10,
"spark_version": "7.3.x-scala2.12",
"node_type_id": "Standard_D3_v2"
}
If successful, no output is displayed.
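Because edit-cluster.json restates the cluster's configuration, it can help to review the current settings first. A sketch that saves the cluster's metadata (using the get subcommand shown later in this article) as a reference when writing edit-cluster.json:
databricks clusters get --cluster-id 1234-567890-batch123 > current-cluster.json
# Review current-cluster.json, copy the settings you want to keep or change into edit-cluster.json, then:
databricks clusters edit --json-file edit-cluster.json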
List events for a cluster
To display usage documentation, run databricks clusters events --help.
databricks clusters events \
--cluster-id 1234-567890-batch123 \
--start-time 1617238800000 \
--end-time 1619485200000 \
--order DESC \
--limit 5 \
--event-type RUNNING \
--output JSON \
| jq .
{
"events": [
{
"cluster_id": "1234-567890-batch123",
"timestamp": 1619214150232,
"type": "RUNNING",
"details": {
"current_num_workers": 2,
"target_num_workers": 2
}
},
...
{
"cluster_id": "1234-567890-batch123",
"timestamp": 1617895221986,
"type": "RUNNING",
"details": {
"current_num_workers": 2,
"target_num_workers": 2
}
}
],
"next_page": {
"cluster_id": "1234-567890-batch123",
"start_time": 1617238800000,
"end_time": 1619485200000,
"order": "DESC",
"event_types": [
"RUNNING"
],
"offset": 5,
"limit": 5
},
"total_count": 11
}
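The next_page object in the response describes the parameters for the following page of events. A sketch of requesting that page by adding the returned offset; note that, per the option help above, end_time is required when an offset is combined with descending order:
databricks clusters events \
--cluster-id 1234-567890-batch123 \
--start-time 1617238800000 \
--end-time 1619485200000 \
--order DESC \
--limit 5 \
--offset 5 \
--event-type RUNNING \
--output JSON \
| jq .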
Get information about a cluster
To display usage documentation, run databricks clusters get --help.
databricks clusters get --cluster-id 1234-567890-batch123
Or:
databricks clusters get --cluster-name my-cluster
{
"cluster_id": "1234-567890-batch123",
"spark_context_id": 3124308392469747564,
"cluster_name": "my-cluster",
"spark_version": "7.5.x-scala2.12",
"spark_conf": {
"spark.databricks.delta.preview.enabled": "true"
},
"node_type_id": "Standard_DS3_v2",
"driver_node_type_id": "Standard_DS3_v2",
"spark_env_vars": {
"PYSPARK_PYTHON": "/databricks/python3/bin/python3"
},
"autotermination_minutes": 0,
"enable_elastic_disk": true,
"disk_spec": {},
"cluster_source": "JOB",
"enable_local_disk_encryption": false,
"azure_attributes": {
"first_on_demand": 1,
"availability": "ON_DEMAND_AZURE",
"spot_bid_max_price": -1.0
},
"instance_source": {
"node_type_id": "Standard_DS3_v2"
},
"driver_instance_source": {
"node_type_id": "Standard_DS3_v2"
},
"state": "TERMINATED",
"state_message": "",
"start_time": 1619563745373,
"terminated_time": 1619563822867,
"last_state_loss_time": 0,
"num_workers": 8,
"default_tags": {
"Vendor": "Databricks",
"Creator": "someone@example.com",
"ClusterName": "my-cluster",
"ClusterId": "1234-567890-batch123",
"JobId": "1268284",
"RunName": "Normal job"
},
"creator_user_name": "someone@example.com",
"termination_reason": {
"code": "JOB_FINISHED",
"type": "SUCCESS"
},
"init_scripts_safe_mode": false
}
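To extract a single field from the response, you can pipe the output through jq, as in other examples in this article. For instance, a sketch that prints only the cluster's state:
databricks clusters get --cluster-id 1234-567890-batch123 | jq -r .state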
List information about all available clusters
To display usage documentation, run databricks clusters list --help.
databricks clusters list --output JSON | jq .
{
"clusters": [
{
"cluster_id": "1234-567890-batch123",
"spark_context_id": 3124308392469747564,
"cluster_name": "my-cluster",
"spark_version": "7.5.x-scala2.12",
"spark_conf": {
"spark.databricks.delta.preview.enabled": "true"
},
"node_type_id": "Standard_DS3_v2",
"driver_node_type_id": "Standard_DS3_v2",
"spark_env_vars": {
"PYSPARK_PYTHON": "/databricks/python3/bin/python3"
},
"autotermination_minutes": 0,
"enable_elastic_disk": true,
"disk_spec": {},
"cluster_source": "JOB",
"enable_local_disk_encryption": false,
"azure_attributes": {
"first_on_demand": 1,
"availability": "ON_DEMAND_AZURE",
"spot_bid_max_price": -1.0
},
"instance_source": {
"node_type_id": "Standard_DS3_v2"
},
"driver_instance_source": {
"node_type_id": "Standard_DS3_v2"
},
"state": "TERMINATED",
"state_message": "",
"start_time": 1619563745373,
"terminated_time": 1619563822867,
"last_state_loss_time": 0,
"num_workers": 8,
"default_tags": {
"Vendor": "Databricks",
"Creator": "someone@example.com",
"ClusterName": "my-cluster",
"ClusterId": "1234-567890-batch123",
"JobId": "1268284",
"RunName": "Normal job"
},
"creator_user_name": "someone@example.com",
"termination_reason": {
"code": "JOB_FINISHED",
"type": "SUCCESS"
},
"init_scripts_safe_mode": false
},
...
]
}
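The full listing can be long. A sketch that uses jq to keep only a few fields per cluster (ID, name, and state, all of which appear in the response above):
databricks clusters list --output JSON | jq '[.clusters[] | {cluster_id, cluster_name, state}]'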
List available cluster node types
To display usage documentation, run databricks clusters list-node-types --help.
databricks clusters list-node-types
{
"node_types": [
{
"node_type_id": "Standard_L80s_v2",
"memory_mb": 655360,
"num_cores": 80.0,
"description": "Standard_L80s_v2",
"instance_type_id": "Standard_L80s_v2",
"is_deprecated": false,
"category": "Storage Optimized",
"support_ebs_volumes": true,
"support_cluster_tags": true,
"num_gpus": 0,
"node_instance_type": {
"instance_type_id": "Standard_L80s_v2",
"local_disks": 1,
"local_disk_size_gb": 800,
"instance_family": "Standard LSv2 Family vCPUs",
"local_nvme_disk_size_gb": 1788,
"local_nvme_disks": 10,
"swap_size": "10g"
},
"is_hidden": false,
"support_port_forwarding": true,
"display_order": 0,
"is_io_cache_enabled": true,
"node_info": {
"available_core_quota": 350.0,
"total_core_quota": 350.0
}
},
...
]
}
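A sketch that filters the response with jq to print only the IDs of node types that are not marked as deprecated:
databricks clusters list-node-types | jq -r '.node_types[] | select(.is_deprecated == false) | .node_type_id'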
List available zones for creating clusters
Note
This command does not work with Azure Databricks.
To display usage documentation, run databricks clusters list-zones --help.
databricks clusters list-zones
Permanently delete a cluster
To display usage documentation, run databricks clusters permanent-delete --help.
databricks clusters permanent-delete --cluster-id 1234-567890-batch123
If successful, no output is displayed.
Resize a cluster
To display usage documentation, run databricks clusters resize --help.
databricks clusters resize --cluster-id 1234-567890-batch123 --num-workers 10
If successful, no output is displayed.
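Because a successful resize prints nothing, one way to confirm the new size is to read num_workers back with the get subcommand; a sketch using jq:
databricks clusters get --cluster-id 1234-567890-batch123 | jq .num_workers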
Restart a cluster
To display usage documentation, run databricks clusters restart --help.
databricks clusters restart --cluster-id 1234-567890-batch123
If successful, no output is displayed.
List available Spark runtime versions
To display usage documentation, run databricks clusters spark-versions --help.
databricks clusters spark-versions
{
"versions": [
{
"key": "8.2.x-scala2.12",
"name": "8.2 (includes Apache Spark 3.1.1, Scala 2.12)"
},
...
]
}
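The key values are what you pass as spark_version when creating or editing a cluster. A sketch that uses jq to print only the keys:
databricks clusters spark-versions | jq -r '.versions[].key'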
Start a cluster
To display usage documentation, run databricks clusters start --help.
databricks clusters start --cluster-id 1234-567890-batch123
If successful, no output is displayed.