Manage identities, permissions, and privileges for Databricks Jobs

This article contains recommendations and instructions for managing identities, permissions, and privileges for Databricks Jobs.

Note

Secrets are not redacted from a cluster’s Spark driver log stdout and stderr streams. To protect sensitive data, by default, Spark driver logs are viewable only by users with the CAN MANAGE permission on clusters configured with job, single user, or shared access mode. To allow users with the CAN ATTACH TO or CAN RESTART permission to view the logs on these clusters, set the following Spark configuration property in the cluster configuration: spark.databricks.acl.needAdminPermissionToViewLogs false.

On No Isolation Shared access mode clusters, the Spark driver logs can be viewed by users with CAN ATTACH TO or CAN MANAGE permission. To limit who can read the logs to only users with the CAN MANAGE permission, set spark.databricks.acl.needAdminPermissionToViewLogs to true.

See Spark configuration to learn how to add Spark properties to a cluster configuration.
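
For example, the following sketch uses the Databricks SDK for Python to create a cluster with this property in its Spark configuration. It is a minimal illustration, not a recommended configuration: the cluster name, Spark runtime version, node type, and worker count are placeholder values, and the snippet assumes the client is already authenticated to your workspace.

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()

# Placeholder cluster settings; adjust the runtime version and node type
# to values available in your workspace and cloud.
w.clusters.create(
    cluster_name="logs-restricted-cluster",
    spark_version="15.4.x-scala2.12",
    node_type_id="i3.xlarge",
    num_workers=2,
    spark_conf={
        # "false" lets users with CAN ATTACH TO or CAN RESTART view driver logs
        # on job, single user, and shared access mode clusters. On No Isolation
        # Shared clusters, set "true" to limit log access to CAN MANAGE users.
        "spark.databricks.acl.needAdminPermissionToViewLogs": "false"
    },
)
```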

Default privileges for jobs

Jobs have the following privileges set by default:

  • The creator of the job is granted the IS OWNER permission.
  • Workspace admins are granted the CAN MANAGE permission.
  • The creator of the job is set for Run as.

Admin permissions for jobs

By default, workspace admins can change the job owner or Run as configuration to any user or service principal in the workspace. Account admins can configure the RestrictWorkspaceAdmins setting to change this behavior. See Restrict workspace admins.

How do jobs interact with Unity Catalog permissions?

Jobs run as the identity of the user or service principal in the Run as setting. This identity is evaluated against permission grants for the following:

  • Unity Catalog-managed assets, including tables, volumes, models, and views.
  • Legacy table access control lists (ACLs) for assets registered in the legacy Hive metastore.
  • ACLs for compute, notebooks, queries, and other workspace assets.
  • Databricks secrets. See Secret management.

Note

Unity Catalog grants and legacy table ACLs require compatible compute access modes. See Configure compute for jobs.
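
For example, if the Run as identity is a service principal, it needs Unity Catalog grants on the data the job reads or writes; grants on the job creator do not apply. The sketch below uses the Databricks SDK for Python to grant a service principal SELECT on a table. The catalog, schema, table name, and application ID are hypothetical placeholders, and exact class names may differ across SDK versions.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import catalog

w = WorkspaceClient()

# Grant the job's Run as service principal read access to a table it needs.
# The table name and the service principal's application ID are placeholders.
w.grants.update(
    securable_type=catalog.SecurableType.TABLE,
    full_name="main.analytics.orders",
    changes=[
        catalog.PermissionsChange(
            principal="6cb3faa5-0000-0000-0000-a5f1e3a9c1b2",  # application ID
            add=[catalog.Privilege.SELECT],
        )
    ],
)
```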

SQL tasks and permissions

The SQL file task is the only SQL task type that fully respects the Run as identity.

SQL query, alert, and legacy dashboard tasks respect their configured sharing settings:

  • Run as owner: Runs of the scheduled SQL task always use the identity of the owner of the configured SQL asset.
  • Run as viewer: Runs of the scheduled SQL task always use the identity set in the job Run as field.

To learn more about query sharing settings, see Configure query permissions.

Example

The following scenario illustrates the interaction of SQL sharing settings and the job Run as setting:

  • User A is the owner of the SQL query named my_query.
  • User A configures my_query with the sharing setting Run as owner.
  • User B schedules my_query as a task in a job named my_job.
  • User B configures my_job to run with a service principal named prod_sp.
  • When my_job runs, it uses the identity for User A to run my_query.

Now assume that User B does not want this behavior. Starting from the existing configuration, the following occurs:

  • User A changes the sharing setting for my_query to Run as viewer.
  • When my_job runs, it uses the identity prod_sp.
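
For reference, the job in this scenario corresponds roughly to a Jobs API 2.1 payload like the following sketch. The query ID, SQL warehouse ID, and the application ID of prod_sp are placeholders, and the sharing setting (Run as owner or Run as viewer) is configured on the SQL query itself, not in the job definition.

```python
# Approximate shape of the Jobs API settings for my_job (placeholder IDs).
my_job_settings = {
    "name": "my_job",
    "run_as": {
        # Application ID of the prod_sp service principal.
        "service_principal_name": "6cb3faa5-0000-0000-0000-a5f1e3a9c1b2"
    },
    "tasks": [
        {
            "task_key": "run_my_query",
            "sql_task": {
                "query": {"query_id": "<query-id-of-my_query>"},
                "warehouse_id": "<sql-warehouse-id>",
            },
        }
    ],
}
```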

Configure identity for job runs

To change the Run as setting, you must have the CAN MANAGE or IS OWNER permission on the job.

You can set the Run as setting to yourself or to any service principal in the workspace for which you have the Service Principal User entitlement.

To configure the Run as setting for an existing job in the workspace UI, complete the following steps:

  1. Click Workflows in the sidebar.
  2. In the Name column, click the job name.
  3. In the Job details side panel, click the pencil icon next to the Run as field.
  4. Search for and select a user or service principal.
  5. Click Save.
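
You can also set Run as programmatically. The following sketch uses the Databricks SDK for Python to point an existing job at a service principal; the job ID and application ID are placeholders, and the JobSettings and JobRunAs names mirror the Jobs API run_as object, so they may vary across SDK versions.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()

# Change the job's Run as identity to a service principal.
# The job ID and the application ID below are placeholders.
w.jobs.update(
    job_id=123456789,
    new_settings=jobs.JobSettings(
        run_as=jobs.JobRunAs(
            service_principal_name="6cb3faa5-0000-0000-0000-a5f1e3a9c1b2"
        )
    ),
)
```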

For more information, see the documentation on working with service principals.

Best practices for jobs governance

Databricks recommends the following for all production jobs:

  • Assign job ownership to a service principal

    If the user who owns a job leaves your organization, the job might fail. Use service principals to make jobs robust to employee churn.

    By default, workspace admins can manage job permissions and reassign ownership if necessary.

  • Run production jobs using a service principal

    Jobs run using the privileges of the job owner by default. If you assign ownership to a service principal, job runs use the permissions of the service principal.

    Using service principals for production jobs allows you to restrict write permissions on production data. If you instead run jobs using a user’s permissions, that user needs permission to edit the production data required by the job.

  • Always use Unity Catalog-compatible compute configurations

    Unity Catalog data governance requires that you use a supported compute configuration.

    Serverless compute for jobs and SQL warehouses always use Unity Catalog.

    For jobs with classic compute, Databricks recommends shared access mode for supported workloads. Use single user access mode when required.

    Delta Live Tables pipelines configured with Unity Catalog have some limitations. See Limitations.

  • Restrict permissions on production jobs

    Users that trigger, stop, or restart job runs need the Can Manage Run permission.

    Users that view the job configuration or monitor runs need the Can View permission.

    Only grant Can Manage or Is Owner privileges to users trusted to modify production code.

Control access to a job

Job access control enables job owners and administrators to grant fine-grained permissions on jobs. The following permissions are available:

Note

Each permission in the following table includes the grants of the permissions listed below it.

Permission      Grant
Is Owner        The identity used for Run as by default.
Can Manage      Users can edit the job definition, including permissions. Users can pause and resume a schedule.
Can Manage Run  Users can trigger and cancel job runs.
Can View        Users can view job run results.

For information on job permission levels, see Job ACLs.

Configure job permissions

To configure permissions for an existing job in the workspace UI, complete the following steps:

  1. Click Workflows in the sidebar.
  2. In the Name column, click the job name.
  3. In the Job details panel, click Edit permissions. The Permission Settings dialog appears.
  4. Click the Select User, Group or Service Principal… field and start typing a user, group, or service principal. The field searches all available identities in the workspace.
  5. Click Add.
  6. Click Save.
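
Permissions can also be managed programmatically through the Permissions API. The sketch below uses the Databricks SDK for Python to grant a group Can Manage Run and a user Can View on a job; the job ID, group name, and user name are placeholders, and enum and class names may differ across SDK versions.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import iam

w = WorkspaceClient()

# Grant a group permission to trigger and cancel runs, and a user permission
# to view run results. The job ID, group, and user are placeholders.
w.permissions.update(
    request_object_type="jobs",
    request_object_id="123456789",
    access_control_list=[
        iam.AccessControlRequest(
            group_name="data-engineers",
            permission_level=iam.PermissionLevel.CAN_MANAGE_RUN,
        ),
        iam.AccessControlRequest(
            user_name="analyst@example.com",
            permission_level=iam.PermissionLevel.CAN_VIEW,
        ),
    ],
)
```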

Manage the job owner

Only workspace admins can edit the job owner. Exactly one job owner must be assigned. Job owners can be users or service principals.
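
If you need to reassign ownership as a workspace admin, one option is the same Permissions API shown above, setting the Is Owner level on the new owner. In this sketch the job ID and the service principal's application ID are placeholders.

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import iam

w = WorkspaceClient()

# Reassign job ownership to a service principal (workspace admins only).
# The job ID and application ID below are placeholders.
w.permissions.update(
    request_object_type="jobs",
    request_object_id="123456789",
    access_control_list=[
        iam.AccessControlRequest(
            service_principal_name="6cb3faa5-0000-0000-0000-a5f1e3a9c1b2",
            permission_level=iam.PermissionLevel.IS_OWNER,
        )
    ],
)
```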