This article explains how to use the Databricks Jobs API and Permissions API to grant a single group access to all the jobs in your workspace.
Info
You must be a workspace administrator to perform the steps detailed in this article.
Instructions
Use the following sample code to grant a specific group of users a permission level on all the jobs in your workspace.
Info
To get your workspace URL, review Workspace instance names, URLs, and IDs (AWS | Azure | GCP).
Review the Generate a personal access token (AWS | Azure | GCP) documentation for details on how to create a personal access token for use with the REST APIs.
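Before running the full sample, it can help to see how the token and workspace URL are combined into a request. This is a minimal sketch; the URL and token values are placeholders, not real values.

```python
# Sketch: how the personal access token is passed to the REST APIs.
# The workspace URL and token below are placeholders.
workspace_url = "https://<workspace-domain-name>"
access_token = "<personal-access-token>"

# Every REST call in this article authenticates with a Bearer token header.
headers_auth = {"Authorization": f"Bearer {access_token}"}

# The full endpoint URL is the workspace URL plus the API path.
job_list_url = workspace_url + "/api/2.1/jobs/list"
```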
- Start a cluster in your workspace and attach a notebook.
- Copy and paste the sample code into a notebook cell.
- Update the <workspace-domain-name> and <personal-access-token> values.
- Update the <permission-to-assign> (AWS | Azure | GCP) value.
- Update the <group-name> value with the name of the user group you are granting permissions to.
- Run the notebook cell.
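The request body that the permission level and group name slot into has the shape below. This is an illustrative sketch; the "analysts" group and the CAN_VIEW level are hypothetical values, so substitute your own group and a level from the permission-levels documentation linked above.

```python
import json

# Build the access control list body sent to the Permissions API.
# "analysts" and "CAN_VIEW" are illustrative, not values from this article.
def build_acl_payload(group_name, permission_level):
    return json.dumps({
        "access_control_list": [
            {
                "group_name": group_name,
                "permission_level": permission_level,
            }
        ]
    })

payload = build_acl_payload("analysts", "CAN_VIEW")
```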
%python
import json
import requests

shard_url = "<workspace-domain-name>"  # workspace URL, with no trailing slash
access_token = "<personal-access-token>"
group_name = "<group-name>"

headers_auth = {
    'Authorization': f'Bearer {access_token}'
}

# List the jobs in the workspace.
# Note: this endpoint is paginated; a workspace with a large number of jobs
# may need to follow the page token to retrieve the full list.
job_list_url = shard_url + "/api/2.1/jobs/list"
jobs_list = requests.get(job_list_url, headers=headers_auth).json()

# Apply the permission level to the group for each job.
for job in jobs_list['jobs']:
    job_id = job['job_id']
    job_change_url = shard_url + "/api/2.0/preview/permissions/jobs/" + str(job_id)
    payload_permissions = json.dumps({
        "access_control_list": [
            {
                "group_name": group_name,
                "permission_level": "<permission-to-assign>"
            }
        ]
    })
    response = requests.patch(job_change_url, headers=headers_auth, data=payload_permissions)
    response.raise_for_status()

print("Permissions updated")
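The `/api/2.1/jobs/list` endpoint returns jobs in pages, so a workspace with many jobs needs to follow the page token until no more pages remain. Below is a sketch of that loop, written against a pluggable fetch function so it can be demonstrated without a live workspace. The `has_more` and `next_page_token` field names are assumed from the Jobs API 2.1 list response; verify them against the API reference for your cloud.

```python
def list_all_jobs(fetch_page):
    """Collect every job across pages.

    fetch_page(page_token) must return a dict shaped like the
    Jobs API 2.1 list response, e.g.
    {"jobs": [...], "has_more": bool, "next_page_token": str}.
    """
    jobs, token = [], None
    while True:
        resp = fetch_page(token)
        jobs.extend(resp.get("jobs", []))
        if not resp.get("has_more"):
            return jobs
        token = resp.get("next_page_token")

# Example with canned pages standing in for the REST calls:
pages = {
    None: {"jobs": [{"job_id": 1}], "has_more": True, "next_page_token": "p2"},
    "p2": {"jobs": [{"job_id": 2}], "has_more": False},
}
all_jobs = list_all_jobs(lambda tok: pages[tok])
```

In the real notebook, `fetch_page` would issue the GET request with the `page_token` query parameter and return the decoded JSON.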