Updated February 22nd, 2024 by simran.arora

Bulk update workflow permissions for a group

This article explains how you can use the Databricks Jobs API to grant a single group permission to access all the jobs in your workspace. Info: You must be a workspace administrator to perform the steps detailed in this article. Instructions: Use the following sample code to grant a specific group of users permission on all the jobs in your worksp...

0 min reading time
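
As a rough sketch of the approach the article describes, the following Python example pages through every job with the Jobs API 2.1 and grants a group CAN_MANAGE_RUN on each one through the Permissions API. The workspace URL, token, group name, and permission level are placeholders; adapt them to your environment.

import requests

HOST = "https://<workspace-url>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"  # placeholder token
GROUP = "data-engineers"           # hypothetical group name

headers = {"Authorization": f"Bearer {TOKEN}"}

# Page through every job in the workspace (Jobs API 2.1).
jobs, page_token = [], None
while True:
    params = {"limit": 25}
    if page_token:
        params["page_token"] = page_token
    resp = requests.get(f"{HOST}/api/2.1/jobs/list", headers=headers, params=params)
    resp.raise_for_status()
    data = resp.json()
    jobs.extend(data.get("jobs", []))
    page_token = data.get("next_page_token")
    if not page_token:
        break

# Grant the group CAN_MANAGE_RUN on each job; PATCH merges the new entry
# with each job's existing access control list.
for job in jobs:
    acl = {"access_control_list": [
        {"group_name": GROUP, "permission_level": "CAN_MANAGE_RUN"}]}
    requests.patch(f"{HOST}/api/2.0/permissions/jobs/{job['job_id']}",
                   headers=headers, json=acl).raise_for_status()
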
Updated June 7th, 2023 by simran.arora

INVALID_PARAMETER_VALUE.LOCATION_OVERLAP: overlaps with managed storage error

Problem: You are using dbutils on a shared cluster to access an external location (AWS | Azure | GCP) that is mounted over managed table storage. When you try to list the path to the location, it fails with an INVALID_PARAMETER_VALUE.LOCATION_OVERLAP error message. The error says the given path overlaps with managed storage. dbutils.fs.ls("<storage-blob>...

0 min reading time
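
For context, here is a minimal reproduction of the error on a Unity Catalog shared cluster, with one common workaround; the storage account, container, and table names below are hypothetical.

# Listing a cloud path that overlaps Unity Catalog managed storage fails on
# a shared cluster (hypothetical container and storage account names):
dbutils.fs.ls("abfss://metastore@mystorageaccount.dfs.core.windows.net/")
# INVALID_PARAMETER_VALUE.LOCATION_OVERLAP: ... overlaps with managed storage

# Managed storage is reserved for Unity Catalog; read the data through a
# catalog table instead of listing the location directly:
display(spark.sql("SELECT * FROM main.default.my_table LIMIT 10"))
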
Updated February 29th, 2024 by simran.arora

Update the Databricks SQL warehouse owner

The user who creates a SQL warehouse becomes its owner by default. There may be times when you want to transfer ownership of the SQL warehouse to another user. You can do this by transferring ownership of Databricks SQL objects (AWS | Azure | GCP) via the UI or the Permissions REST API. Instructions: Info: The service principal cannot be changed to ...

2 min reading time
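
One way to script the transfer, sketched below, is to PATCH the warehouse's access control list with an IS_OWNER entry via the Permissions API; this assumes the warehouses object type accepts IS_OWNER and that the caller is a workspace admin. The warehouse ID and new owner are placeholders.

import requests

HOST = "https://<workspace-url>"     # placeholder workspace URL
TOKEN = "<personal-access-token>"    # placeholder token
WAREHOUSE_ID = "<warehouse-id>"      # from the warehouse's URL or the API
NEW_OWNER = "new.owner@example.com"  # hypothetical new owner

# Assign IS_OWNER to the new user; ownership moves to that user.
resp = requests.patch(
    f"{HOST}/api/2.0/permissions/warehouses/{WAREHOUSE_ID}",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"access_control_list": [
        {"user_name": NEW_OWNER, "permission_level": "IS_OWNER"}]},
)
resp.raise_for_status()
print(resp.json())
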
Updated June 7th, 2023 by simran.arora

Stop all scheduled jobs

Under normal conditions, jobs run on their schedules and terminate automatically once their tasks complete. In some cases, you may want to stop all scheduled jobs. For more information on scheduled jobs, please review the Create, run, and manage Databricks Jobs (AWS | Azure | GCP) documentation. This article provides sample code that you can use to stop all of ...

0 min reading time
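
A sketch of one way to do this in Python: list every job with the Jobs API 2.1, then set pause_status to PAUSED on each job that has a schedule. The host and token are placeholders, and the snippet assumes jobs/list returns each job's settings.

import requests

HOST = "https://<workspace-url>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"  # placeholder token
headers = {"Authorization": f"Bearer {TOKEN}"}

# Collect every job in the workspace.
jobs, page_token = [], None
while True:
    params = {"limit": 25}
    if page_token:
        params["page_token"] = page_token
    data = requests.get(f"{HOST}/api/2.1/jobs/list",
                        headers=headers, params=params).json()
    jobs.extend(data.get("jobs", []))
    page_token = data.get("next_page_token")
    if not page_token:
        break

# Pause the schedule on every job that has one and is not already paused.
for job in jobs:
    schedule = job.get("settings", {}).get("schedule")
    if schedule and schedule.get("pause_status") != "PAUSED":
        schedule["pause_status"] = "PAUSED"
        requests.post(f"{HOST}/api/2.1/jobs/update", headers=headers,
                      json={"job_id": job["job_id"],
                            "new_settings": {"schedule": schedule}}).raise_for_status()
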
Updated December 21st, 2022 by simran.arora

Pin cluster configurations using the API

Normally, cluster configurations are automatically deleted 30 days after the cluster was last terminated. If you want to keep specific cluster configurations, you can pin them. Up to 100 clusters can be pinned. Pinned clusters are not automatically deleted; however, they can be deleted manually. Info: You must be a Databricks administrator to pin a cl...

2 min reading time
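
Pinning is a single call to the Clusters API's pin endpoint. A minimal Python example, with the workspace URL, token, and cluster ID as placeholders:

import requests

HOST = "https://<workspace-url>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"  # placeholder token

# Pin one cluster by ID; the response body is empty on success.
resp = requests.post(f"{HOST}/api/2.0/clusters/pin",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json={"cluster_id": "<cluster-id>"})
resp.raise_for_status()
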
Updated December 21st, 2022 by simran.arora

Unpin cluster configurations using the API

Normally, cluster configurations are automatically deleted 30 days after the cluster was last terminated. If you want to keep specific cluster configurations, you can pin them. Up to 100 clusters can be pinned. If you no longer need a pinned cluster, you can unpin it. If you have pinned 100 clusters, you must unpin a cluster before you can pin anot...

2 min reading time
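
Unpinning mirrors the pin call, using the unpin endpoint instead. Again, the workspace URL, token, and cluster ID are placeholders:

import requests

HOST = "https://<workspace-url>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"  # placeholder token

# Unpin one cluster by ID, freeing a slot toward the 100-cluster pin limit.
resp = requests.post(f"{HOST}/api/2.0/clusters/unpin",
                     headers={"Authorization": f"Bearer {TOKEN}"},
                     json={"cluster_id": "<cluster-id>"})
resp.raise_for_status()
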
Updated July 17th, 2023 by simran.arora

Cluster fails with Fatal uncaught exception error. Failed to bind.

Problem: Clusters running Databricks Runtime 11.3 LTS or above terminate with a Failed to bind error message. Fatal uncaught exception. Terminating driver. java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:6062 Cause: This can happen if multiple processes attempt to use the same port. Databricks Runtime 11.3 LTS and above use the IPython kernel (...

0 min reading time
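
As a quick diagnostic for this class of failure, the sketch below checks from a notebook or init script whether something on the driver is already bound to port 6062; the port number is taken from the error message above.

import socket

# Try to connect to the port the IPython kernel needs; a successful connect
# means some other process is already listening on it.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    in_use = s.connect_ex(("127.0.0.1", 6062)) == 0

print("Port 6062 is", "in use" if in_use else "free")
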
Updated June 7th, 2023 by simran.arora

Generate a list of all workspace admins

Workspace administrators have full privileges to manage a workspace. This includes adding and removing users, as well as managing all of the data resources (jobs, libraries, notebooks, repos, etc.) in the workspace. Info: You must be a workspace administrator to perform the steps detailed in this article. If you are a workspace admin, you can view o...

1 min reading time
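
A minimal sketch of the API route: query the workspace SCIM API for the built-in admins group and print its members. The workspace URL and token are placeholders.

import requests

HOST = "https://<workspace-url>"   # placeholder workspace URL
TOKEN = "<personal-access-token>"  # placeholder token
headers = {"Authorization": f"Bearer {TOKEN}"}

# Look up the built-in "admins" group via the SCIM Groups endpoint.
resp = requests.get(f"{HOST}/api/2.0/preview/scim/v2/Groups",
                    headers=headers,
                    params={"filter": 'displayName eq "admins"'})
resp.raise_for_status()
resources = resp.json().get("Resources", [])

# Print the display name (or ID) of every member of the admins group.
admins = resources[0] if resources else {}
for member in admins.get("members", []):
    print(member.get("display") or member.get("value"))
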