How to restrict cluster creation to single-node only

Add a JSON configuration to your compute policy.

Written by guruprasad.bn

Last published at: March 25th, 2025

Problem

You want to restrict users to creating only single-node clusters, for example for testing or in situations where multi-node clusters are not needed.

While Databricks allows users to configure cluster settings through the UI, there is no UI option to enforce single-node clusters for all users.


Cause

The UI lets users manually configure a single-node cluster, but it does not prevent them from changing the settings to create a multi-node cluster.


Solution

When you create a compute policy and define its cluster attributes, add the following JSON configuration to enforce single-node cluster creation by default and restrict the creation of unnecessary multi-node clusters.


{
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "fixed",
    "value": "singleNode"
  },
  "spark_conf.spark.master": {
    "type": "fixed",
    "value": "local[*]"
  },
  "num_workers": {
    "type": "fixed",
    "value": 0
  },
  "custom_tags.ResourceClass": {
    "type": "fixed",
    "value": "SingleNode"
  }
}
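
If you prefer to create the policy programmatically rather than through the UI, the following Python sketch shows one way to submit the same definition with the Cluster Policies REST API (POST /api/2.0/policies/clusters/create). The policy name, environment variable names, and use of the requests library are illustrative assumptions, not part of the original article.

import json
import os

import requests

# Assumed environment variables: DATABRICKS_HOST is the workspace URL
# (for example https://<workspace>.cloud.databricks.com) and
# DATABRICKS_TOKEN is a personal access token with permission to create
# compute policies (workspace admins by default).
host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

# The single-node policy definition from this article. The API expects
# the definition as a JSON-encoded string, hence json.dumps below.
definition = {
    "spark_conf.spark.databricks.cluster.profile": {"type": "fixed", "value": "singleNode"},
    "spark_conf.spark.master": {"type": "fixed", "value": "local[*]"},
    "num_workers": {"type": "fixed", "value": 0},
    "custom_tags.ResourceClass": {"type": "fixed", "value": "SingleNode"},
}

response = requests.post(
    f"{host}/api/2.0/policies/clusters/create",
    headers={"Authorization": f"Bearer {token}"},
    json={"name": "single-node-only", "definition": json.dumps(definition)},
)
response.raise_for_status()
print(response.json())  # Returns the new policy_id.

Once the policy exists, grant users the CAN_USE permission on it; clusters they create under the policy must then satisfy these fixed values.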


For more information, refer to the Create and manage compute policies (AWS | Azure | GCP) documentation.


To add more attributes or customize the policy further, refer to the Compute policy reference (AWS | Azure | GCP) documentation.
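
As one hedged example of further customization, policy attributes also support types such as allowlist (restrict an attribute to a set of values) and range (bound a numeric attribute), as documented in the Compute policy reference. The snippet below is illustrative only; the node type IDs are AWS placeholders and vary by cloud. It limits which instance types users can select and forces clusters to auto-terminate, with defaultValue pre-filling the cluster creation UI.

{
  "node_type_id": {
    "type": "allowlist",
    "values": ["m5.large", "m5.xlarge"],
    "defaultValue": "m5.large"
  },
  "autotermination_minutes": {
    "type": "range",
    "minValue": 10,
    "maxValue": 120,
    "defaultValue": 60
  }
}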