Unable to select a single node cluster when using the default job compute policy

Create a custom job policy.

Written by kevin.salas

Last published at: July 1st, 2025

Problem

While setting up a new job cluster, you select the default Job Compute policy and set Access mode to Single user. When you set the number of workers to 1, a tooltip in the UI says you can select “single-node” compute mode, but you cannot find a way to select a single node cluster in the UI.


The following screenshot shows the UI with the tooltip after selecting 1 worker.


Cause

The default Job Compute policy is designed to create clusters with multiple workers, so it does not offer a single node option.


Solution

Create a custom job policy that instructs Databricks to create a single-node cluster. At a minimum, use the following configuration. For details on creating a custom job policy, refer to the “Use policy families to create custom policies” section of the Default policies and policy families (AWS, Azure, GCP) documentation.

{
  "cluster_type": {
    "type": "fixed",
    "value": "job"
  },
  "num_workers": {
    "type": "fixed",
    "value": 0
  },
  "spark_conf.spark.databricks.cluster.profile": {
    "type": "fixed",
    "value": "singleNode"
  },
  "spark_conf.spark.master": {
    "type": "fixed",
    "value": "local[*,4]"
  },
  "custom_tags.ResourceClass": {
    "type": "fixed",
    "value": "SingleNode"
  }
}
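
If you prefer to create the policy programmatically rather than through the UI, the following sketch shows one way to do it with the Databricks SDK for Python. This is a minimal example under assumptions, not an official procedure: the policy name is illustrative, and it assumes the SDK is installed (pip install databricks-sdk) and can authenticate from your environment or a configuration profile.

import json

from databricks.sdk import WorkspaceClient

# The same policy definition shown above, serialized as the JSON string
# the Cluster Policies API expects.
policy_definition = {
    "cluster_type": {"type": "fixed", "value": "job"},
    "num_workers": {"type": "fixed", "value": 0},
    "spark_conf.spark.databricks.cluster.profile": {
        "type": "fixed",
        "value": "singleNode",
    },
    "spark_conf.spark.master": {"type": "fixed", "value": "local[*,4]"},
    "custom_tags.ResourceClass": {"type": "fixed", "value": "SingleNode"},
}

w = WorkspaceClient()  # reads credentials from the environment or a profile

created = w.cluster_policies.create(
    name="single-node-job-policy",  # illustrative name, not a required value
    definition=json.dumps(policy_definition),
)
print(f"Created policy {created.policy_id}")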


When you return to the job cluster setup view in the UI and select your custom policy, the Single node option is selected as well.
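
If you define jobs through the API instead of the UI, you can reference the custom policy from the job cluster specification. The sketch below again uses the Databricks SDK for Python and assumes the policy created earlier; the job name, notebook path, and policy name are placeholders, and apply_policy_default_values is intended to have the cluster pick up the policy's fixed single-node settings.

from databricks.sdk import WorkspaceClient
from databricks.sdk.service import compute, jobs

w = WorkspaceClient()

# Look up the policy created earlier by its (illustrative) name.
policy = next(
    p for p in w.cluster_policies.list() if p.name == "single-node-job-policy"
)

# Job cluster spec that references the policy. The fixed settings from the
# policy (0 workers, singleNode profile, local Spark master) are applied,
# so only the remaining required fields are set here.
single_node_cluster = compute.ClusterSpec(
    policy_id=policy.policy_id,
    apply_policy_default_values=True,
    spark_version=w.clusters.select_spark_version(long_term_support=True),
    node_type_id=w.clusters.select_node_type(local_disk=True),
)

job = w.jobs.create(
    name="single-node-example-job",  # placeholder job name
    tasks=[
        jobs.Task(
            task_key="main",
            new_cluster=single_node_cluster,
            notebook_task=jobs.NotebookTask(
                notebook_path="/Workspace/path/to/notebook"  # placeholder path
            ),
        )
    ],
)
print(f"Created job {job.job_id}")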