Error “number of currently active jobs exceeds hard limit of spark.databricks.maxActiveJobs” when trying to run an API request

Optimize job design first to the extent possible, then raise the spark.databricks.maxActiveJobs setting to a higher value, depending on your needs.

Written by saritha.shivakumar

Last published at: April 28th, 2025

Problem

When trying to run a notebook or job API request, you encounter the following error. 

ERROR Uncaught throwable from user code: org.apache.spark.SparkException: The number of currently active jobs (XXXX) exceeds the hard limit of spark.databricks.maxActiveJobs=2000.


Cause

The number of currently active jobs in your Apache Spark application (shown as XXXX in the error message) has exceeded the limit set by the spark.databricks.maxActiveJobs parameter, which defaults to 2000.
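
This typically happens when a large number of Spark actions are submitted concurrently, for example from driver-side threads or many parallel tasks. The following minimal sketch is illustrative only; the table name, column name, and thread count are assumptions, not details from this article. In a Databricks notebook, spark refers to the pre-defined SparkSession.

from concurrent.futures import ThreadPoolExecutor
from pyspark.sql import functions as F

# Hypothetical example. Each count() is a separate Spark action, so each one
# submits at least one Spark job. With a thread pool this large, more than
# 2000 jobs can be active at the same time, triggering the
# spark.databricks.maxActiveJobs error.
def count_for_key(key):
    return spark.table("my_table").where(F.col("partition_id") == key).count()

with ThreadPoolExecutor(max_workers=2500) as pool:
    counts = list(pool.map(count_for_key, range(10000)))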


Solution

In your cluster settings, raise the configured limit from the default of 2000 to a higher value, depending on your needs.


Important

The spark.databricks.maxActiveJobs setting limits the number of concurrently active jobs to prevent resource contention and ensure system stability. Use caution when increasing this limit, and consider your available physical resources (CPU, memory, and so on) to avoid potential performance degradation.


Raising this limit can indirectly lead to increased costs if it results in higher resource usage or degraded performance. Before implementing the solution, consider whether the job structure itself is generating too many concurrent jobs due to inefficient logic. If possible, optimize job design first to reduce the need to raise this limit.
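
For example, a workload that launches one Spark action per key can often be rewritten as a single aggregation so that only a handful of jobs run at once. This is a minimal sketch assuming a hypothetical table my_table with a partition_id column; adapt it to your own workload.

# Instead of one Spark action (and therefore one or more jobs) per key,
# compute all counts in a single aggregation job.
counts_df = (
    spark.table("my_table")        # assumed table name
         .groupBy("partition_id")  # assumed column name
         .count()
)
counts = {row["partition_id"]: row["count"] for row in counts_df.collect()}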


The spark.databricks.maxActiveJobs configuration must be set at the cluster level and cannot be set programmatically within a job.

1. In the Databricks UI, click Compute in the navigation menu on the left.

2. Select the cluster you are using.

3. In the Cluster Configuration tab, click the Edit button in the top right. 

4. Scroll down to the Advanced Options section and click to expand.

5. Enter the configuration spark.databricks.maxActiveJobs with the desired value in the Spark Config field. 

spark.databricks.maxActiveJobs <max-active-jobs> 

6. Save the changes and restart the cluster for the new configuration to take effect.
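
Once the cluster has restarted, you can confirm that the new value is active from a notebook attached to the cluster. This is a minimal check; the value shown in the comment is only an example.

# Read the cluster-level Spark configuration from an attached notebook.
spark.sparkContext.getConf().get("spark.databricks.maxActiveJobs")
# Returns the configured value as a string, for example '3000'.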