Too many execution contexts are open right now

Reduce the number of notebooks used to limit the number of execution contexts required for your job.

Written by akash.bhat

Last published at: May 31st, 2023

Problem

You see the following error message when you try to attach a notebook to a cluster, or in a job failure.

Run result unavailable: job failed with error message Too many execution contexts are open right now.(Limit set currently to 150)

Cause

Databricks creates an execution context when you attach a notebook to a cluster. The execution context contains the state for a REPL environment for each supported programming language: Python, R, Scala, and SQL.

A cluster has a maximum of 150 execution contexts: 145 are user REPLs, and the remaining five are internal system REPLs reserved for backend operations. Once this threshold is reached, you can no longer attach a notebook to the cluster.


Info

It is not possible to view the current number of execution contexts in use.

Solution

Make sure you have not disabled auto-eviction for the cluster in your Spark config (AWS | Azure | GCP).

If the following line is present in your Spark config, auto-eviction is disabled. Remove this line to re-enable auto-eviction:

spark.databricks.chauffeur.enableIdleContextTracking false
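
If you want to confirm the current setting, a quick check like the one below can help. This is a minimal sketch: it assumes that Spark config entries set at the cluster level are visible through the notebook's Spark session, and that an unset key means the default (tracking enabled) applies.

%scala

// Returns "false" if idle context tracking (auto-eviction) was disabled in the
// cluster's Spark config; falls back to "true" when the key is not set.
// Assumption: cluster-level Spark config entries are readable via spark.conf.
val tracking = spark.conf.get("spark.databricks.chauffeur.enableIdleContextTracking", "true")
println(s"Idle context tracking enabled: $tracking")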

Best practices

  • Use a job cluster instead of an interactive cluster. Running each job on its own job cluster is the best way to avoid running out of execution contexts, and it also provides better isolation and reliability.
  • Reduce the number of separate notebooks in use, which lowers the number of execution contexts required (see the example after this list).
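
For example, a helper notebook can be included inline with the %run magic command instead of being attached to the cluster as a separate notebook; the included notebook runs in the caller's execution context, so it does not consume an additional one. The notebook path below is only illustrative.

%run /Shared/helpers/common_functions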

Temporary workaround

As a short-term solution, you can use a cluster-scoped init script to increase the execution context limit from 150 to 175.


Warning

If you increase the execution context limit, memory pressure on the driver is likely to increase. Do not use this as a long-term solution.

Create the init script

Run this sample code in a notebook attached to the cluster to create the init script on DBFS.

%scala

// Writes a cluster-scoped init script to DBFS. When the script runs at cluster
// startup it creates a driver config file that raises the execution context
// limit from 150 to 175.
val initScriptContent = """#!/bin/bash
|cat > /databricks/common/conf/set_exec_context_limit.conf << EOL
|{
|  databricks.daemon.driver.maxNumExecutionContexts = 175
|}
|EOL
|""".stripMargin

dbutils.fs.put("dbfs:/<path-to-init-script>/set_exec_context_limit.sh", initScriptContent, true)

Remember the path to the init script. You will need it when configuring your cluster.
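
Before you configure the cluster, you can optionally verify that the script was written correctly. The snippet below is a simple check using the same placeholder path as above.

%scala

// Prints the contents of the generated init script so you can confirm the
// config file it writes and the limit value before attaching it to a cluster.
println(dbutils.fs.head("dbfs:/<path-to-init-script>/set_exec_context_limit.sh"))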

Configure the init script

Follow the documentation to configure a cluster-scoped init script (AWS | Azure | GCP).

Set the destination to DBFS and specify the path to the init script. Use the same path that you used in the sample code.

After configuring the init script, restart the cluster.

