Job execution returning [UDF_MAX_COUNT_EXCEEDED] error

Increase the UDF limit and adjust memory tracking settings.

Written by saritha.shivakumar

Last published at: July 10th, 2025

Problem

When you execute a job that uses more than five user-defined functions (UDFs) on Databricks Runtime versions below 14.3 LTS, the job fails with the following error, where X is the number of UDFs found.

SparkRuntimeException: [UDF_MAX_COUNT_EXCEEDED] Exceeded query-wide UDF limit of 5 UDFs (limited during public preview). Found X. The UDFs were: <list-of-each-of-X-udfs>. 

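The limit applies to the number of UDFs referenced in a single query plan, so even trivial functions count toward it. The following is a minimal reproduction sketch, assuming a notebook attached to an affected cluster where spark is the SparkSession Databricks provides. The DataFrame, column names, and UDFs are illustrative.

from pyspark.sql.functions import udf, col
from pyspark.sql.types import LongType

# Define six trivial Python UDFs, one more than the limit of five.
adders = [udf(lambda v, i=i: v + i, LongType()) for i in range(6)]

# Reference all six UDFs in a single query plan.
df = spark.range(10)
for i, add_i in enumerate(adders):
    df = df.withColumn(f"plus_{i}", add_i(col("id")))

# On affected runtimes, this action fails with [UDF_MAX_COUNT_EXCEEDED].
df.collect()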

Cause

Databricks Runtime versions below 14.3 LTS enforce a limit of five Unity Catalog Python and PySpark UDFs per query plan, to mitigate the risk of out-of-memory (OOM) errors, especially on shared clusters.

Solution

If your workload requires more than five UDFs, you can raise the limit through the cluster configuration. Be aware that a higher limit increases the risk of OOM errors.

Databricks also recommends adjusting memory tracking settings to optimize resource usage. 

1. Navigate to your target cluster and click Edit.

2. Scroll to Advanced options and click to expand. 

3. Under the Spark tab, in the Spark config field, add the following Apache Spark configurations. 

spark.databricks.safespark.externalUDF.plan.limit 20
spark.databricks.safespark.sandbox.trackMemory.enabled false
spark.databricks.safespark.sandbox.size.default.mib 500

4. Restart the cluster for the changes to take effect.
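
After the restart, you can confirm the settings took effect from a notebook attached to the cluster. The following is a minimal check, assuming spark is the SparkSession Databricks provides and that these cluster configurations are readable through spark.conf.

# Read back each setting; values should match the cluster configuration above.
print(spark.conf.get("spark.databricks.safespark.externalUDF.plan.limit"))       # expect 20
print(spark.conf.get("spark.databricks.safespark.sandbox.trackMemory.enabled"))  # expect false
print(spark.conf.get("spark.databricks.safespark.sandbox.size.default.mib"))     # expect 500

Based on the setting names, the first configuration raises the per-query UDF cap to 20, the second disables sandbox memory tracking, and the third sets a 500 MiB default sandbox size. Adjust the values to suit your workload.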