Apache Spark query plan not displaying in Spark UI

Increase the spark.eventLog.unknownRecord.maxSize default threshold.

Written by avi.yehuda

Last published at: January 16th, 2026

Problem

When you check the Spark UI, you cannot see the query plans for some of your Apache Spark jobs. 


When you check the driver log, you see the following error. Note the “Exceeded 2097152 bytes” and “physicalPlanDescription” references. 

WARN DBCEventLoggingListener: Error in writing the event to log
com.fasterxml.jackson.databind.JsonMappingException: Exceeded 2097152 bytes (current = 2100764) (through reference chain: org.apache.spark.sql.execution.ui.SparkListenerSQLAdaptiveExecutionUpdate["physicalPlanDescription"])
at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:390)
at com.fasterxml.jackson.databind.JsonMappingException.wrapWithPath(JsonMappingException.java:349)
at com.fasterxml.jackson.databind.ser.std.StdSerializer.wrapAndThrow(StdSerializer.java:316)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:778)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeWithType(BeanSerializerBase.java:655)
at com.fasterxml.jackson.databind.ser.impl.TypeWrappedSerializer.serialize(TypeWrappedSerializer.java:32)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480)
…
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:205)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:204)
at com.databricks.threading.NamedExecutor.withAttributionContext(NamedExecutor.scala:287)
at com.databricks.threading.NamedExecutor$$anon$2.run(NamedExecutor.scala:358)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: com.databricks.spark.util.LimitedOutputStream$LimitExceededException: Exceeded 2097152 bytes (current = 2100764)
at com.databricks.spark.util.LimitedOutputStream.write(LimitedOutputStream.scala:45)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2177)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegment2(UTF8JsonGenerator.java:1491)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegment(UTF8JsonGenerator.java:1438)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator._writeStringSegments(UTF8JsonGenerator.java:1321)
at com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeString(UTF8JsonGenerator.java:517)
at com.fasterxml.jackson.databind.ser.std.StringSerializer.serialize(StringSerializer.java:41)
at com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:728)
at com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:770)


Cause

Spark serializes job query plans into the event logs as data structures for debugging. The Spark UI uses those event log data structures to display the query plans. 


When one of these data structures exceeds the maximum allowed size of 2 MB (2097152 bytes), the event is not written to the log, the error shown above is thrown, and the associated query plan does not appear in the Spark UI.


Solution

Increase the threshold by changing the value of the `spark.eventLog.unknownRecord.maxSize` config. For example, to raise the limit to 16 MB, set the value to `16m`.


For details on how to apply Spark configs at the compute level, refer to the “Spark configuration” section of the Compute configuration reference (AWS | Azure | GCP) documentation.
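
For example, the compute-level Spark config accepts one key-value pair per line. Assuming you want a 16 MB limit, adding the following line applies the setting when the compute starts:

spark.eventLog.unknownRecord.maxSize 16m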


Alternatively, you can change the parameter in the job’s notebook.  

spark.conf.set("spark.eventLog.unknownRecord.maxSize", "16m")
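
To confirm the new value took effect in the same session, you can read the config back. The following is a minimal Python check, assuming `spark` is the notebook's built-in SparkSession:

print(spark.conf.get("spark.eventLog.unknownRecord.maxSize"))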


For details, refer to the Set Spark configuration properties on Databricks (AWS | Azure | GCP) documentation.