scala.collection.immutable.HashMap$HashMap1 class leading to OOM error in driver

Change the web UI events location to RocksDB.

Written by fernando.soster

Last published at: March 19th, 2025

Problem

When you use the scala.collection.immutable.HashMap$HashMap1 class, you notice performance issues and compute (formerly cluster) instability in your Databricks environment. You receive error messages such as "Driver is up but is not responsive, likely due to GC" or "Driver is up but is not responsive, likely due to out of memory".

 

You can confirm the OOM error is due to heap allocation for the HashMap by taking a heap dump on the driver and checking whether the largest allocations point to the scala.collection.immutable.HashMap$HashMap1 class.
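A heap dump can be taken with standard JDK tools. Below is a minimal sketch, assuming shell access to the driver node and that `jps` and `jmap` are on the PATH; the process-name pattern and output path are illustrative and may need adjusting for your environment:

```shell
# Identify the driver JVM process ID (the main-class pattern is an assumption;
# adjust it to match how the driver process appears in your environment)
DRIVER_PID=$(jps -l | grep -iE 'DriverDaemon|SparkSubmit' | awk '{print $1}' | head -1)

# Quick check: class histogram sorted by heap usage; look for
# scala.collection.immutable.HashMap$HashMap1 near the top of the list
jmap -histo:live "$DRIVER_PID" | head -n 20

# Full heap dump for offline analysis (for example, with Eclipse MAT)
jmap -dump:live,format=b,file=/tmp/driver_heap.hprof "$DRIVER_PID"
```

If HashMap$HashMap1 instances dominate the histogram or the dump, the driver heap is being consumed by accumulated Spark UI events as described below.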

 

Cause

By design, the driver accumulates Apache Spark UI events in memory. These events are stored largely in instances of the scala.collection.immutable.HashMap$HashMap1 class, which accounts for most of the heap allocation. The accumulation leads to increased garbage collection (GC) activity.

 

When the memory consumed by these events exceeds what is available, the driver struggles to manage the heap. This struggle leads to GC pauses and eventually causes the driver to become unresponsive or run out of memory.
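How many events the driver retains in memory is bounded by Spark's UI retention settings (defaults: 1000 jobs, 1000 stages, 100000 tasks). Lowering them reduces heap pressure from UI events, although it only limits the accumulation rather than moving the store off-heap. The values below are illustrative:

```
spark.ui.retainedJobs 500
spark.ui.retainedStages 500
spark.ui.retainedTasks 50000
```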

 

Solution

Change the events location from driver memory to RocksDB.

  1. Navigate to the Databricks workspace and select the affected compute (formerly cluster).
  2. Click the Edit button to modify the compute configuration.
  3. Expand the Advanced options section and select the Spark tab.
  4. Add the property spark.ui.store.path /databricks/driver/sparkuirocksdb in the Spark config field.
  5. Click Confirm to save the changes.
  6. Restart the compute for the configuration change to take effect.
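After step 4, the Spark config field would contain a line like the following (the path is the one given above; any writable local path on the driver works):

```
spark.ui.store.path /databricks/driver/sparkuirocksdb
```

Once the compute restarts, the driver persists Spark UI event state in a RocksDB store at that path instead of holding it entirely in the JVM heap, which removes the HashMap$HashMap1 accumulation from the heap.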