ProtoSerializer stack overflow error in DBConnect


Problem

You are using DBConnect to run a PySpark transformation on a DataFrame with more than 100 columns when you get a stack overflow error.

py4j.protocol.Py4JJavaError: An error occurred while calling o945.count.
: java.lang.StackOverflowError
    at java.lang.Class.getEnclosingMethodInfo(
    at java.lang.Class.getEnclosingClass(
    at java.lang.Class.getSimpleBinaryName(
    at java.lang.Class.getSimpleName(
    at org.apache.spark.sql.types.DataType.typeName(DataType.scala:67)
    at org.apache.spark.sql.types.DataType.simpleString(DataType.scala:82)
    at org.apache.spark.sql.types.DataType.sql(DataType.scala:90)
    at org.apache.spark.sql.util.ProtoSerializer.serializeDataType(ProtoSerializer.scala:3207)
    at org.apache.spark.sql.util.ProtoSerializer.serializeAttrRef(ProtoSerializer.scala:3610)
    at org.apache.spark.sql.util.ProtoSerializer.serializeAttr(ProtoSerializer.scala:3600)
    at org.apache.spark.sql.util.ProtoSerializer.serializeNamedExpr(ProtoSerializer.scala:3537)
    at org.apache.spark.sql.util.ProtoSerializer.serializeExpr(ProtoSerializer.scala:2323)
    at org.apache.spark.sql.util.ProtoSerializer$$anonfun$$nestedInanonfun$serializeCanonicalizable$1$1.applyOrElse(ProtoSerializer.scala:3001)
    at org.apache.spark.sql.util.ProtoSerializer$$anonfun$$nestedInanonfun$serializeCanonicalizable$1$1.applyOrElse(ProtoSerializer.scala:2998)

Performing the same operation in a notebook works correctly and does not produce an error.

Example code

You can reproduce the error with this sample code, which creates a DataFrame with 200 columns and then renames every column. The code runs correctly in a notebook, but results in an error when run in DBConnect.

# Create a DataFrame with 200 columns, then rename every column.
df = spark.createDataFrame([{str(i): i for i in range(200)}])
for col in df.columns:
    df = df.withColumnRenamed(col, col + "_a")


Cause

When you run code in DBConnect, some operations are handled on the remote cluster driver, while others are handled locally on the client PC. Each call to withColumnRenamed wraps the query plan in another layer, and the client-side ProtoSerializer walks this deeply nested plan recursively when it serializes the query, as the stack trace shows.

If not enough memory and stack space is allocated to the Spark driver on the local PC, this recursion exhausts the JVM thread stack and you get a java.lang.StackOverflowError.
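The failure mode can be sketched in plain Python, outside Spark. The `Node` class and `serialize` function below are hypothetical stand-ins for a logical plan node and for the serializer's recursive walk, not DBConnect APIs; each rename adds one level of nesting, and the serializer spends one stack frame per level.

```python
import sys

# Hypothetical stand-in for one logical plan node: each rename wraps
# the previous plan in one more layer, as withColumnRenamed does.
class Node:
    def __init__(self, child=None):
        self.child = child

def serialize(node):
    # Recursive walk, one stack frame per nested node -- the same shape
    # of recursion the stack trace shows ProtoSerializer performing.
    if node.child is None:
        return 1
    return 1 + serialize(node.child)

# Build a "plan" nested far deeper than the stack allows.
plan = None
for _ in range(10_000):
    plan = Node(plan)

sys.setrecursionlimit(1_000)  # a small stack, like a small JVM -Xss
try:
    serialize(plan)
except RecursionError:
    print("overflow")  # Python's analogue of java.lang.StackOverflowError
```

Raising -Xss on the client JVM, as the resolution steps do, is the equivalent of raising the recursion limit here: it gives the serializer enough stack to finish walking the plan.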


Solution

Increase the memory and thread stack size allocated to the Apache Spark driver on the local PC.

  1. Run databricks-connect get-spark-home on your local PC to get the ${spark_home} value.

  2. Navigate to the ${spark_home}/conf/ folder.

  3. Open the spark-defaults.conf file.

  4. Add the following settings to the spark-defaults.conf file. spark.driver.memory raises the driver heap, and the -Xss32M option raises the JVM thread stack size, which is what the stack overflow exhausts:

    spark.driver.memory 4g
    spark.driver.extraJavaOptions -Xss32M

  5. Save the changes.

  6. Restart DBConnect.
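Steps 2 through 5 can also be scripted. The sketch below is a hypothetical helper, not part of DBConnect: add_driver_settings is a made-up name, and the path it takes is whatever databricks-connect get-spark-home printed in step 1.

```python
import os

def add_driver_settings(spark_home):
    """Append the driver memory and stack-size settings to
    spark-defaults.conf under the given Spark home.

    Hypothetical helper -- not a DBConnect API.
    """
    conf_dir = os.path.join(spark_home, "conf")
    os.makedirs(conf_dir, exist_ok=True)  # create conf/ if it is missing
    conf_path = os.path.join(conf_dir, "spark-defaults.conf")
    with open(conf_path, "a") as f:  # append; keep any existing settings
        f.write("spark.driver.memory 4g\n")
        f.write("spark.driver.extraJavaOptions -Xss32M\n")
    return conf_path

# Pass the value printed by `databricks-connect get-spark-home`, e.g.:
# add_driver_settings("/path/printed/by/get-spark-home")
```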


DBConnect only works with supported Databricks Runtime versions. Ensure that you are using a supported runtime on your cluster before using DBConnect.