Databricks Connect job fails after a Databricks Runtime update

Use the most recent version of Databricks Connect that matches your Databricks Runtime version to avoid an error.

Written by Rajeev kannan Thangaiah

Last published at: July 27th, 2023

Problem

Your legacy Databricks Connect jobs start failing with a java.lang.ClassCastException error message. The error is not associated with any specific command; it affects multiple Databricks Connect commands and jobs.

Caused by: java.lang.ClassCastException: cannot assign instance of org.apache.spark.sql.catalyst.trees.TreePattern$ to field org.apache.spark.sql.catalyst.trees.TreePattern$.WITH_WINDOW_DEFINITION of type scala.Enumeration$Value in instance of org.apache.spark.sql.catalyst.trees.TreePattern$
    at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2301)

Cause

Your cluster is running the latest maintenance release for your chosen Databricks Runtime, but you did not update the version of Databricks Connect you are using to connect to the cluster.

Databricks Connect requires the client version to match the Databricks Runtime version on your compute cluster.

If the Databricks Connect client version does not match the Databricks Runtime version on your cluster, commands can fail with serialization errors such as the ClassCastException shown above.
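As a minimal sketch (the version strings and the helper name are illustrative, not part of the Databricks Connect API), the compatibility rule amounts to a major.minor comparison between the client and the runtime:

```python
def versions_match(client_version: str, runtime_version: str) -> bool:
    """True when the Databricks Connect client and the Databricks Runtime
    share the same major.minor version, which is what compatibility requires."""
    return client_version.split(".")[:2] == runtime_version.split(".")[:2]

# Illustrative values: a 9.1.x client is compatible with a 9.1 runtime,
# while a 7.3.x client against a 9.1 cluster can fail with errors like
# the ClassCastException above.
print(versions_match("9.1.24", "9.1"))  # True
print(versions_match("7.3.64", "9.1"))  # False
```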

Solution

The Databricks Connect package must be kept in sync with the corresponding Databricks Runtime release.

Review the Databricks Connect release notes (AWS | Azure | GCP) to determine the correct version to use with your selected Databricks Runtime.
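Once you have chosen the correct version from the release notes, you can sanity-check your local environment before running jobs. The sketch below is one way to do this, assuming the client is installed as the pip package databricks-connect; the runtime string "9.1" is illustrative, so substitute your cluster's actual Databricks Runtime version:

```python
# Sketch: verify the locally installed databricks-connect client matches the
# Databricks Runtime version chosen from the release notes.
from importlib import metadata
from typing import Optional

def installed_client_version() -> Optional[str]:
    """Return the installed databricks-connect version, or None if absent."""
    try:
        return metadata.version("databricks-connect")
    except metadata.PackageNotFoundError:
        return None

def pip_pin_for(runtime: str) -> str:
    """Build the pip specifier that pins the client to a runtime's major.minor."""
    return f"databricks-connect=={runtime}.*"

runtime = "9.1"  # illustrative: your cluster's Databricks Runtime version
client = installed_client_version()
if client is None or not client.startswith(runtime + "."):
    print(f'Reinstall the client: pip install -U "{pip_pin_for(runtime)}"')
else:
    print(f"Client {client} matches runtime {runtime}.")
```

Keeping the pip requirement pinned to the runtime's major.minor (for example, databricks-connect==9.1.*) lets routine upgrades pick up new maintenance releases of the client without drifting away from the cluster's runtime version.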
