Job fails, but Apache Spark tasks finish


Problem

Your Databricks job reports a failed status, but all Spark jobs and tasks have completed successfully.


Cause

You have explicitly called spark.stop() or System.exit(0) in your code.

If either of these is called, the Spark context stops, but the graceful shutdown and handshake with the Databricks job service does not happen, so the run is reported as failed even though every task succeeded.
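The failure mode can be sketched in plain Python (no Spark required). Here `report_run_success` is a hypothetical stand-in for the job service handshake that normally runs after user code; an explicit exit in the middle of the driver skips it:

```python
import sys

def report_run_success():
    # Hypothetical stand-in for the platform's post-run handshake
    # that marks the run as succeeded.
    print("run marked as succeeded")

def user_job():
    # All Spark work completes successfully here...
    result = sum(range(100))  # stand-in for Spark tasks
    sys.exit(0)               # anti-pattern: exits the driver early
    # Nothing below this point ever runs.

try:
    user_job()
    report_run_success()      # never reached
except SystemExit:
    print("driver exited before the success handshake")
```

Even though `sys.exit(0)` signals a "successful" exit code, the handshake code after the user's job never executes, which is why the run is recorded as failed.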


Solution

Do not call spark.stop() or System.exit(0) in Spark code that runs on a Databricks cluster. Let your code return normally so the job service can complete its shutdown handshake and report the run correctly.
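A minimal sketch of the corrected structure, again in plain Python for illustration: the driver function returns normally instead of forcing an exit, so any cleanup or handshake the platform runs after user code can still execute.

```python
def user_job():
    # All Spark work happens here.
    result = sum(range(100))  # stand-in for Spark tasks
    # Return normally; do not call sys.exit() or spark.stop().
    return result

# The driver finishes naturally, so any shutdown handshake
# the platform runs after user code can still execute.
print(user_job())  # → 4950
```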