Problem
Your Databricks job reports a failed status, but all Spark jobs and tasks have successfully completed.
Cause
You have explicitly called spark.stop() or System.exit(0) in your code.
If either of these is called, the Spark context stops, but the graceful shutdown and handshake with the Databricks job service does not happen, so the job service reports the run as failed even though all of the Spark work finished.
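For example, a job main class structured like the following triggers this behavior. This is a minimal Scala sketch; `ExampleJob` and the workload are hypothetical stand-ins for your own code.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical job entry point, for illustration only.
object ExampleJob {
  def main(args: Array[String]): Unit = {
    // On Databricks, getOrCreate() returns the cluster-managed session.
    val spark = SparkSession.builder.getOrCreate()

    // Placeholder workload; all Spark jobs and tasks complete successfully.
    spark.range(1000).count()

    spark.stop()    // Stops the Spark context before the handshake...
    System.exit(0)  // ...or exits the JVM outright. Either call causes
                    // the Databricks job service to report the run as failed.
  }
}
```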
Solution
Do not call spark.stop() or System.exit(0) in Spark code that is running on a Databricks cluster. Databricks manages the Spark context for you; let your code return normally so the job service can perform the graceful shutdown and report the correct final status.
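The same hypothetical job, corrected, simply lets main return:

```scala
import org.apache.spark.sql.SparkSession

object ExampleJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.getOrCreate()
    spark.range(1000).count()
    // No spark.stop() or System.exit(0): returning normally lets
    // Databricks shut down gracefully and mark the run as succeeded.
  }
}
```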