Spark 2.0.0 cluster takes a long time to append data
If you find that a cluster running the Spark 2.0.0 version takes a long time to append data to an existing dataset, and in particular that all of your Spark jobs have finished but your command has not, it is because the driver node is moving the output files of the tasks from the job temporary directory to the final destination one by one, which is slow with cloud storage. To resolve this issue, set mapreduce.fileoutputcommitter.algorithm.version to 2. This issue does not affect overwriting a dataset or writing data to a new location.
Note

Starting with Spark 2.0.1-db1, the default value of mapreduce.fileoutputcommitter.algorithm.version is 2. If you are using Spark 2.0.0, manually set this config if you experience this slowness issue.
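To check which value is in effect on your cluster, one option is to read the driver's Hadoop configuration from a notebook. This is a minimal sketch, assuming spark is the SparkSession provided by Databricks notebooks:

```scala
// Prints the effective committer algorithm version from the driver's
// Hadoop configuration ("1" or "2", or null if the property is unset
// in the loaded configuration).
println(spark.sparkContext.hadoopConfiguration
  .get("mapreduce.fileoutputcommitter.algorithm.version"))
```

Note that this reflects cluster-level settings (such as values set with the spark.hadoop. prefix in the cluster's Spark config); values set with spark.conf.set live in the session configuration instead.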
How do I confirm that I am experiencing this issue?
You can confirm that you are experiencing this issue by checking the following things:

- All of your Spark jobs have finished, but your cell has not finished. (The progress bar shows every job as complete while the command keeps running.)
- The thread dump of the driver (you can find it on the Executors page of the Spark UI) shows a thread spending a long time inside the commitJob method of the FileOutputCommitter class.
How do I set spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version to 2?
You can set this config by using any of the following methods (a sketch follows the list):

- When you launch your cluster, put spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2 in the Spark config.
- In your notebook, run %sql set mapreduce.fileoutputcommitter.algorithm.version=2 or spark.conf.set("mapreduce.fileoutputcommitter.algorithm.version", "2") (spark is a SparkSession object provided with Databricks notebooks).
- When you write data using the Dataset API, set it in the write options, for example dataset.write.option("mapreduce.fileoutputcommitter.algorithm.version", "2").
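For example, the notebook-level and per-write approaches look like the following. This is a minimal sketch, assuming a Databricks notebook where spark is the provided SparkSession; the dataset and the output path are hypothetical placeholders:

```scala
// Session level: applies to all subsequent writes in this notebook.
spark.conf.set("mapreduce.fileoutputcommitter.algorithm.version", "2")

// Per-write: applies only to this write. The dataset and the output
// path below are hypothetical placeholders for your own data.
val dataset = spark.range(1000).toDF("id")
dataset.write
  .option("mapreduce.fileoutputcommitter.algorithm.version", "2")
  .mode("append")
  .parquet("/mnt/example/output")
```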
What is the cause?
When Spark appends data to an existing dataset, Spark uses FileOutputCommitter to manage staging output files and final output files. The behavior of FileOutputCommitter has a direct impact on the performance of jobs that write data. A FileOutputCommitter has two methods, commitTask and commitJob.
Apache Spark 2.0 and higher versions use Apache Hadoop 2, which uses the value of mapreduce.fileoutputcommitter.algorithm.version to control how commitTask and commitJob work. In Hadoop 2, the default value of mapreduce.fileoutputcommitter.algorithm.version is 1. With this version, commitTask moves data generated by a task from the task temporary directory to the job temporary directory, and when all tasks complete, commitJob moves data from the job temporary directory to the final destination [1]. Because the driver is doing the work of commitJob, this operation can take a long time with cloud storage. You may often think that your cell is “hanging”.
However, when the value of mapreduce.fileoutputcommitter.algorithm.version is 2, commitTask moves data generated by a task directly to the final destination and commitJob is basically a no-op.
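To make the difference concrete, here is a simplified, hypothetical model of the two algorithms. It is not the actual Hadoop implementation (which operates on Hadoop FileSystem paths and also handles task attempts, recovery, and success markers); moveAll is an illustrative helper:

```scala
import java.nio.file.{Files, Path}

// Illustrative helper: move every file under src into dst.
def moveAll(src: Path, dst: Path): Unit =
  Files.list(src).forEach(f => Files.move(f, dst.resolve(f.getFileName)))

// Algorithm version 1: a two-step commit.
def commitTaskV1(taskTempDir: Path, jobTempDir: Path): Unit =
  moveAll(taskTempDir, jobTempDir)   // runs on the executors, in parallel
def commitJobV1(jobTempDir: Path, finalDir: Path): Unit =
  moveAll(jobTempDir, finalDir)      // runs on the driver, one file at a
                                     // time: slow on cloud storage

// Algorithm version 2: each task publishes its output directly.
def commitTaskV2(taskTempDir: Path, finalDir: Path): Unit =
  moveAll(taskTempDir, finalDir)     // runs on the executors, in parallel
def commitJobV2(): Unit = ()         // basically a no-op
```

With version 2, the expensive one-by-one move on the driver disappears, which is why appends to cloud storage finish much faster.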