Why use VACUUM on Delta Lake?
VACUUM is used to clean up unused and stale data files that are taking up unnecessary storage space. Removing these files can help reduce storage costs.
When you run VACUUM on a Delta table, it removes the following files from the underlying file system:
- Any data files in the table directory that are not managed by Delta Lake
- Stale data files (files that are no longer referenced by the Delta table) that are older than the retention threshold, which defaults to seven days
When should you run VACUUM?
The default configuration for a Delta table allows you to time travel 30 days into the past. However, to do this, the underlying data files must be present.
The default configuration for VACUUM deletes stale data files that are older than seven days. As a result, if you run VACUUM with the default settings, you will only be able to time travel seven days into the past, from the time you run VACUUM.
If you do not need to time travel more than seven days into the past, you can VACUUM on a daily basis.
Running VACUUM daily helps keep storage costs in check, especially for larger tables. You can also run VACUUM on-demand if you notice a sudden surge in the storage costs for a specific Delta table.
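If your retention needs differ from the default, VACUUM also accepts an explicit retention window. As a sketch (the table path is a placeholder, and the RETAIN value should not be set below the table's configured retention threshold):

```sql
-- Default: removes stale data files older than 7 days.
VACUUM delta.`<table-path>`

-- Explicit retention: keep 10 days (240 hours) of history instead.
VACUUM delta.`<table-path>` RETAIN 240 HOURS
```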
Issues you may face with VACUUM
- No progress update: VACUUM does not report how far it has progressed, which is a problem when it runs for a long time. You cannot tell how many files have been removed and how many remain.
- Poor run performance: VACUUM can run for a long time, especially on huge tables or on tables that receive high-frequency streaming writes.
Mitigate issues with VACUUM
No progress update
If VACUUM completes within an hour or two, there is no need to troubleshoot. However, if it runs for longer than two hours (which can happen on large tables when VACUUM has not been run recently), you may want to check its progress. In that case, run VACUUM with the DRY RUN option before and after the actual VACUUM run to monitor its performance and identify the number of files deleted.
- Run VACUUM DRY RUN to determine the number of files eligible for deletion. Replace <table-path> with the actual table path location.
```python
%python
spark.sql("VACUUM delta.`<table-path>` DRY RUN")
```
The DRY RUN option tells VACUUM not to delete any files. Instead, it prints the number of files and directories that are safe to delete. The goal of this step is not to delete files, but to record how many files are eligible for deletion.
The example DRY RUN command returns output reporting that x files and directories are safe to delete:
```
Found x files and directories in a total of y directories that are safe to delete.
```
You should record the number of files identified as safe to delete.
- Run VACUUM.
- Cancel VACUUM after one hour.
- Run VACUUM with DRY RUN again.
- The second DRY RUN command identifies the number of outstanding files that can be safely deleted.
- Subtract the outstanding number of files (second DRY RUN) from the original number of files to get the number of files that were deleted.
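The before-and-after bookkeeping from the steps above can be sketched in plain Python. The output message format and counts here are illustrative examples, not values from a real run:

```python
import re

# Example DRY RUN output lines printed by VACUUM (counts are made up).
before = "Found 12500 files and directories in a total of 310 directories that are safe to delete."
after = "Found 4200 files and directories in a total of 310 directories that are safe to delete."

def files_safe_to_delete(message: str) -> int:
    """Extract the leading file count from a VACUUM DRY RUN summary line."""
    match = re.search(r"Found (\d+) files", message)
    if match is None:
        raise ValueError(f"Unexpected DRY RUN output: {message!r}")
    return int(match.group(1))

# Files removed by the interrupted VACUUM = initial count - outstanding count.
deleted = files_safe_to_delete(before) - files_safe_to_delete(after)
print(deleted)  # 8300
```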
Poor run performance
This can be mitigated by following VACUUM best practices.
Avoid actions that hamper performance
Avoid over-partitioned data folders
- Over-partitioned data can result in a lot of small files. You should avoid partitioning on a high cardinality column. When you over-partition data, even running OPTIMIZE can have issues compacting small files, as compaction does not happen across partition directories.
- File deletion speed is directly dependent on the number of files. Over-partitioning data can hamper the performance of VACUUM.
Avoid concurrent runs
- When running VACUUM on a large table, avoid concurrent runs (including dry runs).
- Avoid running other operations on the same location to avoid file system level throttling. Other operations can compete for the same bandwidth.
Avoid cloud versioning
- Since Delta Lake maintains version history, you should avoid using cloud version control mechanisms, like S3 versioning on AWS.
- Using cloud version controls in addition to Delta Lake can result in additional storage costs and performance degradation.
Actions to improve performance
- Run OPTIMIZE to eliminate small files. When you combine OPTIMIZE with regular VACUUM runs you ensure the number of stale data files (and the associated storage cost) is minimized.
- Review the documentation on autoOptimize and autoCompaction (AWS | Azure | GCP) for more information.
- Review the documentation on OPTIMIZE (AWS | Azure | GCP) for more information.
Use Databricks Runtime 10.4 LTS or above and additional driver cores (Azure and GCP only)
- On Azure and GCP VACUUM performs the deletion in parallel on the driver, when using Databricks Runtime 10.4 LTS or above. The higher the number of driver cores, the more the operation can be parallelized.
Use Databricks Runtime 11.1 or above on AWS
- Databricks Runtime 11.1 and above set the checkpoint creation interval to 100 commits, instead of 10. As a result, fewer checkpoint files are created. With fewer checkpoint files to index, listing the transaction log directory is faster. This reduces the Delta log size, improves VACUUM listing time, and decreases checkpoint storage size.
- If you are using Databricks Runtime 10.4 LTS on AWS and cannot update to a newer runtime, you can manually set the table property with delta.checkpointInterval=100. This creates checkpoint files for every 100 commits, instead of every 10 commits.
```sql
%sql
ALTER TABLE <delta-table-name> SET TBLPROPERTIES ('delta.checkpointInterval' = 100)
```
Use compute optimized instances
- Since VACUUM is compute intensive, you should use compute optimized instances.
- On AWS use C5 series worker types.
- On Azure use F series worker types.
- On GCP use C2 series worker types.
Use auto-scaling clusters
- Before deleting files, the VACUUM command lists them. File listing is parallelized across the workers in the cluster, so the more workers you have, the faster the initial listing completes.
- Additional workers are NOT needed for file deletion, which runs on the driver. This is why you should use an auto-scaling cluster with multiple workers: once the file listing completes, the cluster can scale down and the driver handles the deletion. This saves cluster costs.
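As one way to apply this, a hypothetical job-cluster definition for the Jobs API might use an autoscale range so the cluster shrinks after listing. The runtime version and node type below are illustrative placeholders, not recommendations:

```json
{
  "new_cluster": {
    "spark_version": "11.3.x-scala2.12",
    "node_type_id": "c5.2xlarge",
    "autoscale": {
      "min_workers": 1,
      "max_workers": 8
    }
  }
}
```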
Set a higher trigger interval for streaming jobs
- Use a trigger interval of 120 seconds or more for streaming jobs that write to Delta tables. You can adjust this based on your latency needs.
```scala
// ProcessingTime trigger with a 120 second micro-batch interval
df.writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("120 seconds"))
  .start()
```
- The longer the trigger interval, the bigger the data files. Bigger data files mean fewer total files, and fewer files take less time to delete. As a result, future VACUUM runs complete faster.
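The effect of the trigger interval on file counts can be estimated with rough arithmetic, assuming each micro-batch writes a fixed number of output files (a simplification; real counts depend on partitioning and shuffle settings):

```python
SECONDS_PER_DAY = 24 * 60 * 60

def files_per_day(trigger_interval_s: int, files_per_batch: int = 1) -> int:
    """Rough upper bound on daily file count: one micro-batch per trigger interval."""
    return (SECONDS_PER_DAY // trigger_interval_s) * files_per_batch

print(files_per_day(10))   # 8640 files/day with a 10-second trigger
print(files_per_day(120))  # 720 files/day with a 120-second trigger
```

Moving from a 10-second to a 120-second trigger cuts the daily file count by 12x in this simplified model, which directly shortens VACUUM's listing and deletion work.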
Reduce log retention
- If you do not need to time travel far into the past, you can reduce log retention to seven days. This reduces the number of JSON files and thereby reduces the listing time. This also reduces the delta log size.
- The delta.logRetentionDuration table property configures how far back in time you can travel. The default value is 30 days. Use ALTER TABLE to modify the property on existing tables.
```sql
%sql
ALTER TABLE <table-name> SET TBLPROPERTIES ('delta.logRetentionDuration'='7 days')
```
Run VACUUM daily
- If you reduce log retention to seven days (thereby limiting time travel to seven days) you can run VACUUM on a daily basis.
- This deletes stale data files that are older than seven days, every day. It is a good way to avoid accumulating stale data files and to reduce your storage costs.
- After testing and verification on a small table, you can schedule VACUUM to run every day via a job.
- Schedule VACUUM to run using a job cluster, instead of running it manually on all-purpose clusters, which may cost more.
- Use an auto-scaling cluster when configuring the job to save costs.
To improve VACUUM performance:
- Avoid over-partitioned directories
- Avoid concurrent runs (during VACUUM)
- Avoid enabling cloud storage file versioning
- If you run a periodic OPTIMIZE command, enable autoCompaction/autoOptimize on the delta table
- Use a current Databricks Runtime
- Use auto-scaling clusters with compute optimized worker types
In addition, if your application allows for it:
- Increase the trigger interval of any streaming jobs that write to your Delta table
- Reduce the log retention duration of the Delta table
- Perform a periodic VACUUM
These additional steps further increase VACUUM performance and can also help reduce storage costs.