JDBC write fails with a PrimaryKeyViolation error

A JDBC write to a SQL database fails with a `PrimaryKeyViolation` error or results in duplicate data

Written by harikrishnan.kunhumveettil

Last published at: May 24th, 2022

Problem

You are using JDBC to write to a SQL table that has primary key constraints, and the job fails with a `PrimaryKeyViolation` error.

Alternatively, you are using JDBC to write to a SQL table that does not have primary key constraints, and you see duplicate entries in the recently written table.

Cause

When Apache Spark performs a JDBC write, each partition of the DataFrame is written to the SQL table. This is generally done as a single JDBC transaction per partition, to avoid partially inserted data. However, if a task fails after its transaction commits, but before Spark records the task as complete, Spark retries the task and inserts the same rows a second time, copying duplicate data into the SQL table.
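For example, a plain JDBC append like the following sketch is exposed to this failure mode; the connection URL, table name, and credentials are placeholders.

```python
# Minimal sketch of a JDBC append that can duplicate rows on task retry.
# The URL, table name, and credentials are placeholders.
(df.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<host>:1433;database=<db>")
    .option("dbtable", "dbo.target_table")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("append")
    .save())
```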

The `PrimaryKeyViolation` error occurs when such a retried write attempts to insert a duplicate value for the primary key.

Solution

You should use a temporary table to buffer the write and confirm there is no duplicate data before it reaches the target table.

  1. Verify that speculative execution is disabled in your Spark configuration: `spark.speculation false`. It is disabled by default, but speculative task attempts can also cause double inserts (see the configuration check after this list).
  2. Create a temporary table on your SQL database.
  3. Modify your Spark code to write to the temporary table instead of the target table (a PySpark sketch follows this list).
  4. After the Spark write has completed, check the temporary table to ensure it contains no duplicate data.
  5. Merge the temporary table into the target table on your SQL database (example below).
  6. Delete the temporary table.
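Step 1 can be confirmed from a notebook; `spark.speculation` defaults to `false` when unset. A minimal check:

```python
# Speculative execution launches duplicate task attempts, which can
# double-insert rows; confirm it is disabled ("false" is the default).
print(spark.conf.get("spark.speculation", "false"))
```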
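For steps 2 and 3, one approach is to let the JDBC writer create and populate the temporary table directly. This is a sketch; `jdbc_url`, the credentials, and the staging table name `dbo.target_table_staging` are hypothetical.

```python
# Write to a staging table instead of the target table. Overwrite mode
# recreates the staging table on each run, so a failed job can simply
# be rerun from scratch.
(df.write
    .format("jdbc")
    .option("url", jdbc_url)                      # hypothetical variable
    .option("dbtable", "dbo.target_table_staging")
    .option("user", user)                         # hypothetical variable
    .option("password", password)                 # hypothetical variable
    .mode("overwrite")
    .save())
```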
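Steps 4 through 6 run against the SQL database itself. The sketch below uses pyodbc and T-SQL `MERGE` syntax for SQL Server; the connection string, table names, and the `id`/`payload` columns are assumptions, so adapt the SQL to your database and schema.

```python
import pyodbc

# Hypothetical connection string, table names, and columns.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<host>;DATABASE=<db>;UID=<user>;PWD=<password>"
)
cur = conn.cursor()

# Step 4: verify the staging table contains no duplicate primary keys.
cur.execute("""
    SELECT id, COUNT(*)
    FROM dbo.target_table_staging
    GROUP BY id
    HAVING COUNT(*) > 1
""")
dupes = cur.fetchall()
assert not dupes, f"Duplicate keys in staging table: {dupes}"

# Step 5: merge staging rows into the target table, inserting only
# keys that are not already present.
cur.execute("""
    MERGE INTO dbo.target_table AS t
    USING dbo.target_table_staging AS s
        ON t.id = s.id
    WHEN NOT MATCHED THEN
        INSERT (id, payload) VALUES (s.id, s.payload);
""")

# Step 6: drop the staging table once the merge has succeeded.
cur.execute("DROP TABLE dbo.target_table_staging")
conn.commit()
cur.close()
conn.close()
```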

Info

This workaround should only be used if you encounter the data duplication issue described above, as it carries a small performance penalty compared to Spark jobs that write directly to the target table.