Lakeflow Declarative Pipelines failing with the error "UNTRACEABLE_TABLE_MODIFICATION"

Perform a full refresh of the table.

Written by aishwarya.ghosh

Last published at: February 12th, 2026

Problem

While working with Lakeflow Declarative Pipelines streaming tables, you encounter the following error.

{
  "eventType": "UNTRACEABLE_TABLE_MODIFICATION",
  "message": "Flow 'gateway_cdc_UserTopic' has failed because of a fatal error during realtime phase",
  "snapshot_request_timestamp": <timestamp>
}
tech.replicant.common.ExtractorException: Untraceable table modification detected for table table-name, reason: Table truncated. Reinit required.

You notice that the error occurs for multiple tables loaded from the same database.
 

Cause

A disruptive source-side event, such as a table truncate or a change data capture (CDC) log rollover, can break the pipeline’s ability to reliably track changes. When this happens, the table becomes stale and stops refreshing even if new data arrives.

When idle tables (tables with no changes since the last replication run) outnumber active ones, the table-cursor mechanism can become inefficient and lose log history, particularly after an extended pipeline stoppage or a truncate operation on the table. Once log history is lost, the pipeline can no longer resume accurate change tracking, which leaves the table stale.
 

Solution

  1. Trigger a full refresh on the affected table to re-include it in your pipeline for replication. The full refresh reloads the entire current state of the table from scratch.
  2. Confirm that the pipeline is properly configured and that change tracking is enabled for the relevant tables. Run the following command in a notebook to enable change tracking at the table level.
ALTER TABLE <schema-name>.<table-name> ENABLE CHANGE_TRACKING;
  3. After the full refresh completes, check the pipeline status to confirm the error does not reappear.
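Because the error can affect multiple tables loaded from the same database, it can help to enumerate every affected flow before triggering refreshes. The following is a minimal sketch that scans an exported pipeline event log (assumed here to be JSON Lines, with field names matching the event payload shown in the Problem section; adjust them to your actual event log schema):

```python
import json

def flows_needing_full_refresh(event_lines):
    """Return the flow names that hit UNTRACEABLE_TABLE_MODIFICATION.

    event_lines: iterable of JSON strings, one event per line.
    The field names below mirror the sample event in the Problem
    section and may differ in your environment.
    """
    flows = set()
    for line in event_lines:
        event = json.loads(line)
        if event.get("eventType") == "UNTRACEABLE_TABLE_MODIFICATION":
            # The flow name is quoted inside the message, e.g.
            # "Flow 'gateway_cdc_UserTopic' has failed ..."
            message = event.get("message", "")
            if "'" in message:
                flows.add(message.split("'")[1])
    return sorted(flows)

# Example using the event from the Problem section:
sample = [
    json.dumps({
        "eventType": "UNTRACEABLE_TABLE_MODIFICATION",
        "message": "Flow 'gateway_cdc_UserTopic' has failed because of a fatal error during realtime phase",
    }),
    json.dumps({
        "eventType": "flow_progress",
        "message": "Flow 'orders' is RUNNING",
    }),
]
print(flows_needing_full_refresh(sample))  # ['gateway_cdc_UserTopic']
```

Each flow name returned corresponds to a table that needs the full refresh described in step 1.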
     

Best practices 

  • Regularly check the pipeline status to detect any potential issues before they become critical.
  • Use the change data feed configuration to minimize the likelihood of log history loss for idle tables. For more information, refer to the Use Delta Lake change data feed on Databricks (AWS | Azure | GCP) documentation.
  • Avoid prolonged pipeline stops, and monitor the CDC metadata and SQL Server change tracking (CT) state.
  • If the pipeline has been stopped for too long, the pipeline state or CDC metadata can remain stale, leading to issues even after the fix has been implemented.
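As an illustrative check for the last point, you can compare the pipeline's downtime against the change tracking retention window: if the pipeline was stopped longer than the database's CHANGE_RETENTION setting, SQL Server's cleanup may have purged history the pipeline still needs, and a full refresh is likely required. The helper below is a hedged sketch, not a Databricks or SQL Server API:

```python
from datetime import datetime, timedelta

def ct_cleanup_risk(last_run, now, retention_days):
    """Return True if SQL Server change tracking cleanup may have purged
    history the pipeline still needs.

    last_run:       when the pipeline last ran successfully
    now:            current time (passed in for testability)
    retention_days: the database's CHANGE_RETENTION value in days
    """
    return (now - last_run) > timedelta(days=retention_days)

# Pipeline stopped for 9 days against a 7-day retention window:
print(ct_cleanup_risk(datetime(2026, 1, 1), datetime(2026, 1, 10), 7))  # True
```

When this check returns True, plan for a full refresh rather than expecting incremental replication to resume cleanly.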