Queries on tables shared using Delta Sharing fail with error "Error while reading file delta-sharing"

Enable history on all tables you share using Delta Sharing.

Written by jayant.sharma

Last published at: September 9th, 2025

Problem

While querying tables shared using Delta Sharing, you use a filter such as __END_AT in the WHERE clause, or another filter that references historical data. The following query provides an example.

SELECT * FROM `catalog`.`schema`.`table_name` WHERE `__END_AT` IS NULL

You encounter the following error. 

Error Message/Issue: Error while reading file delta-sharing:xxxxxxxxx/.uc-deltasharing.... org.apache.spark.SparkIOException

You do not see the error with tables shared at the schema level or with Delta UniForm tables.

Cause

When you examine the error stack trace, you see that it indicates an Apache Spark failure while fetching a file from the Delta Sharing server. Your query likely involved time travel or referenced historical versions of the Delta table, causing Spark to request files that the share does not expose. This means the tables shared using Delta Sharing do not have history sharing enabled.
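
For example, a history-dependent query such as the following minimal sketch can trigger the error when history sharing is disabled. The table name and version are illustrative.

-- Time travel reads data files from an earlier table version. With history
-- sharing disabled, the share does not expose those files.
SELECT * FROM `catalog`.`schema`.`table_name` VERSION AS OF 10;

-- Timestamp-based time travel behaves the same way.
SELECT * FROM `catalog`.`schema`.`table_name` TIMESTAMP AS OF '2025-01-01';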

Stack trace

Error Message/Issue: Error while reading file delta-sharing:/xxxxxxxxx.uc-deltasharing.... org.apache.spark.SparkIOException
com.databricks.sql.io.FileReadException:
   at com.databricks.sql.io.FileReadException$.fileReadError(FileReadException.scala:48)
   at com.databricks.sql.io.FileReadException.fileReadError(FileReadException.scala)
   [....]
   Caused by: org.apache.spark.SparkIOException:
   at org.apache.spark.sql.errors.QueryExecutionErrors$.hdfsHttpErrorUncategorized(QueryExecutionErrors.scala:3215)
   at org.apache.spark.sql.errors.QueryExecutionErrors.hdfsHttpErrorUncategorized(QueryExecutionErrors.scala)
   at com.databricks.sql.io.HDFSStorage$ReadFileImpl.lambda$fetchRange$0(HDFSStorage.java:118)
   [....]
Caused by: io.delta.sharing.client.util.UnexpectedHttpStatus:
   at io.delta.sharing.client.RandomAccessHttpInputStream.$anonfun$reopen$3(RandomAccessHttpInputStream.scala:172)
   at io.delta.sharing.client.util.RetryUtils$.runWithExponentialBackoff(RetryUtils.scala:43)
   at io.delta.sharing.client.RandomAccessHttpInputStream.reopen(RandomAccessHttpInputStream.scala:150)
   at io.delta.sharing.client.RandomAccessHttpInputStream.seek(RandomAccessHttpInputStream.scala:88)
   at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:73)
   at com.databricks.spark.util.DatabricksRangeFetcher.fetchRange(DatabricksRangeFetcher.scala:78)
   at com.databricks.io.RangeFetcher$.fetch(RangeFetcher.scala:104)
   at com.databricks.io.RangeFetcher.fetch(RangeFetcher.scala)
   at com.databricks.sql.io.HDFSStorage$ReadFileImpl.lambda$fetchRange$0(HDFSStorage.java:105)

When history sharing is enabled and no partition filter is applied, Delta Sharing grants access to the entire table directory in cloud storage. When history sharing is disabled, or a partition filter is applied, Delta Sharing instead issues pre-signed URLs that grant access only to specific Parquet files.
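
For example, adding a table to a share with a partition filter puts it in the limited-access case. The share, table, and partition names in this sketch are illustrative.

-- Only files in the matching partition are exposed through pre-signed URLs.
ALTER SHARE my_share ADD TABLE my_catalog.my_schema.my_table PARTITION (region = 'US');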

History sharing is enabled by default for tables shared at the schema level and for Delta UniForm tables, which is why you do not see this issue with those tables.
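
For example, adding an entire schema to a share (schema-level sharing) leaves history sharing on by default for the tables it contains. The share and schema names in this sketch are illustrative.

-- Tables shared through the schema keep history sharing enabled by default.
ALTER SHARE my_share ADD SCHEMA my_catalog.my_schema;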

For more information, refer to the “Add tables to a share” section of the Create and manage shares for Delta Sharing (AWS | Azure | GCP) documentation.

Solution

Enable history on all tables you share using Delta Sharing. This ensures that recipients of your shares have access to the full table directory, including historical data files.
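
If you manage shares using SQL, one way to enable history when adding a table is the WITH HISTORY clause, as in the following sketch. The share and table names are illustrative; for a table that is already in the share, use the Update a share API described below instead.

-- Shares the table together with its history, enabling time travel queries.
ALTER SHARE my_share ADD TABLE my_catalog.my_schema.my_table WITH HISTORY;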

Use the Databricks API to set the history_data_sharing_status parameter to ENABLED. For details, refer to the Update a share (AWS | Azure | GCP) API documentation.
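
As a sketch, a PATCH request body for the Update a share endpoint could look like the following, assuming an already-shared table named my_catalog.my_schema.my_table (an illustrative name). Verify the exact payload shape against the Update a share API documentation for your cloud.

{
  "updates": [
    {
      "action": "UPDATE",
      "data_object": {
        "name": "my_catalog.my_schema.my_table",
        "data_object_type": "TABLE",
        "history_data_sharing_status": "ENABLED"
      }
    }
  ]
}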