Getting REQUEST_LIMIT_EXCEEDED when using catalog information-schema

Reduce information schema query concurrency, add selective filters to the query, and avoid issuing information schema queries too frequently.

Written by kevin.salas

Last published at: July 18th, 2025

Problem

When you make frequent queries against your Unity Catalog information schema, you encounter a request rate limit error.

"[RequestId=<your-request-id> ErrorClass=REQUEST_LIMIT_EXCEEDED.REQUEST_LIMIT_EXCEEDED]
Your request was rejected since your organization has exceeded the rate limit. Please retry your request later."


Cause

Your organization has exceeded its allocated rate limit. This limit is in place to maintain Databricks service stability. Frequent information schema queries can overload the service and trigger the error.


In some cases, the error is caused by concurrent queries executed too frequently, generating a large number of Remote Procedure Calls (RPCs) that hit the rate limit. For example, hundreds of queries per hour against the `columns` table can produce enough RPCs to trigger the limit.
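As an illustration, a broad, unfiltered query such as the following (the catalog name `my_catalog` is hypothetical) forces the service to fetch column metadata for every table in the catalog, and running it on a tight schedule multiplies the RPC load:

```sql
-- Unfiltered information schema scan: column metadata for every
-- table in the catalog is fetched, generating many RPCs per run.
SELECT table_name, column_name, data_type
FROM my_catalog.information_schema.columns;
```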


Solution

There are three actions you can take to reduce the likelihood of receiving a rate limit error. 

  • Reduce the concurrency of information schema queries by making fewer calls to the API. These queries scan the Unity Catalog service database directly, so keep their frequency low.
  • Add supported selective filters to the query. Databricks supports pushdown filters of the form `column_name (like/=/>/</>=/<=) <literal>`, where the column can be `<catalog-name>`, `<schema-name>`, or `<table-name>`. Selective filters reduce the amount of data scanned, making the query faster and less costly for the service.
  • Avoid issuing information schema queries too frequently; treat them like any other REST API calls to the Databricks service, which are subject to rate limits.
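As a sketch of the second point, the filtered query below (the catalog, schema, and table names are hypothetical) pushes selective predicates down to the Unity Catalog service, so only the matching table's metadata is scanned instead of the whole catalog:

```sql
-- Selective equality filters on schema and table name are pushed
-- down to the Unity Catalog service, limiting the scan to one table.
SELECT column_name, data_type
FROM my_catalog.information_schema.columns
WHERE table_schema = 'my_schema'
  AND table_name = 'my_table';
```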


For more information, review the Information schema (AWS | Azure | GCP) documentation.