Problem
A serverless query executed in a notebook still appears as "running" in the Databricks UI even after you attempt to cancel it.
Cause
This issue arises from a combination of factors:
- The connection between the REPL and the Spark Connect Gateway is lost, causing the interrupt process to remain incomplete.
- No explicit interrupt request was sent to cancel the query, preventing proper termination.
- There are gaps in the cleanup process, specifically the failure to call `postClosed` during interruptions. This call is essential for updating the query history UI.
- The SQL History service could not reconnect to the Spark Connect session to retrieve status updates after the session became inactive.
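One of the causes above is that no explicit interrupt request ever reaches the server. In PySpark 3.5+ Spark Connect sessions, an interrupt can be sent explicitly with the session's `addTag`/`interruptTag` methods. The helper below is a minimal sketch of that pattern; the function name and the `"long-report"` tag are illustrative, and the session object is assumed to be the notebook's `spark`:

```python
def interrupt_tagged_queries(session, tag: str) -> list[str]:
    """Send an explicit interrupt for every operation carrying `tag`.

    `interruptTag` is a PySpark 3.5+ Spark Connect session method; it
    returns the IDs of the operations that were interrupted.
    """
    return session.interruptTag(tag)


# Usage inside a notebook (assumes a live Spark Connect session `spark`):
# spark.addTag("long-report")
# df = spark.sql("SELECT ...")  # this query now carries the tag
# interrupt_tagged_queries(spark, "long-report")
```

Tagging queries up front makes a later cancellation explicit rather than relying on the REPL-to-gateway connection surviving long enough to propagate it.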
Solution
- Run the `CANCEL <your-query-id>` command from a different notebook or SQL editor to terminate the query forcefully.
- Configure a query timeout by setting the `spark.databricks.queryWatchdog.timeoutInSeconds` property. This limits how long a query can run before being automatically terminated.
- Verify that the query is no longer displayed as "running" in the query history.
For more information, refer to the Query history (AWS | Azure | GCP) documentation.