JDBC write operation fails with HiveSQLException error: The background threadpool cannot accept new task for execution

Change the spark.hive.server2.async.exec.threads, spark.hive.server2.async.exec.wait.queue.size, and spark.hive.server2.async.exec.keepalive.time configs to handle more concurrent asynchronous queries.

Written by manikandan.ganesan

Last published at: April 28th, 2025

Problem

When running a JDBC write operation using the Databricks JDBC driver, your job fails with the following error.

Caused by: com.databricks.client.support.exceptions.ErrorException: [Databricks][DatabricksJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: TStatus(statusCode:ERROR_STATUS, infoMessages:[*org.apache.hive.service.cli.HiveSQLException: The background threadpool cannot accept new task for execution, please retry the operation. ]) 
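For context, a write like the following can trigger the error when it runs with high parallelism, because a Spark JDBC write opens one connection per partition and each statement executes asynchronously on the target cluster. This is an illustrative sketch only; the connection URL, HTTP path, token, table name, and DataFrame are placeholders, and it assumes the com.databricks.client.jdbc.Driver class shipped with the Databricks JDBC driver.

# Illustrative sketch only. Replace the placeholders (<workspace-host>,
# <http-path>, <personal-access-token>, <target_table>) with real values.
jdbc_url = (
    "jdbc:databricks://<workspace-host>:443/default;"
    "transportMode=http;ssl=1;httpPath=<http-path>;"
    "AuthMech=3;UID=token;PWD=<personal-access-token>"
)

df = spark.range(1_000_000).repartition(200)  # 200 partitions -> up to 200 concurrent statements

(df.write
    .format("jdbc")
    .option("driver", "com.databricks.client.jdbc.Driver")
    .option("url", jdbc_url)
    .option("dbtable", "<target_table>")
    .mode("append")
    .save())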


Cause

The number of concurrent asynchronous queries exceeds 100. By default, the Thrift server on the cluster cannot handle more than 100 concurrent Thrift/JDBC statements, so its background thread pool and wait queue fill up and any additional task is rejected with this error.
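The rejection behavior can be pictured as a fixed pool of execution threads plus a bounded wait queue: once both are full, the server turns new statements away instead of blocking. The following standalone Python sketch is only an analogy (it is not Databricks or Hive code), using default-like values of 100 threads and a 100-slot queue.

# Analogy only: a fixed worker pool plus a bounded wait queue that rejects
# anything beyond threads + queue_size, mirroring the behavior described above.
import queue

class AsyncExecPool:
    def __init__(self, threads=100, queue_size=100):
        self.free_workers = threads
        self.wait_queue = queue.Queue(maxsize=queue_size)

    def submit(self, statement):
        if self.free_workers > 0:
            self.free_workers -= 1                 # a free thread picks the statement up
            return "running"
        try:
            self.wait_queue.put_nowait(statement)  # park it until a thread frees up
            return "queued"
        except queue.Full:
            # What the JDBC client ultimately sees as a HiveSQLException:
            # "The background threadpool cannot accept new task for execution"
            raise RuntimeError("background threadpool cannot accept new task")

pool = AsyncExecPool()
for i in range(201):
    try:
        pool.submit(f"statement-{i}")
    except RuntimeError as err:
        print(f"statement-{i} rejected: {err}")    # the 201st submission is rejected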


Solution

In your cluster settings: 

  1. Scroll to Advanced options and click to expand.
  2. In the Spark tab, set the following configurations in the Spark config box to increase the concurrent query limit.


spark.hive.server2.async.exec.threads 200
spark.hive.server2.async.exec.wait.queue.size 200
spark.hive.server2.async.exec.keepalive.time 20


These settings increase the number of threads and the queue capacity for asynchronous execution, helping prevent thread pool exhaustion. Restart the cluster for the new configuration to take effect.
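After the cluster restarts, you can optionally read the values back from a notebook attached to it. The snippet below is a quick sanity check and assumes the built-in spark session available in Databricks notebooks.

# Optional check from a notebook on the restarted cluster: read the values
# back from the cluster's Spark configuration. Each line should print the
# value set above (200, 200, and 20 respectively).
conf = spark.sparkContext.getConf()

for key in (
    "spark.hive.server2.async.exec.threads",
    "spark.hive.server2.async.exec.wait.queue.size",
    "spark.hive.server2.async.exec.keepalive.time",
):
    print(key, "=", conf.get(key, "not set"))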