NucleusUserException error when trying to create an SQL warehouse cluster

Check the global init script and manually change the my.cnf file.

Written by satyadeepak.bollineni

Last published at: February 12th, 2025

Problem

When you create an external Hive metastore (HMS) using Amazon Aurora and then try to create a table using an SQL warehouse cluster, you receive an error message.

 

Error: org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container SEQUENCE_TABLE since autoCreate flags do not allow it

 

Cause

This error occurs specifically when Hive uses Amazon Aurora as its metastore database.

 

DataNucleus does not escape database names that contain dashes, such as 'db-name' or [db-name]. When the database catalog name has dashes, HiveServer2 cannot connect to the catalog because a critical command issued while starting Hive fails with a SQL syntax error.
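The following is a hypothetical illustration of the failure, run directly with the mysql client. The catalog name hive-metastore and the simplified SEQUENCE_TABLE columns are placeholders, not your actual metastore schema.

# Unescaped hyphenated catalog name: MySQL stops parsing at the dash.
mysql -e "CREATE TABLE hive-metastore.SEQUENCE_TABLE (SEQUENCE_NAME VARCHAR(255) NOT NULL, NEXT_VAL BIGINT NOT NULL)"
# ERROR 1064 (42000): You have an error in your SQL syntax ... near '-metastore.SEQUENCE_TABLE ...'

# The same statement parses once the name is quoted with backticks,
# but DataNucleus does not quote identifiers on your behalf.
mysql -e "CREATE TABLE \`hive-metastore\`.SEQUENCE_TABLE (SEQUENCE_NAME VARCHAR(255) NOT NULL, NEXT_VAL BIGINT NOT NULL)"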

 

Hive metastore logging shows the following message.

Could not create "increment"/"table" value-generation container SEQUENCE_TABLE since autoCreate flags do not allow it. org.datanucleus.exceptions.NucleusUserException: Could not create "increment"/"table" value-generation container SEQUENCE_TABLE since autoCreate flags do not allow it.   

 

Alternatively, your global init script may configure the metastore to point to a development environment while your warehouse points to a production environment.

 

Solution

First, check your global init script and warehouse to ensure they point to the same environment.
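As a minimal sketch of that check, assuming the init script sets the standard javax.jdo.option.ConnectionURL metastore property (the script path below is a placeholder; global init scripts are managed from the admin console):

# Locate the metastore JDBC URL that the init script configures.
grep -n "javax.jdo.option.ConnectionURL" /path/to/global-init-script.sh

# Compare the host and database in that URL with the metastore connection
# configured for your SQL warehouses. Both must reference the same
# environment (for example, the same Aurora endpoint and catalog name).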

 

Then, manually change the my.cnf file to make sure the database name is correctly configured in the MySQL settings. Avoid dashes (hyphens) in the database catalog name.
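If the existing catalog name contains a dash, one option is to recreate it with an underscore instead and point the metastore connection URL at the new name. The endpoint, user, and catalog name below are placeholders.

# Create the metastore catalog with an underscore rather than a dash.
mysql -h <aurora-endpoint> -u <admin-user> -p -e "CREATE DATABASE hive_metastore"
# Then update the metastore JDBC URL (javax.jdo.option.ConnectionURL) to
# reference hive_metastore instead of the hyphenated name.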

 

mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | MIXED |
+---------------+-------+

 

Verify the configuration in your MySQL settings by following the steps below.

 

  1. Find the MySQL config file my.cnf (/etc/my.cnf).
  2. Search for binlog_format. It may be commented out with #.
  3. Uncomment binlog_format, then create the Hive database again. (A command sketch follows this list.)
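A minimal command sketch of these steps, assuming my.cnf lives at /etc/my.cnf, the setting is commented out as #binlog_format, and you can edit my.cnf directly on the database host:

# Steps 1-2: locate my.cnf and check whether binlog_format is commented out.
grep -n "binlog_format" /etc/my.cnf

# Step 3: uncomment the setting, then restart MySQL so it takes effect.
sudo sed -i 's/^#[[:space:]]*binlog_format/binlog_format/' /etc/my.cnf
sudo systemctl restart mysqld

# Confirm the active value before recreating the Hive database.
mysql -e "SHOW VARIABLES LIKE 'binlog_format'"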