Introduction
This article shows how to use Databricks parameter syntax and job configuration so that parameters are passed smoothly into your jobs, even when they run on serverless SQL endpoints.
Instructions
Create a SQL notebook and define its parameters, execute the query, then configure a job whose parameter keys match the notebook's parameter names. When the job is triggered, its parameter values are automatically passed to the notebook, enabling dynamic, parameter-driven SQL execution aligned with your workflow settings.
Create a SQL notebook and define parameters
Create your notebook and set up the parameters in the UI. Setting up a parameter is like declaring a variable: you define only its name and value type.
1. Navigate to your workspace.
2. Click the Create button in the top right corner and select Notebook.
3. Within the notebook, click Edit in the top horizontal navigation, then choose Add parameter… from the dropdown menu.
4. Repeat step 3 for each parameter you want to add.
5. Clicking Add parameter… creates a field titled param_1. Click the cog above the field on the right side to expand the parameter details and specify them.
- Parameter name: the name you use to reference the widget in your code.
- Widget label: (Optional) UI label displayed above the widget for user clarity.
- Widget type: Specifies the widget input type. The choices are Text, Dropdown, Combobox, or Multiselect.
- Parameter type: Declares the expected data type for the widget value, such as String or Int. This only applies to resources attached to SQL warehouses.
- Default parameter value: (Optional) Default value for the parameter.
6. Click the cog again to close the parameter.
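If you prefer to define parameters in code rather than through the UI, Databricks SQL also supports creating widgets directly with the `CREATE WIDGET` statement. As a minimal sketch, the widget name and default value below are placeholders:

```sql
-- Create a text widget named table_name (hypothetical name) with a default value
CREATE WIDGET TEXT table_name DEFAULT "my_default_table";

-- Remove the widget when you no longer need it
REMOVE WIDGET table_name;
```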
Execute your query with defined parameter in a notebook
If you work with Databricks Runtime 15.2 and above, use the following query structure to invoke the parameter you named in the previous step. Replace <parameter-name> with your parameter name.
The `IDENTIFIER()` clause is required to parse strings as object identifiers such as names for databases, tables, views, functions, columns, and fields.
```sql
SELECT * FROM IDENTIFIER(:<parameter-name>);
```
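As an illustration, suppose you defined two parameters named table_name and min_amount (hypothetical names). The first names a table, so it must be wrapped in `IDENTIFIER()`; the second is an ordinary value and can be referenced directly:

```sql
-- :table_name resolves to a table identifier; :min_amount is used as a plain value
SELECT *
FROM IDENTIFIER(:table_name)
WHERE amount >= :min_amount;
```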
If you work with Databricks Runtime 11.3 LTS through 14.3 LTS, use the following syntax in your notebook. Replace <parameter-name> with your parameter name.

```sql
SELECT * FROM ${<parameter-name>};
```
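With the legacy `${...}` syntax, the parameter value is substituted as text into the query. Using the same hypothetical parameter names as above, the equivalent query would look like:

```sql
-- ${table_name} and ${min_amount} are substituted textually before the query runs
SELECT *
FROM ${table_name}
WHERE amount >= ${min_amount};
```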
Set up your job
After you execute your query in the notebook, create a job for it and provide the input parameters you set up previously. The values defined in the job, which can differ from job to job, are passed to each input parameter.
1. Navigate to Jobs & pipelines (formerly Workflows) in the sidebar.
2. Within Jobs & pipelines, click the Create button in the top right corner.
3. Choose Job from the dropdown menu.
4. Within the job creation UI:
- Task name: Choose your own task name.
- Type: Select Notebook.
- Source: Select Workspace.
- Path: Specify the path of the notebook you created previously.
- Compute: Select Serverless.
- Parameters: Provide input parameters whose keys match the parameter names you added to the notebook.
5. Click Save task to finish creating the job.
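If you create the job programmatically rather than through the UI, the same configuration can be sketched as a Jobs API task definition. This is an assumption-laden illustration: the job name, notebook path, and parameter values below are placeholders, and the notebook parameter key (`table_name`) must match the name defined in the notebook:

```json
{
  "name": "run-parameterized-notebook",
  "tasks": [
    {
      "task_key": "run_notebook",
      "notebook_task": {
        "notebook_path": "/Workspace/Users/you@example.com/my_notebook",
        "base_parameters": {
          "table_name": "sales_2024"
        }
      }
    }
  ]
}
```

Because no cluster is specified for the task, serverless compute is used, matching the Compute selection described above.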