Problem
When you try to access mount paths backed by an Azure Data Lake Storage (ADLS) Gen2 storage account, you receive the following error message.
DatabricksServiceException: IO_ERROR: HTTP Error 401; url='https://login.microsoftonline.com/xxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxx/oauth2/token' AADToken: HTTP connection to https://login.microsoftonline.com/xxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxx/oauth2/token failed for getting token from AzureAD.; requestId='<request-id>'; contentType='application/json; charset=utf-8'; response '{'error':'invalid_client','error_description':'AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app '<app>'. Trace ID: <trace-id> Correlation ID: <correlation-id> Timestamp: 2025-06-03 14:45:15Z','error_codes':[7000215],'timestamp':'2025-06-03 14:45:15Z','trace_id':'<trace-id>','correlation_id':'<correlation-id>','error_uri':'https://login.microsoftonline.com;
Cause
The client secret used to authenticate the service principal (SP) with Azure Storage has expired.
When the secret expires, Microsoft Entra ID (formerly Azure Active Directory) rejects the token request, so the credentials stored in the mount configuration are no longer valid, resulting in the observed HTTP Error 401.
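You can confirm that the secret has expired before rotating it. A minimal sketch using the Azure CLI, assuming you are logged in with permission to read the app registration; `<application-id>` is a placeholder for your SP's application (client) ID:

```shell
# Placeholder: replace with the application (client) ID of the service principal.
APP_ID="<application-id>"

# List the app registration's client secrets with their expiry timestamps.
# A secret whose endDateTime is in the past is the likely cause of the 401.
az ad app credential list --id "$APP_ID" \
  --query "[].{keyId:keyId, expires:endDateTime}" -o table
```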
Solution
- Generate a new client secret for the service principal (SP) in Microsoft Entra ID (formerly Azure Active Directory). Securely save the new secret value (not the secret ID), as it is used in subsequent steps.
- In a Databricks notebook, unmount the storage account mount path that was configured using the SP with the expired secret. Execute:
dbutils.fs.unmount("/mnt/<your-mount-path>")
- After unmounting, run
dbutils.fs.refreshMounts()
on all other running clusters to ensure that the mount changes are propagated.
- Recreate the mount path using the same SP but with the new client secret. Update your mount configuration with the new secret and execute the mount command. For details, refer to the Mounting cloud object storage on Azure Databricks documentation.
- Once remounted, run
dbutils.fs.refreshMounts()
on all running clusters again to propagate the updated mount configuration, ensuring consistency.
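The unmount-and-remount steps above can be sketched as a single helper run in a Databricks notebook. This is a sketch under stated assumptions, not the only valid configuration: `<application-id>`, `<directory-id>`, `<container>`, `<storage-account>`, `/mnt/<your-mount-path>`, and the secret scope `my-scope` with key `sp-client-secret` are all placeholders you must replace, and the secret is assumed to be stored in a Databricks secret scope rather than hard-coded.

```python
def remount_with_new_secret(dbutils):
    """Unmount the stale ADLS Gen2 mount and recreate it with the new client secret.

    `dbutils` is the Databricks utilities object available in notebooks.
    """
    # OAuth configuration for ADLS Gen2 with a service principal.
    # fs.azure.account.oauth2.client.secret must be the secret *value*,
    # not the secret ID; sending the ID triggers the AADSTS7000215 error
    # shown in the Problem section above.
    configs = {
        "fs.azure.account.auth.type": "OAuth",
        "fs.azure.account.oauth.provider.type":
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
        "fs.azure.account.oauth2.client.id": "<application-id>",
        # Assumption: the new secret is stored in a secret scope named
        # "my-scope" under the key "sp-client-secret" (both hypothetical).
        "fs.azure.account.oauth2.client.secret":
            dbutils.secrets.get(scope="my-scope", key="sp-client-secret"),
        "fs.azure.account.oauth2.client.endpoint":
            "https://login.microsoftonline.com/<directory-id>/oauth2/token",
    }

    # Drop the mount that still carries the expired secret.
    dbutils.fs.unmount("/mnt/<your-mount-path>")

    # Recreate it with the fresh credentials.
    dbutils.fs.mount(
        source="abfss://<container>@<storage-account>.dfs.core.windows.net/",
        mount_point="/mnt/<your-mount-path>",
        extra_configs=configs,
    )

    # Refresh mounts on this cluster; as noted above, refreshMounts()
    # must still be run on every other running cluster.
    dbutils.fs.refreshMounts()
```

Storing the secret in a secret scope (rather than pasting it into the notebook) keeps the new value out of notebook history and cluster logs.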