HTTP Error 401 when trying to access the ADLS mount path from Databricks

Generate a new client secret for the service principal in Microsoft Entra ID and recreate the mount path.

Written by saikumar.divvela

Last published at: July 23rd, 2025

Problem

When you try to access mount paths backed by an Azure Data Lake Storage (ADLS) Gen2 storage account, you receive the following error message. 

DatabricksServiceException: IO_ERROR: HTTP Error 401; url='https://login.microsoftonline.com/xxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxx/oauth2/token' AADToken: HTTP connection to https://login.microsoftonline.com/xxxxxxx-xxxx-xxxx-xxx-xxxxxxxxxxx/oauth2/token failed for getting token from AzureAD.; requestId='<request-id>'; contentType='application/json; charset=utf-8'; response '{'error':'invalid_client','error_description':'AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app '<app>'. Trace ID: <trace-id> Correlation ID: <correlation-id> Timestamp: 2025-06-03 14:45:15Z','error_codes':[7000215],'timestamp':'2025-06-03 14:45:15Z','trace_id':'<trace-id>','correlation_id':'<correlation-id>','error_uri':'https://login.microsoftonline.com;

 

Cause

The client secret used to authenticate the service principal (SP) with Azure Storage has expired.

When the secret expires, Microsoft Entra ID (formerly Azure Active Directory) rejects the token request, so the mount configured with that secret can no longer authenticate, resulting in the observed HTTP Error 401.

Solution

  1. Generate a new client secret for the service principal (SP) in Microsoft Entra ID (formerly Azure Active Directory). Store the new secret value securely, because it is needed in the following steps.

  2. In a Databricks notebook, unmount the storage account mount path that was configured using the SP with the expired secret by running dbutils.fs.unmount("/mnt/<your-mount-path>").

  3. After unmounting, run dbutils.fs.refreshMounts() on all other running clusters to ensure the mount change is propagated.

  4. Recreate the mount path using the same SP with the new client secret. Update your mount configuration with the new secret value and execute the mount command. For details, refer to the Mounting cloud object storage on Azure Databricks documentation.

  5. Once remounted, run dbutils.fs.refreshMounts() on all running clusters again to propagate the updated mount configuration.
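The unmount-and-remount sequence above can be sketched in notebook Python. This is a hedged sketch, not the official procedure: the placeholder values (`<application-id>`, `<directory-id>`, the mount point, and the storage source URI) and the helper names `build_oauth_configs` and `remount` are illustrative assumptions, and the `dbutils.fs.mount` call only works inside a Databricks notebook where `dbutils` exists.

```python
def build_oauth_configs(application_id: str, directory_id: str, client_secret: str) -> dict:
    """Build the OAuth Spark configs used when mounting ADLS Gen2 with a
    service principal client secret. The secret must be the secret VALUE,
    not the secret ID (the AADSTS7000215 error is raised otherwise)."""
    return {
        "fs.azure.account.auth.type": "OAuth",
        "fs.azure.account.oauth.provider.type":
            "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
        "fs.azure.account.oauth2.client.id": application_id,
        "fs.azure.account.oauth2.client.secret": client_secret,
        "fs.azure.account.oauth2.client.endpoint":
            f"https://login.microsoftonline.com/{directory_id}/oauth2/token",
    }


def remount(dbutils, mount_point: str, source: str, configs: dict) -> None:
    """Hypothetical helper: drop the stale mount if present, then recreate
    it with the refreshed configs. Only runnable on Databricks."""
    if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
        dbutils.fs.unmount(mount_point)
    dbutils.fs.mount(source=source, mount_point=mount_point, extra_configs=configs)
    # Remember: on every OTHER running cluster, also run
    # dbutils.fs.refreshMounts() so the change is picked up.
```

In a notebook you would then call, for example, `remount(dbutils, "/mnt/<your-mount-path>", "abfss://<container>@<account>.dfs.core.windows.net/", build_oauth_configs(...))`, ideally reading the new secret from a Databricks secret scope rather than hard-coding it.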