By default, the MLflow client saves artifacts to an artifact store URI during an experiment. The artifact store URI is similar to /dbfs/databricks/mlflow-tracking/<experiment-id>/<run-id>/artifacts/.
This artifact store is an MLflow-managed location, so you cannot download artifacts from it directly.
You must use the MLflow client's download_artifacts method to copy artifacts from the artifact store to another storage location.
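If you are not sure which artifacts a run contains, you can list them before downloading. The following is a minimal sketch using the MLflow client's list_artifacts method; replace <run-id> with the run_id of the MLflow run you want to inspect.

%python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# List the artifacts logged under the run's root artifact directory.
for artifact in client.list_artifacts("<run-id>"):
    print(artifact.path, artifact.is_dir, artifact.file_size)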
Example code
This example code downloads the MLflow artifacts from a specific run and stores them in the location specified as local_dir.
Replace <local-path-to-store-artifacts> with the local path where you want to store the artifacts.
Replace <run-id> with the run_id of your specified MLflow run.
%python
import mlflow
import os
from mlflow.tracking import MlflowClient

client = MlflowClient()
local_dir = "<local-path-to-store-artifacts>"
if not os.path.exists(local_dir):
    os.mkdir(local_dir)

# Create a sample artifact "features.txt".
features = "rooms, zipcode, median_price, school_rating, transport"
with open("features.txt", 'w') as f:
    f.write(features)

# Create a sample MLflow run and log the artifact "features.txt" to it.
with mlflow.start_run() as run:
    mlflow.log_artifact("features.txt", artifact_path="features")

# Download the artifact to local storage.
local_path = client.download_artifacts("<run-id>", "features", local_dir)
print("Artifacts downloaded in: {}".format(local_dir))
print("Artifacts: {}".format(os.listdir(local_path)))
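Newer MLflow versions (1.25 and above) also provide a mlflow.artifacts.download_artifacts helper that performs the same download without constructing a client. The following is a minimal sketch using the same placeholders as the example above; verify that this API is available in your installed MLflow version.

%python
import mlflow

# Download the "features" artifact directory from the run to a local path.
local_path = mlflow.artifacts.download_artifacts(
    run_id="<run-id>",
    artifact_path="features",
    dst_path="<local-path-to-store-artifacts>",
)
print("Artifacts downloaded in: {}".format(local_path))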
After the artifacts have been downloaded to local storage, you can copy (or move) them to an external filesystem or a mount point using standard tools.
Copy to an external filesystem
%scala
// Replace <local-path-to-store-artifacts> with the local_dir value used above.
// The file: scheme tells dbutils.fs to read from local storage on the driver.
dbutils.fs.cp("file:<local-path-to-store-artifacts>", "<filesystem://path-to-store-artifacts>")
Move to a mount point
%python
import shutil

# Replace <path-to-store-artifacts> with the destination path under the mount point.
shutil.move(local_dir, "/dbfs/mnt/<path-to-store-artifacts>")
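To confirm the move succeeded, list the destination directory. A minimal sketch, assuming the same mount point path as above:

%python
import os

# The /dbfs FUSE mount exposes DBFS paths as local files on the driver.
print(os.listdir("/dbfs/mnt/<path-to-store-artifacts>"))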