Getting node-specific instead of cluster-wide memory usage data from system.compute.node_timeline

Join the table node_timeline with the table node_types in your query.

Written by anshuman.sahu

Last published at: July 25th, 2025

Problem

When you try to programmatically get a cluster’s memory usage using system.compute.node_timeline, you get node-specific data instead of cluster-wide data.
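To see the shape of the raw data, you can query node_timeline on its own. The following is only an illustrative sketch; it returns one row per node per minute, with memory reported as a utilization percentage (mem_used_percent) rather than an absolute amount.

-- Per-node rows with memory as a percentage only (no absolute figure).
select cluster_id, instance_id, start_time, end_time, mem_used_percent
from system.compute.node_timeline
order by start_time desc
limit 10;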


Cause

Determining cluster-wide memory usage is not possible with a single system table. The node_timeline table records metrics per node, and its mem_used_percent column is only a utilization percentage; each node's total memory (memory_mb) lives in the separate node_types table.
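The missing piece is each node type's hardware profile. As a quick check, this sketch lists the total memory recorded for each node type in node_types:

-- Total memory per node type, which node_timeline's percentages apply to.
select node_type, memory_mb
from system.compute.node_types
order by memory_mb desc;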


Solution

To compute memory usage in MB, join the table node_timeline with the table node_types. Run the following query in a notebook, through a job, or with Databricks SQL.

-- Convert each node's memory utilization percentage into absolute megabytes
-- using the node type's total memory (memory_mb) from node_types.
select cluster_id, instance_id, start_time, end_time, round(mem_used_percent / 100 * node_types.memory_mb, 0) as mem_used_mb
from system.compute.node_timeline
join system.compute.node_types using (node_type)
order by start_time desc;
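If you need one row per cluster per time window rather than one row per node, you can aggregate the joined result. The following is a sketch, assuming the per-minute windows in node_timeline line up across a cluster's nodes:

-- Sum per-node usage into a cluster-wide figure for each one-minute window.
select cluster_id, start_time, end_time,
       round(sum(mem_used_percent / 100 * node_types.memory_mb), 0) as cluster_mem_used_mb,
       sum(node_types.memory_mb) as cluster_mem_total_mb
from system.compute.node_timeline
join system.compute.node_types using (node_type)
group by cluster_id, start_time, end_time
order by start_time desc;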


For more information, refer to the Compute system tables reference (AWS, Azure, GCP) documentation.