
databricks memory metrics graph - Stack Overflow


I am trying to understand the following graph Databricks is showing me, and failing:

What is that constant lightly shaded area close to 138GB? It is not explained in the "Usage type" legend. The job runs entirely on the driver node, not utilizing any of the Spark worker nodes; it's just a Python script. I know the ~138GB memory usage is real, because the job was failing on a 128GB driver node and seems to be happy on a 256GB driver.

It is a race between SO and the Databricks Community now! https://community.databricks.com/t5/administration-architecture/help-undersanding-ram-utilization-graph/m-p/112864#M3139


asked Mar 14 at 15:15 by MK., edited Mar 17 at 23:53

1 Answer


One thing that might explain the discrepancy in the total memory numbers: if you leave the Compute drop-down at its default, it averages the metrics across all nodes in the cluster, so a driver sitting at ~138GB averaged with several mostly idle workers would show a much lower per-node figure. Make sure you select just the driver node when you want to see metrics for that node alone.

Cluster Metrics - View Metrics at the Node Level
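As an independent sanity check on what the cluster metrics UI reports, you can also log actual memory usage from inside the driver-side Python script itself. A minimal sketch using psutil (assumed to be installed on the driver; the log_driver_memory helper and its call sites are illustrative, not part of any Databricks API):

    import psutil

    def log_driver_memory(tag: str = "") -> None:
        # System-wide memory on the driver node.
        vm = psutil.virtual_memory()
        # Resident set size of this Python process.
        rss = psutil.Process().memory_info().rss
        print(
            f"[{tag}] total={vm.total / 2**30:.1f} GiB, "
            f"used={vm.used / 2**30:.1f} GiB, "
            f"available={vm.available / 2**30:.1f} GiB, "
            f"process RSS={rss / 2**30:.1f} GiB"
        )

    # Call around the steps suspected of driving the ~138GB peak, e.g.:
    log_driver_memory("before load")
    # ... load / process data ...
    log_driver_memory("after load")

If the process RSS tracks the ~138GB plateau, the shaded band in the graph reflects the script's real allocation on the driver rather than an artifact of averaging across nodes.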
