
pyspark - Spark Application Fails Every 50 Days – Driver Memory Shows 98.1 GB / 19.1 GB - Stack Overflow


I am facing an issue where my Spark application fails approximately once every 50 days. However, I don’t see any errors in the application logs. The only clue I found is in the NodeManager logs, which show the following error:

WARN org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exception from container-launch with container ID: container_e225_1708884103504_1826568_02_000002 and exit code: 1
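
For completeness, this is roughly how the full aggregated container logs could be pulled and scanned after the application goes down, assuming YARN log aggregation is enabled, the yarn CLI is on the PATH, and using the application ID embedded in the container ID above:

    # Sketch: dump the aggregated YARN logs for the whole application and scan
    # them for the usual failure markers. The application ID below is the one
    # embedded in the failed container ID from the NodeManager warning.
    import subprocess

    APP_ID = "application_1708884103504_1826568"

    logs = subprocess.run(
        ["yarn", "logs", "-applicationId", APP_ID],
        capture_output=True, text=True, check=False,
    ).stdout

    for line in logs.splitlines():
        if any(marker in line for marker in ("ERROR", "Exception", "OutOfMemory", "Killed")):
            print(line)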

After the restart, I checked the memory usage of both the executor and the driver. In the Spark UI, the driver's memory usage appears unusual: it's showing 98.1 GB / 19.1 GB.
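
Because the figure only becomes unusual long after launch, it probably needs to be recorded over time rather than read once from the UI. Below is a minimal sketch that polls the Spark monitoring REST API for the same storage-memory numbers the UI displays; the driver UI address (default port 4040, proxied through the ResourceManager on YARN) and the hourly interval are assumptions on my side:

    # Sketch: periodically log the "memory used / total available" numbers that
    # the Spark UI shows for the driver and each executor, using the monitoring
    # REST API. UI_URL and the sampling interval are assumptions.
    import time
    import requests

    UI_URL = "http://driver-host:4040"  # adjust to the driver UI or the YARN proxy URL
    GB = 1024 ** 3

    def log_storage_memory():
        apps = requests.get(f"{UI_URL}/api/v1/applications", timeout=10).json()
        app_id = apps[0]["id"]
        executors = requests.get(
            f"{UI_URL}/api/v1/applications/{app_id}/executors", timeout=10
        ).json()
        for e in executors:  # the driver is reported with id "driver"
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {e['id']:>8} "
                  f"memoryUsed={e['memoryUsed'] / GB:.1f} GB "
                  f"maxMemory={e['maxMemory'] / GB:.1f} GB "
                  f"rddBlocks={e['rddBlocks']}")

    while True:
        log_storage_memory()
        time.sleep(3600)  # hourly is plenty for a failure that takes ~50 days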

  • My Spark version is 2.4.0.

My Questions:

  • What does 98.1 GB / 19.1 GB in the Spark UI Storage tab for the driver indicate?
  • Could this excessive driver memory usage be the reason for my application's failure?
  • How can I debug or find the root cause of why my application fails once every 50 days?

Any insights or suggestions would be greatly appreciated!
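
For context on the third question: if the driver heap itself is the suspect, it can at least be sampled from inside the application through the Py4J gateway. A minimal sketch follows; note that sparkContext._jvm is an internal PySpark handle, and the app name and call site are placeholders rather than taken from my actual job:

    # Sketch: sample the driver JVM heap from inside a PySpark application.
    # sparkContext._jvm is an internal Py4J handle, so treat this purely as a
    # debugging aid; the app name and call frequency are placeholders.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("driver-heap-probe").getOrCreate()
    GB = 1024 ** 3

    def log_driver_heap():
        runtime = spark.sparkContext._jvm.java.lang.Runtime.getRuntime()
        used = (runtime.totalMemory() - runtime.freeMemory()) / GB
        print(f"driver heap: used={used:.1f} GB, "
              f"committed={runtime.totalMemory() / GB:.1f} GB, "
              f"max={runtime.maxMemory() / GB:.1f} GB")

    # Call this at a convenient point in the long-running job, e.g. once per batch.
    log_driver_heap()

Alongside that, adding -XX:+HeapDumpOnOutOfMemoryError (plus a HeapDumpPath) to spark.driver.extraJavaOptions at submit time should leave a heap dump behind if the driver really does run out of memory between restarts.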
