
Node.js process running on a Kubernetes pod spawning multiple child processes


We are running a Node.js application which, for all intents and purposes, is a standard single-threaded Node app that delegates I/O-bound tasks to the libuv threadpool. What I have just noticed is that when calling either

pstree -p

OR

ps 1 -T

(assuming that the main node process is pid 1), we will always see 10 child processes spawned. This is happening to all of our Node apps, regardless of technology (some use Express, some use Astro, some use Svelte - all are spawning 10 child processes).
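One way to check whether the extra entries reported by pstree -p / ps -T are threads of the main process or genuinely forked child processes is a minimal sketch like the one below, run inside the pod. It assumes the main process really is pid 1 and that the kernel exposes /proc/&lt;pid&gt;/task/&lt;pid&gt;/children (most modern kernels do).

// Compare the kernel's thread count for pid 1 with its list of forked children.
const fs = require('fs');

const status = fs.readFileSync('/proc/1/status', 'utf8');
const threads = status.match(/^Threads:\s+(\d+)/m)[1];

// /proc/<pid>/task/<pid>/children lists forked child PIDs (empty if there are none)
const children = fs.readFileSync('/proc/1/task/1/children', 'utf8').trim();

console.log('threads in pid 1:', threads);
console.log('child processes of pid 1:', children || '(none)');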

What I am trying to get to the bottom of is: why is this happening? And would we be better off setting --max-workers=1 and just increasing the size of the libuv threadpool (UV_THREADPOOL_SIZE)?

My hypothesis right now, and I accept it may be way off, is this: we run our pods on nodes with at least 16 vCPUs and 64GB of memory. In the absence of an explicit --max-workers setting, my guess is that the Node runtime decides how many workers it should spawn. However, certain Linux commands report on the node the pod runs on, while others report on the pod only. For example, calling free or top gives information about the node the pod runs on, as opposed to cat /sys/fs/cgroup/memory/memory.limit_in_bytes, which reflects the pod. The hypothesis is that because --max-workers is not explicitly set, Node tries to determine it at runtime, but whatever it consults to decide actually reflects the node the pod runs on, not the pod's configuration (pods are usually configured with just 1000 millicores).
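To help pin that hypothesis down, here is a minimal sketch (run inside the pod) that prints the CPU count Node itself sees next to the pod's cgroup CPU quota. The cgroup paths assume the v1 layout already referenced above (on cgroup v2 the equivalent file is /sys/fs/cgroup/cpu.max), and os.availableParallelism() needs a reasonably recent Node, hence the fallback.

// What Node sees vs what the pod is actually limited to.
const os = require('os');
const fs = require('fs');

console.log('CPUs Node sees:',
  os.availableParallelism ? os.availableParallelism() : os.cpus().length);

const quota  = Number(fs.readFileSync('/sys/fs/cgroup/cpu/cpu.cfs_quota_us', 'utf8'));
const period = Number(fs.readFileSync('/sys/fs/cgroup/cpu/cpu.cfs_period_us', 'utf8'));

// quota is -1 when no limit is set; 100000/100000 = 1 CPU for a 1000m pod
console.log('pod CPU limit:', quota > 0 ? quota / period : 'unlimited');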

I'm finding it difficult to prove this one way or the other. The other hypothesis is that the Node engine just spawns a child worker process when the libuv threadpool is exhausted. Neither --max-workers nor UV_THREADPOOL_SIZE is explicitly set by any of these applications.
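A rough experiment for that second hypothesis, assuming the /proc layout described earlier: saturate the libuv threadpool (crypto.pbkdf2 runs on it) and watch the process's own thread count and child list. If the pool grew or child processes were forked on exhaustion, the numbers would keep climbing; with a fixed pool they should plateau.

const crypto = require('crypto');
const fs = require('fs');

const threadCount = () =>
  Number(fs.readFileSync('/proc/self/status', 'utf8').match(/^Threads:\s+(\d+)/m)[1]);

console.log('threads before load:', threadCount());

// 64 concurrent threadpool jobs against a default pool of 4 threads
for (let i = 0; i < 64; i++) {
  crypto.pbkdf2('secret', 'salt', 200000, 64, 'sha512', () => {});
}

setTimeout(() => {
  console.log('threads under load:', threadCount());
  // forked child processes, if any, would show up here
  const children = fs
    .readFileSync(`/proc/self/task/${process.pid}/children`, 'utf8')
    .trim();
  console.log('child processes:', children || '(none)');
}, 500);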

Right now we are planning to experiment with basing --max-workers on the pod's CPU size, which would nearly always come out to 2, and also with fixing --max-workers at 1 and just increasing UV_THREADPOOL_SIZE to, say, 64.
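For the UV_THREADPOOL_SIZE part of that experiment, a minimal sketch of the entry point might look like the following. Setting the variable in the pod/container environment is the more robust option; setting it at the very top of the entry file generally also works because the pool is created lazily, but that is an assumption worth verifying on your Node version. The ./app module name is just a placeholder.

// Must run before anything submits work to the libuv threadpool.
if (!process.env.UV_THREADPOOL_SIZE) {
  process.env.UV_THREADPOOL_SIZE = '64'; // value from the experiment described above
}

// ...rest of the application entry point; load the app only after the env is set
require('./app'); // hypothetical entry module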

It would be good to get some feedback on where these 10 child processes are coming from and what the recommended next steps would be. This is making the pods harder to monitor, because they are OOMKilled before the monitored process gets anywhere near its configured max heap size. Typically the pods have 2GB of memory, and the max heap size is therefore fixed at 1536MB. When we check the actual consumed memory via /sys/fs/cgroup/memory/memory.actual_usage_bytes, we see it is far higher than what the monitored process reports. The monitored process periodically dumps the output of v8.getHeapStatistics, and it is hard to make these numbers match up. We believe the reason for the lack of correlation is all of these child processes: each one has its own heap and its own memory/CPU usage, and that is why it is not reflected in the stats for the main process.
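As a starting point for that correlation work, here is a minimal sketch of a periodic dump that puts the V8 heap statistics next to process.memoryUsage() and the cgroup's own usage counter, which is the figure the OOM killer actually compares against the pod limit. The cgroup path below is the standard v1 usage_in_bytes file; adjust it to whichever file your pods expose.

const v8 = require('v8');
const fs = require('fs');

setInterval(() => {
  const heap = v8.getHeapStatistics();
  const mem = process.memoryUsage();
  const cgroup = Number(
    fs.readFileSync('/sys/fs/cgroup/memory/memory.usage_in_bytes', 'utf8'));

  const mb = (n) => Math.round(n / 1024 / 1024) + 'MB';
  console.log({
    heapUsed: mb(heap.used_heap_size),
    heapLimit: mb(heap.heap_size_limit),
    rss: mb(mem.rss),          // includes native memory, thread stacks, etc.
    external: mb(mem.external),
    cgroupUsage: mb(cgroup),   // what gets compared against the pod's memory limit
  });
}, 60000);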
