
docker - Why does my Python build on OpenShift consume too much host disk space? - Stack Overflow


I'm running a Python application on OpenShift, and I've noticed that the build process is consuming an excessive amount of disk space on the host. The build is evicted during the "Exporting" step, specifically at "Copying blob," and the metrics indicate that 20 GB of filesystem space was used before the eviction.

Here are some details about my setup:

- Python version: 3.11
- OpenShift version: 4.15.0-0.okd-2024-03-10-010116 (OKD)
- Build strategy: Dockerfile
- Dependencies: transformers, pytorch-lightning, mlflow, protobuf, and some Flask libraries
- Build process: a BuildConfig, so the build runs on the cluster and the app code is pulled from GitLab

I've already tried to optimize the image size by double-checking the Dockerfile and removing unnecessary items, and I've experimented with different package managers such as pip and Poetry.
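
For reference, here is a minimal multi-stage sketch of the kind of slimming that usually matters most for this dependency set: the CPU-only PyTorch wheel alone is typically several gigabytes smaller than the default CUDA build. The base images, `requirements.txt`, and `app.py` entry point are placeholder assumptions, not the actual project files:

```dockerfile
# Build stage: install dependencies with no pip cache to cut build-time disk use.
FROM python:3.11-slim AS build
WORKDIR /app
COPY requirements.txt .
# CPU-only torch wheels are far smaller than the default CUDA builds;
# drop this line if the app actually needs GPU support.
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu \
 && pip install --no-cache-dir -r requirements.txt

# Runtime stage: copy only the installed packages, leaving pip's build leftovers behind.
FROM python:3.11-slim
WORKDIR /app
COPY --from=build /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --from=build /usr/local/bin /usr/local/bin
COPY . .
CMD ["python", "app.py"]
```

Note that multi-stage builds shrink the final image, but the build host still holds both stages while the build runs; the `--no-cache-dir` flags and the smaller wheels are what reduce peak disk usage during the build itself.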

Locally, the final image size is 10.3 GB.

My questions are:

1. Is my build really being evicted because of excessive filesystem usage?
2. How can I manage this kind of build to avoid eviction?

Thank you!


asked yesterday by Guido Mista, edited 19 hours ago
  • Consider running this locally and inspecting the size of your filesystem layers -- does that align with the sizes openshift reports? If it does, you can look at the layers themselves and see with your own eyes which files are taking up space within them. – Charles Duffy Commented yesterday
  • Depending on implementation details, I can certainly see creation of a 10gb final image overrunning a 20gb limit. For example, if (and this is admittedly speculation) the quota were enforced at a block level (creating a LVM snapshot and mounting the child partition for write), then any interim changes made will count against the quota even if those temporary files &c are later deleted. 10gb is a lot -- I'd hope that by inspecting where the storage is going you could bring the image down to 2-3gb; then even if there's a 3x factor of working-space to final space you'll still have ample margin. – Charles Duffy Commented 17 hours ago
  • (block-layer storage isn't Docker's default configuration, but it is an available configuration, and there are sometimes good reasons to use it -- more reliably POSIX-compliant filesystem semantics, for one; I'd need to know more about OpenShift to know how they configure things). – Charles Duffy Commented 17 hours ago

1 Answer


I'm new to BuildConfig. Is the build running on ephemeral storage? If so, it could be exhausting the root volume of the Kubernetes node. I found this, possibly related. I also found some options here for using volumes; maybe that helps?
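
On the ephemeral-storage point: OpenShift build pods use node-local ephemeral storage by default, and a `BuildConfig` accepts the standard Kubernetes `resources` block, so you can at least request enough ephemeral storage that the scheduler places the build on a node with room, and cap it so an overrun fails the build rather than pressuring the whole node. This is a sketch with placeholder names, sizes, and Git URL, not a tested configuration:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-python-app            # placeholder name
spec:
  source:
    type: Git
    git:
      uri: https://gitlab.example.com/group/my-python-app.git   # placeholder URL
  strategy:
    type: Docker
    dockerStrategy: {}
  resources:
    requests:
      ephemeral-storage: "25Gi"  # schedule only on nodes with this much free
    limits:
      ephemeral-storage: "40Gi"  # exceeding this fails the build pod instead of the node
  output:
    to:
      kind: ImageStreamTag
      name: my-python-app:latest
```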
