
Using a tarball as docker parent image inside a Dockerfile without docker load - Stack Overflow


I have a docker image I've saved as a .tar using docker save and I want to use it as a parent image FROM tarball_image.tar inside a Dockerfile without doing docker load.

At deployment time, I only upload files to a Git Repo and an automated system that just takes a Dockerfile and builds the image based on the files in the repo. There's no access to the docker engine nor can I do docker load etc. The tarball image is managed with Git LFS (although I can find the direct URL that points to the raw binary file). Additionally, the tarball image can't be uploaded in a public repo/hub. I just have the Dockerfile and the tarball in the git repo and an automatic agent takes the Dockerfile and builds it.

Note that I do not have access to the docker commands that are run to build the image so nothing can change there either.


asked Jan 17 at 19:41 by PentaKon, edited Jan 20 at 16:32
  • Is the file a raw filesystem, or is it a container image save (with a manifest.json in the root of the tar)? In other words, does it load with docker load or do you need to use docker import? – BMitch Commented Jan 17 at 21:32
  • It's a tarball that I generated using docker save so I guess it can be loaded using docker load if I'm not mistaken. – PentaKon Commented Jan 20 at 13:48
  • Sorry, now I got what you mean. The tarball is an image with a manifest.json, blobs folder etc. – PentaKon Commented Jan 20 at 15:17
  • What parts of the process do you control? It seems you control the Dockerfile but not the rest of the build context. Do you control the creation of the tar file? Do you control the docker build command line? – BMitch Commented Jan 20 at 15:52
  • I only control the Dockerfile and the files uploaded to the filesystem that will be used during docker build. However, the filesystem is essentially a Git Repo so any large files such as an image tar is uploaded using Git LFS. I can find the direct URL to the image if that's needed, but not sure how Git LFS interacts when being worked on as a normal file system. I edited the question to reflect that. – PentaKon Commented Jan 20 at 16:30

2 Answers


There are a number of options, many of which aren't available to you.

Typically, you would want to push the image to a registry, including free options like Docker Hub and GitHub's ghcr.io. However, it sounds like you do not have the ability to do this.
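If pushing to a registry were an option, a minimal sketch would look like this (the registry and image name below are placeholders, and it assumes you are already logged in with docker login):

docker tag tarball_image ghcr.io/youruser/yourimage:latest
docker push ghcr.io/youruser/yourimage:latest

The Dockerfile would then simply start with FROM ghcr.io/youruser/yourimage:latest.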


The next best option, and the one that most closely matches your problem, is to pass an additional build context to buildx. Recent versions of the docker engine include the OCI Layout files (oci-layout and index.json) in the docker save output, which makes this option even easier.

docker buildx build --build-context foo=oci-layout:///path/to/local/layout:<tag> ...

Which you could then reference as foo in the Dockerfile:

FROM foo
...

Details on using additional contexts are available at: https://docs.docker.com/reference/cli/docker/buildx/build/#source-oci-layout
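As a sketch, the layout directory could be produced straight from your saved tar, assuming it was created by a docker version recent enough to include oci-layout and index.json (the directory path and tag below are placeholders):

mkdir -p /tmp/layout
tar -xf tarball_image.tar -C /tmp/layout
docker buildx build --build-context foo=oci-layout:///tmp/layout:<tag> .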

However, since you do not control the build command line, I don't believe this option is available to you either.


Without either of the above, and since it appears you can control the tar output, you'll want to use a docker export instead of docker save for generating the tar.

docker create --name container-to-export $image
docker export container-to-export >export.tar

If the content was in a registry, you could also use Google's crane tool to perform a crane export $image export.tar. For details, see the crane documentation.

With that tar file, the easiest way to use it would be if the file was available in your build context so that you can add it like:

FROM scratch
ADD export.tar /

If you need to pull the tar from an external URL, then I'd still recommend using ADD to pull the tar for better cache management (Docker will perform a conditional fetch of the tar, and use the cache if it has already seen the tar file):

FROM busybox as export
ADD http://$url/export.tar /export.tar
RUN mkdir /export && tar -xvf /export.tar -C /export

FROM scratch
COPY --from=export /export /
# ...

If you didn't have control of the tar file, and needed to import directly from the docker save output, then I'd look at building a tool for the task. The OCI image-spec documents the OCI Layout structure, the Index for parsing the index.json and potentially a nested index manifest, the Manifest for parsing the image specific manifest, and the Layer details including how change sets and whiteouts are applied. You would then run that tooling as the first stage of the build, to generate the /export directory as before, and then use it in a later stage.
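To give a sense of what that involves, here is a very rough shell sketch that applies the layers listed in a docker save manifest.json in order (the file names are assumptions, and it ignores whiteout files and nested OCI indexes, so it is not a complete implementation):

mkdir -p /tmp/image /export
tar -xf image.tar -C /tmp/image
# manifest.json lists the layer tarballs from the base layer upwards
for layer in $(jq -r '.[0].Layers[]' /tmp/image/manifest.json); do
  tar -xf "/tmp/image/$layer" -C /export
done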

The Dockerfile ADD directive has a couple of capabilities that COPY doesn't. Of note, it can unpack and uncompress tar files from the build context, without requiring any particular tools in the image. You can combine this with the special scratch image that contains absolutely nothing at all to produce an image that only contains the contents of your tar file.

FROM scratch
ADD tarball_image.tar /

This is the same mechanism the various base images use; see for example the debian image Dockerfile.
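For example, the debian image's Dockerfile is essentially just (the exact rootfs file name varies by release and architecture):

FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]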

(You often don't want ADD's extra behaviors; if you COPY file.tar.gz ./ into your image, you expect to get the tar file itself, not its unpacked contents. Conventional wisdom is to prefer COPY over ADD to avoid surprises. But here you do specifically want to unpack the tar file, and ADD is the right tool.)
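A quick illustration of the difference, with a hypothetical file name:

# COPY keeps the archive as-is: the image ends up with /opt/app.tar.gz
COPY app.tar.gz /opt/
# ADD auto-extracts a local compressed tar: its contents land in /opt/
ADD app.tar.gz /opt/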


In a comment you clarify that the tar file isn't actually in the build context but you need to fetch it. You can use a multi-stage build for this as another option. A first stage retrieves and unpacks the rootfs, using tools available in some normal image. You can then COPY --from an entire directory tree into the real image.

FROM busybox AS rootfs
RUN wget -O /rootfs.tar.gz http://.../rootfs.tar.gz
WORKDIR /rootfs
RUN tar xzf /rootfs.tar.gz

FROM scratch
COPY --from=rootfs /rootfs/ /