This is a pretty hot topic, but I've never actually found a solution.
As you probably know, when we have a volume in a container and we install the dependencies (with npm i or similar) from a Dockerfile (with default permissions), npm will create a node_modules folder in the container owned by root:root.
I'm facing two issues with this method (on a local/dev environment):
1. The node_modules folder only exists inside the container, but the host's IDE/LSPs need this folder to work properly (module imports, type definitions, etc.).
2. If the host wants to install/update a package (npm i ..., etc.), they will have to restart and rebuild the container for the node_modules folder to be updated.
So I came up with another idea: what if I install the dependencies using CMD in a Dockerfile (or the command property of a service in a docker-compose file) and use a volume so node_modules can be shared with the host? Unfortunately, this method introduces new issues. For instance, node_modules still ends up with root:root ownership, so if your host user is named otherwise and doesn't have the same uid & gid, you will need root privileges to update node_modules (sudo npm i ...).
Here is my current config:
docker-compose.yml:
version: '3.7'
services:
  app:
    container_name: 'app_DEV'
    build: .
    command: sh -c "yarn install && node ./server.js"
    volumes:
      - ./:/usr/src/app
    ports:
      - 3000:3000
    tty: true
Dockerfile:
FROM node:12.8.1-alpine
WORKDIR /usr/src/app
COPY . .
package.json:
{
  "dependencies": {
    "express": "^4.17.1"
  }
}
server.js:
const app = require('express')();
app.get('/', (req, res) => {
  res.send('Hello');
});
app.listen(3000, () => console.log('App is listening on port 3000'));
Then you can try to run docker-compose up and do an ls -la:
-rw-r--r-- 1 mint mint 215 août 23 16:39 docker-compose.yml
-rw-r--r-- 1 mint mint 56 août 23 16:29 Dockerfile
drwxr-xr-x 52 root root 4096 août 23 16:31 node_modules
-rw-r--r-- 1 mint mint 53 août 23 16:31 package.json
-rw-r--r-- 1 mint mint 160 août 23 16:29 server.js
As you can see, every file/folder is owned by mint:mint except node_modules (mint is my host user).
So to sum up my question: is there a better way to manage NodeJS dependencies with Docker containers?
3 Answers
Generally speaking, I would not recommend this approach since your host & container might not be able to share the same modules. For example, if someone else on your team uses Windows and you have some compiled modules (e.g. node-sass or bcrypt), sharing those means either the container or the host won't be able to use them.
Another solution that comes up frequently is to separate the node_modules installation step in your Dockerfile, and to override the volume mount for this. You will still need to rebuild the Docker image every time you want to add a package, but this (probably) shouldn't happen that often down the road.
Here are the relevant parts of the Dockerfile:
FROM node:12.8.1-alpine
WORKDIR /usr/src/app
COPY ./package*.json .
COPY ./yarn.lock .
RUN yarn
COPY . .
CMD [ "yarn", "start" ]
Then, in your docker-compose file:
version: '3.7'
services:
  app:
    container_name: 'app_DEV'
    build: .
    command: sh -c "yarn install && node ./server.js"
    volumes:
      - ./:/usr/src/app
      - /usr/src/app/node_modules/
    ports:
      - 3000:3000
    tty: true
Make sure you include the /usr/src/app/node_modules/ volume AFTER the root mount, as it will override it within the container. Also, the trailing slash is important.
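With that in place, the flow when you add a new dependency could look roughly like this (a sketch, assuming the service is named app as above; lodash is just a placeholder package):
yarn add lodash            # updates package.json and yarn.lock on the host
docker-compose build app   # rebuild the image so the baked-in node_modules is refreshed
docker-compose up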
A few years have passed since I originally wrote this question. I wanted to come back and share a different opinion, since my POV has changed a bit since then, and I now think the way I wanted to use containers was incorrect.
First of all, pretty much any file/folder created in a container shouldn't be altered outside this same container.
In the context of this post, any command altering the node_modules folder should be run from within the container. I understand it can be a bit cumbersome, but I think it's fine as long as you use docker-compose (e.g. docker-compose exec app npm i).
I think it better fits the way OCI containers are intended to be used.
On the OS compatibility side, since everything dev-environment related should be done from inside the container, there shouldn't be any issue. Note that I've seen organizations distribute dev images both with and without preinstalled dependencies. I think both ways are fine; it really just depends on whether you want a lightweight dev image or not.
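For example, day-to-day dependency management could look like this (a sketch; app is the compose service name from the question, and express is just an example package):
docker-compose exec app npm install express   # install/update a package from inside the container
docker-compose exec app npm outdated          # any other npm command works the same way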
In your case, in order to have it working the way you want, you should add the USER instruction in the Dockerfile and the user: property in the docker-compose.yml, e.g.:
Dockerfile:
FROM node:12.8.1-alpine
WORKDIR /usr/src/app
USER node
COPY . .
docker-compose.yml:
version: '3.7'
services:
  app:
    container_name: 'app_DEV'
    build: .
    command: sh -c "yarn install && node ./server.js"
    user: "1000:1000"
    volumes:
      - ./:/usr/src/app
    ports:
      - 3000:3000
    tty: true
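Note that 1000:1000 matches the node user shipped in the official Node images; if your host user has a different uid/gid, you can check them and adjust the user: value accordingly (a minimal sketch):
id -u   # host uid to put before the colon in user:
id -g   # host gid to put after the colon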
Anyway, we faced a similar situation and we opted for a different approach. Instead of sharing the node_modules folder between host and container (nasty behaviors can occur if you work with colleagues using a different OS), we decided to avoid mounting the node_modules folder in the docker-compose.yml.
In our case the Dockerfile looks like this:
FROM node:8.12.0-stretch
RUN mkdir /api
WORKDIR /api
COPY ./package.json ./package-lock.json ./
RUN npm ci --prod
COPY . .
CMD [ "nodemon", "server.js" ]
The docker-compose.yml looks like this:
version: '3.7'
services:
  app:
    build: .
    volumes:
      - "./:/api"
      - "/api/node_modules/"
This way we're able to create node_modules on the host (which can be used for testing, development, and so on) while safely keeping the content of the Docker container unchanged. The downside of this approach is that our devs also have to run npm ci on the host, and they need to rebuild the image every time package.json changes.
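In practice that workflow could look something like this (hypothetical commands, assuming the compose service is named app):
npm ci                     # install dependencies on the host for the IDE/LSP and local tests
docker-compose build app   # rebuild the image whenever package.json / package-lock.json change
docker-compose up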