Existing Node Project (Node + Redis + DB) to Docker Container

Vipul Vyas
5 min read · Sep 27, 2021


Let’s learn how to move an existing project into a Docker container.

To containerize your project, you need to give Docker instructions on how to build it. For that you create a Dockerfile, which lists those instructions step by step.

A Dockerfile is a simple text file that consists of the instructions to build a Docker image. No worries, I will cover how to create one below.

Let’s containerize a Node.js application with an example. Suppose your project has three services: a Node server, Redis for caching, and a database. We can containerize this application in two ways. The first is to install all services in one container and run them together. The second is to create a separate container for each service.

In the first method, you create a Dockerfile that installs all the services, starts them, and installs the Node dependencies. It is the same as installing and starting the services on your local machine, and installing the Node.js dependencies with npm install. A sketch of this approach follows below.
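As a rough sketch of that first method (a sketch only; I leave the database out, since MongoDB, for example, is not available in recent Alpine package repositories, which already hints at why this approach gets awkward):

FROM node:alpine
# Install Redis alongside Node inside the same image
RUN apk add --no-cache redis
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . .
# Start Redis in the background, then start the Node app
CMD redis-server --daemonize yes && node app.js

Running several services in one container quickly becomes hard to manage, which is why the second approach is usually cleaner.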

For more than one container we need Docker Compose. Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we create a YAML file that defines the services, and with a single command we can spin everything up or tear it all down.

Let’s create the Dockerfile:

touch Dockerfile

First, we need to specify a reference image to build from.

FROM node:alpine

Here, I am using the node image as the base image, with the alpine tag. You can pick whichever tag your application requires from the official node images on Docker Hub.

Note: image size matters in Docker. Using a lightweight base image produces a lightweight image for your application; installing only your own requirements on top of a lightweight base is good practice.

If the application has extra requirements, we can add them here; for example, I am updating the base image's package index. One more thing: if you use a lightweight base image, make sure you have some knowledge of that image or system. I used Alpine Linux here for the demo with only a rough idea of it, but for real projects you should know it properly.

RUN apk update

Here, the RUN instruction executes a command while the image is being built. (Note that node:alpine is Alpine-based, so its package manager is apk, not apt-get.) There is also a CMD instruction, which runs a command when the container starts; we will see that as well.

Next, we will specify the working directory inside the container. Pick one working directory for your application; your application code will live there.

WORKDIR /usr/src/app

Now, copy package.json and the application code into the working directory (assuming you run the build from the application folder).

COPY package.json /usr/src/app
COPY . /usr/src/app

There is also an ADD instruction for copying files and directories. The difference is how compressed files are handled: COPY just copies the file or directory as given, while ADD extracts a compressed local source if it is in a recognized compression format, which is detected solely from the contents of the file (not the file name). The recognized formats include identity, gzip, bzip2, and xz.
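For illustration (app.tar.gz is a hypothetical archive sitting in the build context):

COPY app.tar.gz /usr/src/app/   # copied as-is: /usr/src/app/app.tar.gz
ADD app.tar.gz /usr/src/app/    # contents extracted into /usr/src/app/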

Don’t copy node_modules from your local machine into the container; it increases the size of the Docker image. Instead, install all packages inside Docker itself.

RUN npm install
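As an aside, a common refinement is to run npm install right after copying package.json and only then copy the rest of the source, so Docker can reuse the cached dependency layer when only your code changes. A minimal sketch:

COPY package.json /usr/src/app
RUN npm install      # cached as long as package.json is unchanged
COPY . /usr/src/app  # source edits no longer invalidate the install layer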

Now, suppose the application needs some environment variables. We can set them with the ENV instruction, or pass them at run time with docker run's -e argument:

--env, -e     Set environment variables
--env-file    Read in a file of environment variables

ENV DB_HOST=127.0.0.1
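The ENV value above is a default baked into the image; you can override it at run time. For example (myapp is an illustrative image tag, set when building):

docker run -e DB_HOST=mongo -e APP_ENV=dev myapp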

Now, we need to declare the port our application listens on so that it can be reached. Suppose our application will run on port 3000.

EXPOSE 3000
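Note that EXPOSE only documents the port; to actually reach the app from your machine you still publish it with -p when running the container (myapp again being the illustrative tag):

docker run -p 3000:3000 myapp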

Last, start the Node application with the CMD instruction.

CMD ["node", "app.js"]

RUN and CMD both execute commands, but at different times: RUN executes commands while building the Docker image, whereas CMD defines the default command to run when the container starts.

Now, remember the statement that copies our app code into the working directory? There is a slight problem with it: it would copy unwanted things like node_modules into the container. To solve this we need a .dockerignore file. Create the .dockerignore file in the same directory as your Dockerfile.

node_modules
some_file_that_is_not_required

The complete Dockerfile:

FROM node:alpine
RUN apk update
WORKDIR /usr/src/app
COPY package.json /usr/src/app
COPY . /usr/src/app
RUN npm install
ENV DB_HOST=127.0.0.1
EXPOSE 3000
CMD ["node", "app.js"]

Now our main application image is defined, but Redis and the database are not done yet. We will run each of those in its own container, and for that we need Docker Compose.

Let’s create docker-compose.yml in the same directory where we created the Dockerfile.

touch docker-compose.yml

First, specify the Compose file format version to use. We will go with version 3.

version: '3'

We have three services: the Node application, Redis, and the DB (Mongo).

version: '3'
# Define the services/containers to be run
services:
  myapp: # name of your service
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000"
    links:
      - mongo
      - redis
    volumes:
      - .:/usr/src/app
    depends_on:
      - mongo
      - redis
  redis:
    image: redis
    ports:
      - "6379:6379"
  mongo: # name of the service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017"

Here, services: contains our three services.

myapp: is the service name of our application:

myapp:
  build: ./
  ports:
    - "3000:3000"
  links:
    - mongo
    - redis
  volumes:
    - .:/usr/src/app
  depends_on:
    - mongo
    - redis
  env_file: ./env # environment file path
  environment:
    # - DB_HOST=localhost # a $VARIABLE reference also works here
    - DB_HOST=mongo
    - REDIS_HOST=redis
    - APP_ENV=dev

myapp contains build, which specifies the directory of the Dockerfile; whenever this compose file runs, that Dockerfile is used to build the application image. ports maps "local machine port : container port". There can be multiple mappings; the "-" marks a YAML list entry, so we can publish several ports like this:

ports:
  - "3000:3000"
  - "3001:3001"
  - "5000:3002"

links specifies which services you want to link; here we linked mongo and redis to the container running our service. volumes is used to persist our data; here we mapped the local current directory onto the container's app directory, which you can think of as mounting your application code in both places at once. depends_on, as the name suggests, declares that this service depends on the other listed services, here mongo and redis. We can also add environment variables with env_file and environment, and reference shell variables as $VARIABLE_NAME. Note that for DB_HOST I used mongo, which is the service name; Compose makes each service reachable by its name as a hostname, and the same goes for redis.
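To make that concrete, inside the myapp container the other services are reachable by those hostnames. A connection sketch in the app's own code might look like this (assuming the mongodb and redis npm packages, and the environment variables set above):

// db.js - connect using the Compose service names as hostnames
const { MongoClient } = require('mongodb');
const { createClient } = require('redis');

// DB_HOST and REDIS_HOST come from the environment section
// of docker-compose.yml ("mongo" and "redis")
const mongo = new MongoClient(`mongodb://${process.env.DB_HOST}:27017`);
const redis = createClient({ url: `redis://${process.env.REDIS_HOST}:6379` });

async function connect() {
  await mongo.connect(); // "mongo" resolves via Compose networking
  await redis.connect(); // "redis" resolves the same way
  console.log('connected to mongo and redis');
}

connect().catch(console.error);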

redis:
  image: redis
  ports:
    - "6379:6379"

Here the redis service uses the redis image; when no tag is specified, the latest tag is used. ports works the same way as described above. If you want the database data to persist, add a volume to it, as in the sketch below.
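A minimal sketch of a persistent mongo service using a named volume (mongo-data is an illustrative name; /data/db is where MongoDB stores its data):

mongo:
  image: mongo
  ports:
    - "27017:27017"
  volumes:
    - mongo-data:/data/db

volumes:
  mongo-data:

Putting it all together, here is the full docker-compose.yml: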

version: '3'
# Define the services/containers to be run
services:
  myapp:
    build: ./ # specify the directory of the Dockerfile
    ports:
      - "3000:3000"
    links:
      - mongo
      - redis
    volumes:
      - .:/usr/src/app
    depends_on:
      - mongo
      - redis
    env_file: ./env # environment file path
    environment:
      # - DB_HOST=localhost # a $VARIABLE reference also works here
      - DB_HOST=mongo
      - REDIS_HOST=redis
      - APP_ENV=dev
  redis:
    image: redis
    ports:
      - "6379:6379"
  mongo:
    image: mongo # specify image to build container from
    ports:
      - "27017:27017"

Now we are ready to run our containers.

docker-compose up
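A few standard variants worth knowing:

docker-compose up -d       # start in the background (detached)
docker-compose up --build  # rebuild the myapp image before starting
docker-compose down        # stop and remove the containers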

There is also a networks field, which helps services connect to each other and to the outside world; see the Docker networking docs for more.
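A minimal sketch of how a custom network looks in the compose file (backend is an illustrative name; services on the same network reach each other by service name):

services:
  myapp:
    networks:
      - backend
  redis:
    networks:
      - backend

networks:
  backend: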

Docker volumes: volumes are the preferred mechanism for persisting data generated by and used by Docker containers. If for any reason your container is removed, any data stored inside it, including your DB contents, is lost with it; with a volume we map the container's data onto the host machine (or a named volume managed by Docker) so it survives.
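If you use the named volume from the earlier mongo sketch, you can inspect it with the standard volume commands (Compose may prefix the name with your project directory):

docker volume ls                  # list volumes Docker manages
docker volume inspect mongo-data  # see where the data lives on the host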

H@ppy Learning :)

