Docker is a great way to package and deploy web applications. Applications that have been containerized can easily be created, destroyed, or even moved between servers, as long as the servers are using the Docker runtime.
Not too long ago I wrote about creating a RESTful API that could process images and generate Android compliant launcher icons. That article, titled Create an Android Launcher Icon Generator RESTful API with Node.js, and Jimp, was powered by the Express framework. The application could be served on any properly configured server with Node.js installed, but the catch is that server configuration is never easy or quick.
We’re going to see how to package our web application into a container using Docker.
If you haven’t already done so, grab the code that was used in the previous example. You’ll want a functional project with an app.js file and a package.json file.
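Before containerizing anything, it is worth a quick sanity check that the project runs locally. A minimal check might look like the following, assuming the dependencies are declared in package.json and the API listens on port 3000 as in the previous article:
# install the dependencies listed in package.json
npm install
# start the API locally
node app.js
# from a second terminal, any HTTP response (even a 404) proves the server is listening
curl http://localhost:3000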
Before we can create an image that can be deployed as a container on some server, we’ll need to define the image. Within your project, execute the following commands:
touch Dockerfile
touch .dockerignore
If you don’t have the touch command, go ahead and create the files however you find appropriate. The Dockerfile defines any setup necessary to build the image, and the .dockerignore file lists the files that should be excluded from it.
Open the .dockerignore file and include the following:
node_modules
package-lock.json
*.jpg
It is very important that you do not copy your local node_modules directory into the image. Many Node.js packages include native components that are built for the local system architecture, and your Docker container likely won’t match your development machine’s architecture. We’re also ignoring the package-lock.json file to prevent any install issues.
With our ignore list created, let’s define the Dockerfile file. Open it and include the following:
FROM node:6-alpine
COPY . /srv/
WORKDIR /srv
RUN /usr/local/bin/npm install
CMD /usr/local/bin/node app.js
So what is happening in the above configuration?
We’re saying that the container will use the Node.js Alpine image, which is a very lightweight Node.js image. When the image is built, all files that aren’t listed in the .dockerignore file will be copied to the /srv/ directory of the image, and WORKDIR makes /srv the working directory for the instructions that follow.
Next we’re using the RUN and CMD commands. There is a big difference between the two: the RUN command is executed when the image is built, while the CMD command is executed when the container is run. If you can, do as much as possible when the image is built rather than when the container runs.
During the build, we’re installing all Node.js dependencies based on the package.json file. At runtime, we start serving the app.js file that was copied over.
With the Dockerfile file available, we can build the image. From the Command Prompt (Windows) or Terminal (Mac and Linux), execute the following:
docker build -t image-service-api .
The above command should be executed from within the project directory. It will create an image called image-service-api that can be deployed quickly and easily.
You can verify that the image is available by executing the following:
docker images
When the command finishes, you should see a list of your images, similar to the following:
REPOSITORY TAG IMAGE ID CREATED SIZE
image-service-api latest aabe2defa256 37 minutes ago 86.4MB
node 6-alpine 1560e1ef2245 12 days ago 54.7MB
alpine latest 37eec16f1872 3 weeks ago 3.97MB
Now let’s try to deploy our image as a Docker container. From the command line, execute the following command:
docker run -d -p 3000:3000 --name image-service image-service-api
The above command will run the container in detached mode. Remember, our application was designed to serve on port 3000, so we are mapping the container’s port 3000 to port 3000 on the host. We are naming the container image-service and using the image-service-api image that we just built.
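To confirm the container actually started, the following should work; the image-service name matches what we passed to --name above:
# list running containers; image-service should appear with port 3000 mapped
docker ps
# check the application output for startup errors
docker logs image-service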
cURL is a quick and easy way to test that our API and container are functioning correctly. You could always use something like Postman, but it is probably overkill for this example.
From the command line, execute the following:
curl -X POST -F upload=@./myimage.jpg http://localhost:3000/process >> android.zip
The above command will issue a POST request to our container and store the binary result in an android.zip archive. The @ symbol in the request tells cURL to send the contents of a local file.
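If you have the unzip utility installed, you can peek inside the archive to verify the icons were generated:
# list the contents of the generated archive without extracting it
unzip -l android.zip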
There are other ways to create and deploy Docker containers. For example, we could make use of Compose to accomplish the task.
Within the project directory, execute the following command:
touch docker-compose.yml
If you don’t have the touch command, create the docker-compose.yml file however you feel appropriate. This file should include the following content:
version: '2'
services:
  image-api:
    build: .
    ports:
      - 3000:3000
    restart: always
In the above YAML, we’re saying we want to create a service called image-api that maps port 3000 on the container to port 3000 on the host. When we bring the service up, Compose will build the image from the local Dockerfile.
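One thing to watch out for: if the image-service container from the earlier docker run command is still running, it is already holding port 3000 on the host, so stop and remove it before bringing the Compose service up:
# free port 3000 by stopping and removing the earlier container, if it is still running
docker stop image-service
docker rm image-service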
To deploy a container, execute the following:
docker-compose up -d
All services in the Compose file will be run in detached mode. To bring the containers down, execute the following:
docker-compose down
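If you change the application code later, docker-compose up won’t rebuild the image on its own. You can force a rebuild, and follow the service output, with something like the following:
# rebuild the image before starting the service
docker-compose up -d --build
# follow the logs of the image-api service
docker-compose logs -f image-api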
Container deployments can become very complex with environment variables and other things. While a Compose file doesn’t benefit us too much for this project, it could in more complicated projects.
You just saw how to create an easily deployable Docker container for our Node.js powered image processing API with minimal effort. If you haven’t seen the example, check out the previous article, Create an Android Launcher Icon Generator RESTful API with Node.js, and Jimp.
The great thing about Docker is that not only are application deployments easy and consistent, but applications can also be easily scaled to meet demand. To see how to create a cluster of containers, known as a swarm, check out an article I wrote titled Create a Cluster of Microservice Containers with Docker Swarm.