Need a consistent development and deployment experience as developers work across teams and use different machines for their daily tasks? That is where Docker has you covered with containers. A common experience might include running a local version of MongoDB Community in a container and an application in another container. This strategy works for some organizations, but what if you want to leverage all the benefits that come with MongoDB Atlas in addition to a container strategy for your application development?
In this tutorial, we’ll see how to create a web application that communicates with MongoDB, bundle it into a container with Docker, and manage the creation as well as the destruction of MongoDB Atlas resources with the Atlas CLI during container deployment.
It should be noted that this tutorial is intended for a development or staging setting on your local computer. It is not advised to use all of the techniques found in this tutorial in a production setting, so use your best judgment when it comes to the included code.
If you’d like to try the results of this tutorial, check out the repository and instructions on GitHub.
There are a lot of moving parts in this tutorial, so you’ll need a few things before you start:
The Atlas CLI can create an Atlas account for you along with any keys and IDs, but for the scope of this tutorial, you’ll need an account created along with quick access to the “Public API Key”, “Private API Key”, “Organization ID”, and “Project ID” within your account. You can see how to do this in the documentation.
Docker is going to be the true star of this tutorial. You don’t need anything beyond Docker because the Node.js application and the Atlas CLI will be managed by the Docker container, not your host computer.
On your host computer, create a project directory. The name isn’t important, but for this tutorial we’ll use mongodbexample as the project directory.
We’re going to start by creating a Node.js application that communicates with MongoDB using the Node.js driver for MongoDB. The application will be simple in terms of functionality. It will connect to MongoDB, create a database and collection, insert a document, and expose an API endpoint to show the document with an HTTP request.
Within the project directory, create a new app directory for the Node.js application to live. Within the app directory, using a command line, execute the following:
npm init -y
npm install express mongodb
If you don’t have Node.js installed, just create a package.json file within the app directory with the following contents:
{
    "name": "mongodbexample",
    "version": "1.0.0",
    "description": "",
    "main": "main.js",
    "scripts": {
        "test": "echo \"Error: no test specified\" && exit 1",
        "start": "node main.js"
    },
    "keywords": [],
    "author": "",
    "license": "ISC",
    "dependencies": {
        "express": "^4.18.2",
        "mongodb": "^4.12.1"
    }
}
Next, we’ll need to define our application logic. Within the app directory, create a main.js file and add the following JavaScript code:
const { MongoClient } = require("mongodb");
const Express = require("express");

const app = Express();

// The connection string is provided through an environment variable at runtime
const mongoClient = new MongoClient(process.env.MONGODB_ATLAS_URI);

let database, collection;

app.get("/data", async (request, response) => {
    try {
        const results = await collection.find({}).limit(5).toArray();
        response.send(results);
    } catch (error) {
        response.status(500).send({ "message": error.message });
    }
});

const server = app.listen(3000, async () => {
    try {
        await mongoClient.connect();
        database = mongoClient.db(process.env.MONGODB_DATABASE);
        collection = database.collection(process.env.MONGODB_COLLECTION);
        // Insert a seed document so the collection exists with data
        await collection.insertOne({ "firstname": "Nic", "lastname": "Raboy" });
        console.log("Listening at :3000");
    } catch (error) {
        console.error(error);
    }
});

process.on("SIGTERM", async () => {
    // Optionally drop the database when the container is stopped
    if (process.env.CLEANUP_ONDESTROY == "true") {
        await database.dropDatabase();
    }
    await mongoClient.close();
    server.close(() => {
        console.log("NODE APPLICATION TERMINATED!");
    });
});
There’s a lot happening in the few lines of code above. We’re going to break it down!
Before we break down the pieces, take note of the environment variables used throughout the JavaScript code. We’ll be passing these values through Docker in the end so we have a more dynamic experience with our local development.
The first important snippet of code to focus on is the start of our application service:
const server = app.listen(3000, async () => {
    try {
        await mongoClient.connect();
        database = mongoClient.db(process.env.MONGODB_DATABASE);
        collection = database.collection(process.env.MONGODB_COLLECTION);
        await collection.insertOne({ "firstname": "Nic", "lastname": "Raboy" });
        console.log("Listening at :3000");
    } catch (error) {
        console.error(error);
    }
});
Using the client that was configured near the top of the file, we can connect to MongoDB. Once connected, we can get a reference to a database and collection. Neither the database nor the collection needs to exist beforehand because both are created automatically when data is inserted. With the reference to a collection, we insert a document and begin listening for API requests over HTTP.
This brings us to our one and only endpoint:
app.get("/data", async (request, response) => {
    try {
        const results = await collection.find({}).limit(5).toArray();
        response.send(results);
    } catch (error) {
        response.status(500).send({ "message": error.message });
    }
});
When the /data endpoint is consumed, the first five documents in our collection are returned to the user. Otherwise, if there was some issue, an error message is returned.
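With everything running (we’ll get to the Docker deployment shortly), a request to http://localhost:3000/data would return a JSON array. As a hypothetical example, given the document we insert at startup, the response might look like the following, where the _id value will differ on every run:

[
    {
        "_id": "638f1f77bcf86cd799439011",
        "firstname": "Nic",
        "lastname": "Raboy"
    }
]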
This brings us to something optional, but potentially valuable when it comes to a Docker deployment for local development:
process.on("SIGTERM", async () => {
    if (process.env.CLEANUP_ONDESTROY == "true") {
        await database.dropDatabase();
    }
    await mongoClient.close();
    server.close(() => {
        console.log("NODE APPLICATION TERMINATED!");
    });
});
The above code says that when a termination event is sent to the application, drop the database we created and close both the connection to MongoDB and the Express Framework service. This is useful if we want to undo everything we created when the container stops. If you want your data to exist between container deployments, skip the cleanup. On the other hand, if you are using the container as part of a test pipeline and want to clean up when you’re done, the termination logic is valuable.
So, we have an environment-variable-heavy Node.js application. What’s next?
While we have the application, our MongoDB Atlas cluster may not be available to us. For example, maybe this is our first time being exposed to Atlas and nothing has been created yet. We need to be able to quickly and easily create a cluster, configure our IP access rules, specify users and permissions, and then connect with our Node.js application.
This is where the MongoDB Atlas CLI does the heavy lifting!
There are many different ways to create a script. Some like Bash, some like ZSH, and some like something else. We’re going to use ZX, which is a JavaScript wrapper for Bash commands.
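If ZX is new to you, the core idea is that shell commands live inside JavaScript template literals prefixed with $, and each command returns a promise resolving to the process output. A minimal sketch, using a placeholder command:

#!/usr/bin/env zx

// Run a shell command and capture its output
let result = await $`echo "Hello from zx"`;
console.log(result.stdout.trim());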
Within your project directory, not your app directory, create a docker_run_script.mjs file with the following code:
#!/usr/bin/env zx

$.verbose = true;

const runtimeTimestamp = Date.now();

// Default any environment variables that weren't provided
process.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || "examples";
process.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || "demo";
process.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || "password1234";
process.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || "business_" + runtimeTimestamp;
process.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || "people_" + runtimeTimestamp;
process.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || "false";

let app;

// Forward the termination signal to the Node.js application
process.on("SIGTERM", () => {
    app.kill("SIGTERM");
});

try {
    // Create the cluster, wait for it to be ready, and load the sample data
    let createClusterResult = await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1 --output json`;
    await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`;
    let loadSampleDataResult = await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME} --output json`;
} catch (error) {
    console.log(error.stdout);
}

try {
    // Allow the current IP address and create a database user
    let createAccessListResult = await $`atlas accessLists create --currentIp --output json`;
    let createDatabaseUserResult = await $`atlas dbusers create --role readWriteAnyDatabase,dbAdminAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;
    await $`sleep 10`;
} catch (error) {
    console.log(error.stdout);
}

try {
    // Build the final connection string and launch the application
    let connectionString = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;
    let parsedConnectionString = new URL(JSON.parse(connectionString.stdout).standardSrv);
    parsedConnectionString.username = encodeURIComponent(process.env.MONGODB_USERNAME);
    parsedConnectionString.password = encodeURIComponent(process.env.MONGODB_PASSWORD);
    parsedConnectionString.search = "retryWrites=true&w=majority";
    process.env.MONGODB_ATLAS_URI = parsedConnectionString.toString();
    app = $`node main.js`;
} catch (error) {
    console.log(error.stdout);
}
Once again, we’re going to break down what’s happening!
Like with the Node.js application, the ZX script will be using a lot of environment variables. In the end, these variables will be passed with Docker, but you can hard-code them at any time if you want to test things outside of Docker.
The first important thing to note is the defaulting of environment variables:
process.env.MONGODB_CLUSTER_NAME = process.env.MONGODB_CLUSTER_NAME || "examples";
process.env.MONGODB_USERNAME = process.env.MONGODB_USERNAME || "demo";
process.env.MONGODB_PASSWORD = process.env.MONGODB_PASSWORD || "password1234";
process.env.MONGODB_DATABASE = process.env.MONGODB_DATABASE || "business_" + runtimeTimestamp;
process.env.MONGODB_COLLECTION = process.env.MONGODB_COLLECTION || "people_" + runtimeTimestamp;
process.env.CLEANUP_ONDESTROY = process.env.CLEANUP_ONDESTROY || "false";
The above snippet isn’t a requirement, but if you want to avoid setting or passing around variables, defaulting them can be helpful. In the above example, the use of runtimeTimestamp allows us to create a unique database and collection should we want to. This could be useful if numerous developers plan to use the same Docker images to deploy containers, because then each developer would work in a sandboxed area. If a developer chooses to undo the deployment, only their unique database and collection would be dropped.
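As a quick illustration of the defaulting behavior, assuming no environment variables were provided and using a hypothetical timestamp:

// runtimeTimestamp might evaluate to 1671043200000, in which case:
// MONGODB_DATABASE   defaults to "business_1671043200000"
// MONGODB_COLLECTION defaults to "people_1671043200000"
// Another developer starting seconds later gets different names, and a separate sandbox.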
Next we have the following:
process.on("SIGTERM", () => {
    app.kill("SIGTERM");
});
We have something similar in the Node.js application as well. We have it in the script because the script controls the application, so when we (or Docker) stop the script, the same stop event is passed along to the application. If we didn’t do this, the application would not shut down gracefully and the drop logic would never be applied.
Now we have three try / catch blocks, each focusing on something particular.
The first block is responsible for creating a cluster with sample data:
try {
    let createClusterResult = await $`atlas clusters create ${process.env.MONGODB_CLUSTER_NAME} --tier M0 --provider AWS --region US_EAST_1 --output json`;
    await $`atlas clusters watch ${process.env.MONGODB_CLUSTER_NAME}`;
    let loadSampleDataResult = await $`atlas clusters loadSampleData ${process.env.MONGODB_CLUSTER_NAME} --output json`;
} catch (error) {
    console.log(error.stdout);
}
If the cluster already exists, an error will be caught. We use three separate try / catch blocks because, in our scenario, it is alright if certain resources already exist.
Next we worry about users and access:
try {
    let createAccessListResult = await $`atlas accessLists create --currentIp --output json`;
    let createDatabaseUserResult = await $`atlas dbusers create --role readWriteAnyDatabase,dbAdminAnyDatabase --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;
    await $`sleep 10`;
} catch (error) {
    console.log(error.stdout);
}
We want our local IP address added to the access list, and we want a user to be created. In this example, we are creating a user with extensive access, but you may want to refine the level of permission in your own project. For example, if the container deployment is meant to be a sandboxed experience, it makes sense for the created user to access only the database and collection in the sandbox. We sleep after these commands because they are not instant, and we want to make sure everything is ready before we try to connect.
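If you wanted to scope the user down, the Atlas CLI --role flag also accepts roles in roleName@databaseName form. As a variation on the script above (not part of the tutorial’s repository), the user-creation line could grant readWrite on only the generated database:

// Hypothetical tightened permissions: readWrite scoped to the sandbox database
let createDatabaseUserResult = await $`atlas dbusers create --role readWrite@${process.env.MONGODB_DATABASE} --username ${process.env.MONGODB_USERNAME} --password ${process.env.MONGODB_PASSWORD} --output json`;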
Finally we try to connect:
try {
    let connectionString = await $`atlas clusters connectionStrings describe ${process.env.MONGODB_CLUSTER_NAME} --output json`;
    let parsedConnectionString = new URL(JSON.parse(connectionString.stdout).standardSrv);
    parsedConnectionString.username = encodeURIComponent(process.env.MONGODB_USERNAME);
    parsedConnectionString.password = encodeURIComponent(process.env.MONGODB_PASSWORD);
    parsedConnectionString.search = "retryWrites=true&w=majority";
    process.env.MONGODB_ATLAS_URI = parsedConnectionString.toString();
    app = $`node main.js`;
} catch (error) {
    console.log(error.stdout);
}
After the first try / catch block finishes, we’ll have a connection string. Using a Node.js URL object, we finalize the connection string by including the username and password, and then we run the Node.js application. Remember, the environment variables, along with any manipulations we made to them in the script, are passed into the Node.js application.
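The resulting value of MONGODB_ATLAS_URI will look roughly like the following, where the cluster hostname is a placeholder that Atlas generates for you:

mongodb+srv://demo:password1234@examples.a1b2c.mongodb.net?retryWrites=true&w=majority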
At this point, we have an application and we have a script for preparing MongoDB Atlas and launching the application. It’s time to get everything into a Docker image to be deployed as a container.
At the root of your project directory, add a Dockerfile file with the following:
FROM node:18

WORKDIR /usr/src/app

# Copy the application and the deployment script into the image
COPY ./app/* ./
COPY ./docker_run_script.mjs ./

# Download and extract the MongoDB Atlas CLI
RUN curl https://fastdl.mongodb.org/mongocli/mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz --output mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz
RUN tar -xvf mongodb-atlas-cli_1.3.0_linux_x86_64.tar.gz && mv mongodb-atlas-cli_1.3.0_linux_x86_64 atlas_cli
RUN chmod +x atlas_cli/bin/atlas
RUN mv atlas_cli/bin/atlas /usr/bin/

# Install ZX globally and the application dependencies
RUN npm install -g zx
RUN npm install

EXPOSE 3000

CMD ["./docker_run_script.mjs"]
The custom Docker image is based on a Node.js image, which allows us to run our Node.js application as well as our ZX script.
After our files are copied into the image, we run a few commands to download and extract the MongoDB Atlas CLI.
Finally, we install ZX and our application dependencies, and then run the ZX script. The CMD command for running the script is executed when the container runs. Everything else is done when the image is built.
We could build our image from this Dockerfile file, but it is a lot easier to manage when there is a Compose configuration. Within the project directory, create a docker-compose.yml file with the following YAML:
version: "3.9"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      MONGODB_ATLAS_PUBLIC_API_KEY: YOUR_PUBLIC_KEY_HERE
      MONGODB_ATLAS_PRIVATE_API_KEY: YOUR_PRIVATE_KEY_HERE
      MONGODB_ATLAS_ORG_ID: YOUR_ORG_ID_HERE
      MONGODB_ATLAS_PROJECT_ID: YOUR_PROJECT_ID_HERE
      MONGODB_CLUSTER_NAME: examples
      MONGODB_USERNAME: demo
      MONGODB_PASSWORD: password1234
      # MONGODB_DATABASE: sample_mflix
      # MONGODB_COLLECTION: movies
      CLEANUP_ONDESTROY: "true"
You’ll want to swap the environment variable values with your own. In the above example, the database and collection variables are commented out so the defaults in the ZX script are used. Note that CLEANUP_ONDESTROY is quoted so Compose passes it as a string rather than a YAML boolean. Rather than hard-coding the API keys, you could also use Compose variable substitution to pull those values from your shell environment.
To see everything in action, execute the following from the command line on the host computer:
docker-compose up
The above command will use the docker-compose.yml file to build the Docker image if it doesn’t already exist. The build process will bundle our files, install our dependencies, and obtain the MongoDB Atlas CLI. When Compose deploys a container from the image, the environment variables will be passed to the ZX script responsible for configuring MongoDB Atlas. When ready, the ZX script will run the Node.js application, further passing the environment variables. If the CLEANUP_ONDESTROY variable was set to true, the database and collection will be removed when the container is stopped.
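Once the logs show that the application is listening, you can sanity-check the deployment from the host. For example, with Node.js 18 or newer installed locally, a quick throwaway script (hypothetical, not part of the project) could fetch the endpoint:

// check.mjs — run with: node check.mjs
const response = await fetch("http://localhost:3000/data");
console.log(await response.json());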
The MongoDB Atlas CLI can be a powerful tool for bringing MongoDB Atlas into your local development experience on Docker. Essentially, you swap out a local version of MongoDB for Atlas CLI logic that manages a more feature-rich cloud version of MongoDB.
MongoDB Atlas enhances the MongoDB experience by giving you access to more features such as Atlas Search, Charts, and App Services, which allow you to build great applications with minimal effort.