In the developer community, ensuring your projects run consistently regardless of the environment can be a pain. Whether it’s trying to recreate a demo from an online tutorial or working on a code review, hearing the words, “Well, it works on my machine…” can be frustrating. Instead of spending hours debugging, we want to introduce you to a platform that will change your developer experience: Docker.
Docker is a great tool to learn because it lets developers move their applications easily between environments, and it’s resource-efficient in comparison to virtual machines. This tutorial will gently guide you through how to navigate Docker, along with how to integrate Go on the platform. We’ll use this project to connect to the MongoDB Atlas Search cluster we previously built for using synonyms in Atlas Search. Stay tuned for a fun read on how to learn all the above while also expanding your Gen-Z slang knowledge from our synonyms cluster. Get hyped!
There are a few requirements that must be met to be successful with this tutorial.
To use MongoDB with the Golang driver, you only need a free M0 cluster. To create this cluster, follow the instructions listed on the MongoDB documentation. However, we’ll be making many references to a previous tutorial where we used Atlas Search with custom synonyms.
Since this is a Docker tutorial, you’ll need Docker Desktop. You don’t actually need to have Golang configured on your host machine because Docker can take care of this for us as we progress through the tutorial.
Like previously mentioned, you don’t need Go installed and configured on your host computer to be successful. However, it wouldn’t hurt to have it in case you wanted to test things prior to creating a Docker image.
On your computer, create a new project directory, and within that project directory, create a src directory with the following files:

go.mod
main.go
The go.mod file is our dependency management file for Go modules. It could easily be created manually or by using the following command with the module path:

go mod init github.com/mongodb-developer/docker-golang-example
The main.go file is where we’ll keep all of our project code.
Starting with the go.mod file, add the following lines:
module github.com/mongodb-developer/docker-golang-example
go 1.15
require go.mongodb.org/mongo-driver v1.7.0
require github.com/gorilla/mux v1.8.0
Essentially, we’re defining what version of Go to use and the modules that we want to use. For this project, we’ll be using the MongoDB Go driver as well as the Gorilla Web Toolkit.
This brings us into the building of our simple API.
Within the main.go file, add the following code:
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "time"

    "github.com/gorilla/mux"
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

var client *mongo.Client
var collection *mongo.Collection

type Tweet struct {
    ID       int64  `json:"_id,omitempty" bson:"_id,omitempty"`
    FullText string `json:"full_text,omitempty" bson:"full_text,omitempty"`
    User     struct {
        ScreenName string `json:"screen_name" bson:"screen_name"`
    } `json:"user,omitempty" bson:"user,omitempty"`
}

func GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}

func SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}

func main() {
    fmt.Println("Starting the application...")
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("MONGODB_URI")))
    if err != nil {
        panic(err)
    }
    defer func() {
        if err = client.Disconnect(ctx); err != nil {
            panic(err)
        }
    }()
    collection = client.Database("synonyms").Collection("tweets")
    router := mux.NewRouter()
    router.HandleFunc("/tweets", GetTweetsEndpoint).Methods("GET")
    router.HandleFunc("/search", SearchTweetsEndpoint).Methods("GET")
    http.ListenAndServe(":12345", router)
}
There’s more to the code, but before we see the rest, let’s start breaking down what we have above to make sense of it.
You’ll probably notice our Tweet data structure:
type Tweet struct {
    ID       int64  `json:"_id,omitempty" bson:"_id,omitempty"`
    FullText string `json:"full_text,omitempty" bson:"full_text,omitempty"`
    User     struct {
        ScreenName string `json:"screen_name" bson:"screen_name"`
    } `json:"user,omitempty" bson:"user,omitempty"`
}
Earlier in the tutorial, we mentioned that this example is heavily influenced by a previous tutorial that used Twitter data. We highly recommend you take a look at it. This data structure has some of the fields that represent a tweet that we scraped from Twitter. We didn’t map all the fields because it just wasn’t necessary for this example.
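As a quick illustration of how those struct tags shape the output (this snippet isn’t part of the project code; it’s something you could run in a scratch file within the same package, and the tweet text and screen name are made-up examples), marshaling a populated Tweet produces JSON keyed by the tag names:

// Illustrative only: the json tags control the key names in the API responses.
t := Tweet{ID: 1, FullText: "that concert was fire"}
t.User.ScreenName = "example_user"
out, _ := json.Marshal(t)
fmt.Println(string(out))
// prints: {"_id":1,"full_text":"that concert was fire","user":{"screen_name":"example_user"}}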
Next, you’ll notice the following:
func GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}
func SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {}
These will be the functions that hold our API endpoint logic. We’re going to skip these for now and focus on understanding the connection and configuration logic.
As of now, most of what we’re interested in is happening in the main function.
The first thing we’re doing is connecting to MongoDB:
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
client, err := mongo.Connect(ctx, options.Client().ApplyURI(os.Getenv("MONGODB_URI")))
if err != nil {
    panic(err)
}
defer func() {
    if err = client.Disconnect(ctx); err != nil {
        panic(err)
    }
}()
collection = client.Database("synonyms").Collection("tweets")
You’ll probably notice the MONGODB_URI environment variable in the above code. It’s not a good idea to hard-code the MongoDB connection string in the application: it makes the application less flexible, and it could be a security risk. Instead, we’re using an environment variable that we’ll pass in with Docker when we deploy our containers.
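If you’d like the application to fail fast when that variable isn’t set, a small guard like the one below could be placed where we call mongo.Connect. This check isn’t part of the original code; it’s just a defensive sketch:

// Hypothetical safeguard: stop early with a clear message if MONGODB_URI is missing.
uri := os.Getenv("MONGODB_URI")
if uri == "" {
    panic("the MONGODB_URI environment variable is not set")
}
client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))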
You can visit the MongoDB Atlas dashboard for your URI string.
The database we plan to use is synonyms and the collection we plan to use is tweets, both of which we talked about in that previous tutorial.
After connecting to MongoDB, we focus on configuring the Gorilla Web Toolkit:
router := mux.NewRouter()
router.HandleFunc("/tweets", GetTweetsEndpoint).Methods("GET")
router.HandleFunc("/search", SearchTweetsEndpoint).Methods("GET")
http.ListenAndServe(":12345", router)
In this code, we are defining which endpoint path should route to which function. The functions are defined, but we haven’t yet added any logic to them. The application itself will be serving on port 12345.
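One small note: http.ListenAndServe returns an error if the server can’t start, for example, if the port is already in use. We kept the code above minimal, but a slightly more defensive variant could look like the following sketch (an optional tweak, not part of the project code):

// Optional: surface startup errors instead of exiting silently.
if err := http.ListenAndServe(":12345", router); err != nil {
    fmt.Println("server failed:", err)
}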
As of now, the application has the necessary basic connection and configuration information. Let’s circle back to each of those endpoint functions.
We’ll start with the GetTweetsEndpoint because it will work fine with an M0 cluster:
func GetTweetsEndpoint(response http.ResponseWriter, request *http.Request) {
    response.Header().Set("content-type", "application/json")
    var tweets []Tweet
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    cursor, err := collection.Find(ctx, bson.M{})
    if err != nil {
        response.WriteHeader(http.StatusInternalServerError)
        response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
        return
    }
    if err = cursor.All(ctx, &tweets); err != nil {
        response.WriteHeader(http.StatusInternalServerError)
        response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
        return
    }
    json.NewEncoder(response).Encode(tweets)
}
In the above code, we’re saying that we want to use the Find operation on our collection for all documents in that collection, hence the empty filter object.
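If we ever wanted to narrow the results, that empty filter could be swapped for a real one. For example, the following hypothetical variation would only return tweets from a particular screen name (the screen name here is made up for illustration):

// Hypothetical filter: match only tweets from a specific user instead of every document.
cursor, err := collection.Find(ctx, bson.M{"user.screen_name": "example_user"})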
If there were no errors, we can get all the results from our cursor, load them into a Tweet slice, and then JSON encode that slice for sending to the client. The client will receive JSON data as a result.
Now we can look at the more interesting endpoint function.
func SearchTweetsEndpoint(response http.ResponseWriter, request *http.Request) {
    response.Header().Set("content-type", "application/json")
    queryParams := request.URL.Query()
    var tweets []Tweet
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()
    searchStage := bson.D{
        {"$search", bson.D{
            {"index", "synsearch"},
            {"text", bson.D{
                {"query", queryParams.Get("q")},
                {"path", "full_text"},
                {"synonyms", "slang"},
            }},
        }},
    }
    cursor, err := collection.Aggregate(ctx, mongo.Pipeline{searchStage})
    if err != nil {
        response.WriteHeader(http.StatusInternalServerError)
        response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
        return
    }
    if err = cursor.All(ctx, &tweets); err != nil {
        response.WriteHeader(http.StatusInternalServerError)
        response.Write([]byte(`{ "message": "` + err.Error() + `" }`))
        return
    }
    json.NewEncoder(response).Encode(tweets)
}
The idea behind the above function is that we want to use an aggregation pipeline for Atlas Search. It does use the synonym information that we outlined in the previous tutorial.
The first important thing in the above code to note is the following:
queryParams := request.URL.Query()
We’re obtaining the query parameters passed with the HTTP request. We’re expecting a q parameter to exist with the search query to be used.
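Since the handler depends on that parameter, one optional improvement (not shown in the function above) would be to return a 400 response when it’s missing instead of running an empty search:

// Hypothetical guard: reject requests that don't include the q query parameter.
if queryParams.Get("q") == "" {
    response.WriteHeader(http.StatusBadRequest)
    response.Write([]byte(`{ "message": "the q query parameter is required" }`))
    return
}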
To keep things simple, we make use of a single stage for the MongoDB aggregation pipeline:
searchStage := bson.D{
    {"$search", bson.D{
        {"index", "synsearch"},
        {"text", bson.D{
            {"query", queryParams.Get("q")},
            {"path", "full_text"},
            {"synonyms", "slang"},
        }},
    }},
}
In this stage, we are doing a text search with a specific index and a specific set of synonyms. The query that we use for our text search comes from the query parameter of our HTTP request.
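The pipeline could also be extended with additional stages if needed. For instance, a hypothetical $limit stage (not part of this project) could cap how many results come back:

// Hypothetical addition: limit the response to 25 matching tweets.
limitStage := bson.D{{"$limit", 25}}
cursor, err := collection.Aggregate(ctx, mongo.Pipeline{searchStage, limitStage})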
Assuming that everything went well, we can load all the results from the cursor into a Tweet slice, JSON encode it, and return it to the client that requested it.
If you have Go installed and configured on your computer, go ahead and try to run this application. Just don’t forget to add MONGODB_URI to your environment variables first.
If you want to learn more about API development with the Gorilla Web Toolkit and MongoDB, check out this tutorial on the subject.
Let’s get started with Docker! If it’s a platform you’ve never used before, it might seem a bit daunting at first, but let us guide you through it, step by step. We will be showing you how to download Docker and get started with setting up your first Dockerfile to connect to our Gen-Z Synonyms Atlas Cluster.
First things first. Let’s download Docker. This can be done through their website in just a couple of minutes.
Once you have that up and running, it’s time to create your very first Dockerfile.
At the root of your project folder, create a new file named Dockerfile with the following content:
#get a base image
FROM golang:1.16-buster
MAINTAINER anaiya raisinghani <anaiya.raisinghani@mongodb.com>
WORKDIR /go/src/app
COPY ./src .
RUN go get -d -v
RUN go build -v
CMD ["./docker-golang-example"]
This format is what many Dockerfiles are composed of, and a lot of it is heavily customizable and can be edited to fit your project’s needs.
The first step is to grab a base image that you’re going to use to build your new image. You can think of a Dockerfile as building an image in layers, like a cake. There are a multitude of different base images out there, or you can use FROM scratch to start from an entirely blank image. Since this project uses the Go programming language, we chose to start from the golang base image and add the tag 1.16 to represent the version of Go that we plan to use. Whenever you include a tag next to your base image, be sure to separate them with a colon, just like this: golang:1.16. To learn more about which tag will benefit your project the best, check out Docker’s documentation on the subject.
That page lists many different tags that can be used with the golang base image. Tags are important because they hold valuable information about the base image you’re using, such as software versions and operating system flavor.
Let’s run through the rest of what will happen in this Dockerfile!
Including a MAINTAINER for your image is optional, but it’s good practice so that people viewing your Dockerfile know who created it. If you do include it, it’s helpful to list your full name and your email address in the file.
The WORKDIR /go/src/app instruction is crucial to include in your Dockerfile since WORKDIR sets the working directory. All the commands that follow will run in whichever directory you choose, so be sure to be aware of which directory you’re currently in.
The COPY ./src . instruction copies whichever files you want from the specified location on the host machine into the Docker image; here, it copies the contents of our src directory into the working directory.
Now, we can use the RUN instruction to set up exactly what we want to happen at image build time, before the container is deployed. The first command we have is RUN go get -d -v, which will download all of the Go dependencies listed in the go.mod file that was copied into the image.
Our second RUN command is RUN go build -v, which will build our project into an executable binary file.
The last step of this Dockerfile is the CMD instruction, CMD ["./docker-golang-example"]. This defines what is run when the container is deployed rather than when the image is built. Essentially, we’re saying that we want the built Go application to be run when the container is deployed.
Once you have this Dockerfile set up, you can build the image and run your project, passing in your full MongoDB URI. To build the Docker image and deploy the container, execute the following from the command line:
docker build -t docker-syn-image .
docker run -d -p 12345:12345 -e "MONGODB_URI=YOUR_URI_HERE" docker-syn-image
Following these instructions will allow you to run the project and access it from http://localhost:12345. But! It’s so tedious. What if we told you there was an easier way to run your application without having to write in the entire URI link? There is! All it takes is one extra step: setting up a Docker Compose file.
A Docker Compose file is a nice little addition that lets you run all your container files and dependencies through a single command: docker compose up.
In order to set up this file, you need to establish a YAML configuration file first. Do this by creating a new file in the root of your project folder, naming it docker-compose, and adding .yml at the end. You can name it something else if you like, but this is the easiest since when running the docker compose up command, you won’t need to specify a file name. Once that is in your project folder, follow the steps below.
This is what your Docker Compose file will look like once you have it all set up:
version: "3.9"
services:
  web:
    build: .
    ports:
      - "12345:12345"
    environment:
      MONGODB_URI: your_URI_here
Let’s run through it!
First things first. Determine which schema version you want to be running. You should be using the most recent version, and you can find this out through Docker’s documentation.
Next, define which services, otherwise known as containers, you want to run in your project. We have included web since we are attaching to our Atlas Search cluster. The name isn’t important; it acts more as an identifier for that particular service. Next, specify that you are building your application and put your ports information in the correct spot. For the last step, set your MongoDB URI under environment, and we’re done!
Now, run the command docker compose up and watch the magic happen. Your container should build, then run, and you’ll be able to connect to your port and see all the tweets!
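If you’d like to check the running container from code rather than a browser, a tiny Go program like the one below could hit the search endpoint. This is just an illustrative sketch: the query term is an arbitrary example, it assumes the container is mapped to port 12345 on localhost, and any HTTP client or browser works just as well.

package main

import (
    "fmt"
    "io"
    "net/http"
)

func main() {
    // Assumes the container is running and mapped to port 12345 on localhost.
    resp, err := http.Get("http://localhost:12345/search?q=fire")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(body))
}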
This tutorial has now left you equipped with the knowledge you need to build a Go API with the MongoDB Golang driver, create a Dockerfile, create a Docker Compose file, and connect your newly built container to a MongoDB Atlas Cluster.
Using these new platforms will allow you to take your projects to a whole new level.
If you’d like to take a look at the code used in our project, you can access it on GitHub.
Using Docker or Go, but have a question? Check out the MongoDB Community Forums!