Working with Multiple Containers Using Docker Compose

Joshua Hall

In this article we’re going to explore how to segment our app into a small network of Docker containers, each with its own image.

Single containers are easy enough to build imperatively on the command line, but anything more complicated can quickly get out of hand. Instead we’re going to use a special config file called docker-compose.yml. This declarative approach will let us quickly define the image for each container and set up the networking between them.

In this example we’re going to be setting up an NGINX server, an Express server, and our React app. The goal is to host our client and server separately, with NGINX routing each request to the correct container: any HTTP request to /api goes to the server container and everything else goes to the client.

Prerequisites

It would be helpful to know how to build images with Dockerfile, which you can brush up on here, but that will mostly be taken care of in the starter.
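If you need a refresher, the client and server images in the starter are built from Node-based Dockerfiles along these lines (a rough sketch assuming a node base image and a package.json with a start script; the files in the starter may differ slightly):

FROM node:alpine
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package.json .
RUN npm install

# Copy in the rest of the source and start the app
COPY . .
CMD ["npm", "start"]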

Starter Setup

To save you the monotony of getting the basic React app and server set up and working, I’ve made this starter. The app itself is just an input that sends some text to be logged by the server, nothing fancy. Since we’re segmenting everything into separate containers, the client and server each have their own package.json with their own dependencies, so remember to run npm install in each folder individually if you want to test locally.
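To give you a feel for what the starter’s server does, its API route looks roughly like this (a sketch; the actual file name and handler in the starter may differ):

server/index.js (sketch)

const express = require('express');
const app = express();

app.use(express.json());

// Log whatever text the client posts to /api
app.post('/api', (req, res) => {
  console.log(req.body.text);
  res.sendStatus(200);
});

app.listen(4000, () => console.log('Server listening on port 4000'));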

NGINX Setup

The NGINX container is a little different from the others: it acts as the router between the React app and the Express server, directing each request to the correct container.

In a special configuration file, default.conf, we’ll use upstream blocks to tell NGINX which port each container’s server is listening on. Note that we’re referencing the service names we define over in docker-compose.yml.

The server block defines our controller, in this case the NGINX server itself. NGINX just needs to know which port to listen on and where to reroute traffic depending on the request, which we handle with proxy_pass.

default.conf

upstream client {
  server client:3000;
}

upstream server {
  server server:4000;
}

server {
  listen 80;

  location / {
    proxy_pass http://client;
  }

  location /api {
    proxy_pass http://server;
  }
}

Now we just need Docker to put this configuration where NGINX expects it. The NGINX image already ships with its own default.conf file, so copying ours to that location will override it.

controller/Dockerfile

FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf

Docker Compose

docker-compose.yml

version: '3'
services:
  server:
    build:
      dockerfile: Dockerfile
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./controller
    ports:
      - '5000:80'
  client:
    build:
      dockerfile: Dockerfile
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app

Let’s go over exactly what this is trying to do:

  • services declares each of our containers with its particular configuration; we can name each service however we like.
  • build tells Docker how to build the image for a container, in this case which Dockerfile to use and where to find it with dockerfile and context.
  • restart tells Docker what to do if a container fails at runtime; in this case we always want it to attempt a restart.
  • ports maps a port on the host machine to a port inside the container, just like the -p flag when working in the terminal.
  • volumes attach persistent data to each container. The bare /app/node_modules volume keeps the dependencies installed in the image from being hidden by the bind mount, while ./server:/app and ./client:/app mount our source code into the containers so changes show up without rebuilding and reinstalling everything (the sketch after this list shows the equivalent docker run flags).
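To tie these keys back to the plain CLI, here’s roughly what the nginx and client services would look like as one-off docker build and docker run commands, ignoring the networking Compose sets up for us (the my-nginx and my-client image tags are made up for illustration):

$ docker build -t my-nginx ./controller
$ docker run -p 5000:80 my-nginx

$ docker build -t my-client ./client
$ docker run -v /app/node_modules -v "$(pwd)/client:/app" my-client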

Finally, we can create our services and wire the containers together using the docker-compose up command, adding the --build flag so our images are built from our Dockerfiles.

$ docker-compose up --build

This may take a while since it’s copying everything over and running npm install, but when it’s done you should see server_1, nginx_1, and client_1 running simultaneously.
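When you’re done experimenting, docker-compose down will stop the containers and remove them along with the default network Compose created:

$ docker-compose down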

Closing Thoughts

This may have been a very simple use case, but Docker Compose is definitely one of the major tools you’ll be using with almost all of your Docker projects.
