In this article, you’ll learn how to use AWS’s Elastic Beanstalk service to deploy your Docker-containerized application.
Prerequisites and Starter
All you need is a little knowledge about creating images with a Dockerfile and a simple Docker-based app. Note that this process only works for single-container apps; anything that needs networking between multiple containers is a more complicated process.
We’re going to be using a Continuous Integration (CI) tool, so if you’re unfamiliar with using a CI service in your workflow you can check out this article about CircleCI to get started.
Since we don’t need it to do too much, we’ll just use a fresh React app and add a Dockerfile.
The only difference from what we’ve done in the past is that now our app is going to be served by NGINX. When React finishes building, everything is copied over to our server and its port exposed, which AWS needs in order to make our site public later.
$ npx create-react-app new-docker-app
```Dockerfile
FROM node:alpine as builder
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
RUN npm run build

FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html
```
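If you want to sanity-check the production image before wiring up CI, you can build and run it locally. The image tag here is just an example, not something AWS or Travis requires:

```shell
# Build the production image from the Dockerfile above
docker build -t new-docker-app .

# Run it, mapping the container's exposed port 80 to localhost:8080
docker run -p 8080:80 new-docker-app
```

You should then see the React app at localhost:8080, served by NGINX.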
I’ve also included a Dockerfile.dev and a docker-compose.yml, in case you want to run it locally.
```Dockerfile
FROM node:alpine
WORKDIR '/app'
COPY ./package.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
```
```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules
      - .:/app
```
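With those two files in place, running the development version locally is a single command:

```shell
# Build the dev image and start the container defined in docker-compose.yml
docker-compose up --build
```

The app will be available at localhost:3000, and the volume mounts mean edits to your source files show up without rebuilding.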
For working with Docker applications, especially multi-container ones, it’s been easiest for me to work with Travis CI, since it offers some very useful hooks that CircleCI doesn’t at the time of this writing.
Of course, you’re first going to need to create a Travis CI account and connect it with your GitHub. Once you have a new repository in place and can see it in Travis’ dashboard you should be good to go.
The only thing we need Travis to do right now is use Docker to build our app inside a throwaway container and run our tests. Since npm test never stops by default, we’ll add the --coverage flag so it gives a coverage report and only runs once.
```yaml
language: generic
sudo: required
services:
  - docker

# Build testing container
before_install:
  - docker build -t dynamisdevelopment/test-container -f Dockerfile.dev .

# Run Tests
script:
  - docker run -e CI=true dynamisdevelopment/test-container npm test -- --coverage
```
Now you’ll need an account over at AWS. When that’s set up, you can search in the services for Elastic Beanstalk. We’ll do Create New Application, give it a name, and create a new environment, which will be a web server. The only thing that needs to be changed is the Platform, which you’ll want to be Docker, not Multi-container Docker.
If everything went well you should be redirected to the project’s dashboard and given a link to this placeholder site.
Connecting Travis to AWS
AWS may have our environment created, but Travis needs to know that it exists.
```yaml
deploy:
  provider: elasticbeanstalk
  region: "us-east-2"
  app: "docker-example-app"
  env: "DockerExampleApp-env"
  bucket_name: "elasticbeanstalk-us-east-2-936355730773"
  bucket_path: "docker-example-app"
  on:
    branch: master
```
`region` is where the server for your environment is located. You can see it in the generated URL, like DockerExampleApp-env.cptfdisnche.us-east-2.elasticbeanstalk.com.

`app` and `bucket_path` are the name of your app.

`env` is the app’s environment name.

`bucket_name` is where your app is stored on the S3 service. If you search for S3 you’ll immediately see your new bucket.

`branch: master` tells Travis we only want a deployment when the master branch changes.
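If you’re unsure about any of these values, the AWS CLI (assuming you have it installed and configured) can list them for you; the application and environment names come from Elastic Beanstalk, and the bucket name from S3:

```shell
# List Elastic Beanstalk applications and environments in your region
aws elasticbeanstalk describe-applications --region us-east-2
aws elasticbeanstalk describe-environments --region us-east-2

# List your S3 buckets to find the elasticbeanstalk-* bucket name
aws s3 ls
```

The region shown here matches the example config above; swap in your own.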
Giving Access to Travis CI
Now that Travis knows to send everything to AWS, we need to let AWS know to listen for it and give it access. To do this we’ll need to create a new user with credentials to give to Travis. Search for the IAM service, go to Users, and create a new one. The name doesn’t matter; it just needs programmatic access.
You’ll want to attach existing policies so we can just use a premade configuration: search for beanstalk and add the full-access Elastic Beanstalk policy that comes up.
If everything was successful, you should have a new user with the credentials Travis will need. AWS will only let you see these once, so if you close the page you’ll have to create a new user.
In the project settings in Travis you’ll have the option of adding environment variables. You’ll want to add your new user’s access key ID and secret access key as AWS_ACCESS_KEY and AWS_SECRET_KEY.
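If you prefer the command line, the Travis CLI can set these for you from your project directory (the values below are placeholders for your real keys):

```shell
# Requires the Travis CLI: gem install travis
travis env set AWS_ACCESS_KEY <your-access-key-id>
travis env set AWS_SECRET_KEY <your-secret-access-key>
```

Either way, Travis hides these values in build logs, so they won’t leak into your public build output.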
And finally, we just need to pass those credentials into Travis’s deployment config and we’re done.
```yaml
  on:
    branch: master
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
```
While it’s best to be able to work with multiple hosting services, keep in mind that if you intend to get involved with Kubernetes, it would be helpful to also learn Google Cloud, since Google developed Kubernetes and makes the process much more manageable.