Docker containers are pretty useful, but are still very limited without the use of images. Now we’re going to explore creating, working with, and publishing our own images to DockerHub.
You’re going to need a basic idea about what Docker is and how to get started with it, which you can read up on here.
In the previous article we went into how we use, create, and work with containers.
What Exactly is a Docker Image?
The official, and perhaps a bit confusing, definition of a Docker image is:
An ordered collection of root filesystem changes and the corresponding execution parameters for use within a container runtime.
Essentially, each image is an efficient system of layers that gives us what we need while removing unnecessary duplication of resources. We could take an NGINX base image and add Node on top of it; if we then wanted to open two different ports with that same configuration, we wouldn’t need to create a whole new NGINX/Node stack. The same holds if we then needed two separate containers on one of those ports. This way everything stays as reusable and efficient as possible.
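As a rough sketch of that reuse (the config file names here are hypothetical), two images built from the same pinned base will share the NGINX layers rather than storing them twice:

```Dockerfile
# Dockerfile for site A — pulls the nginx layers down once
FROM nginx:1.16.1
COPY site-a.conf /etc/nginx/conf.d/default.conf
```

```Dockerfile
# Dockerfile for site B — reuses the cached nginx layers from above
FROM nginx:1.16.1
COPY site-b.conf /etc/nginx/conf.d/default.conf
```

Docker caches each layer by its content, so the second build reuses the base layers already on disk instead of downloading or storing them again.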
You can see how a particular image is layered with the history command. In the output of that command, <missing> doesn’t mean that anything is wrong, just that the layer wasn’t given its own tag, since only the uppermost layer is tagged to represent the whole stack.
$ docker image history nginx
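The output looks something like the illustrative sketch below (abbreviated; your image IDs, dates, commands, and sizes will differ). Notice that only the top layer carries an actual image ID:

```
IMAGE          CREATED       CREATED BY                                      SIZE
f7bb5701a33c   2 weeks ago   /bin/sh -c #(nop)  CMD ["nginx" "-g" "daemon…   0B
<missing>      2 weeks ago   /bin/sh -c #(nop)  EXPOSE 80                    0B
<missing>      2 weeks ago   /bin/sh -c #(nop) ADD file:… in /               69.2MB
```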
From now on, DockerHub is going to be your best friend. It has essentially every open source image ready and available for you to pull into your own projects, just like you would with a Git repo.
If we knew we were going to need NGINX for multiple projects, we could add it directly to our cache. If you look at the official NGINX image docs you’ll notice that there are many versions and base image options. In any production image you make, you’ll probably want to avoid using just the plain name or :latest, since you want to ensure the version you tested with will be the one that’s used.
Similar to forks, we can use tags to select specific versions and variations of images. If we used NGINX as our base image, maybe we would also want access to a shell; NGINX with an alpine base image would let us do that.
$ docker pull nginx:1.16.1-alpine
$ docker run -it nginx:1.16.1-alpine sh
Sometimes Ctrl+C won't let you exit a terminal in a container; if that happens, try Ctrl+D.
We could go the long route and build our images manually from the terminal, but that generally isn’t best practice and makes it more cumbersome to change things later on. Instead we can use a Dockerfile to configure how we want our image constructed and to alter it whenever we need to. Let’s practice by creating a React app in a container using npx.
$ npx create-react-app docker-app
In the base of the project, create a new file called Dockerfile (you should get a little whale icon beside it in your editor). A Dockerfile is just a step-by-step list of instructions for how we want our image built; by convention, every instruction is written in uppercase.
FROM node:alpine
WORKDIR /user/app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]
That’s all we really need for a simple React app. Let’s go over exactly what it’s doing:
FROM: Sets our base image, in this case Node so we can install our node_modules, with alpine so we’ll have access to a shell terminal.
WORKDIR: Sets where our project should go inside of our container. If you ls in the base of your container you’ll get a list of directories like user. We’re putting our project in an app directory, which will be created automatically inside user.
COPY: Copies our package.json file over to our working directory, which we set with WORKDIR and is signified here by the period. The first item is always what you want to copy and the second is where you want to put it.
RUN: Installs our npm packages.
COPY: Now that our packages are installed, this copies over our actual project files to our working directory.
CMD: Sets the default command to run whenever a container is started from our image. The argument must be an array of each word in our command.
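With the Dockerfile in place, we can build and run the image. The docker-app tag below is just a name we’re choosing here, and 3000 is create-react-app’s default dev server port, which -p maps to the same port on our machine:

```
$ docker build -t docker-app .
$ docker run -p 3000:3000 docker-app
```

Once it’s running, the app should be reachable at localhost:3000 just as if we had started it outside the container.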
Once you have something that you’re happy with, you probably don’t want to risk losing it by only storing it on your machine. It’s incredibly easy to publish your new image to your DockerHub account. You’ll need to log in to your account first with docker login.
Only official images published and managed by the Docker team have plain names, like nginx. Everyone else follows the format of account_name/image_name, which for me would be dynamisdevelopment/nginx. We’ll use the tag command to label our image before pushing it to our account.
$ docker login
$ docker tag [image_ID] dynamisdevelopment/docker-app
$ docker push dynamisdevelopment/docker-app
If you want to publish a private image, you'll want to create the private repository on DockerHub first, then push your image with the same tag.
Maybe now that you have it saved on DockerHub you don’t need it taking up space on your machine anymore. Just use rm to remove it, adding -f if there are any containers dependent on it.
$ docker image rm [Image_ID/Tag] -f
|Command|Description|
|---|---|
|docker pull|Copies an image from DockerHub|
|docker push|Publishes a tagged image to DockerHub|
|docker login|Logs in to your DockerHub account|
|docker tag|Labels an image, takes the format of account_name/image_name|
|docker image rm -f|Deletes an image despite containers running on it|
|FROM|Establishes the base image|
|COPY|Duplicates files from your local machine into your container|
|WORKDIR|Sets where copied files should be placed|
|CMD|Allows the execution of commands, like ["npm", "run", "start"]|
|RUN|Runs a terminal command|
Since images and containers are the backbone of what Docker provides, it’s essential to get as comfortable as possible with working with them. Hopefully this post was helpful in accomplishing just that!