Docker

The Rise of Docker

The age of virtualization has done a lot to let developers build and create from their own computers as if they were working on different systems. Virtual machines allow them to create entire virtual operating systems within which to work and build. These systems are one of the major ways a homogenization process has been able to spread through the development world: a developer can run a Windows virtual machine from their MacBook in order to test the functionality of an app that was designed for Windows. It is far cheaper and easier to run a virtual machine than it is to buy a whole new computer.

Yet even virtual machines bring their own set of difficulties. Many factors determine how efficient a virtual machine will be on any given computer, and on top of that, each computer uses different hardware, such as graphics cards and processors. Two different MacBooks may each be able to run a Windows virtual machine, but the underlying hardware has a direct effect on how well the app inside the virtual machine will run. So, while virtual machines have started a homogenization process, they are still far from completing it.

That’s where Docker comes in. Docker is a platform-as-a-service product that uses operating-system-level virtualization to deliver software in packages called containers. Containers are isolated from one another, and each includes everything the end user needs to run the software on their computer regardless of its specs. They are more lightweight than virtual machines and remove a lot of the guesswork from the virtualization process. Since Docker’s creation in 2013, it has been used and extended by companies like Microsoft, Google, IBM, Huawei, Cisco and more. In fact, the use of Docker is spreading so quickly that an analysis of LinkedIn profiles in 2017 showed that mentions of the application went up almost 200% in 2016 alone.

Docker can be a confusing tool to understand if you’ve never worked with virtualization before. To clear up any confusion around Docker, let us take a look at what it is and why it exists. This will help you decide whether Docker is right for you.

What is Docker?

Docker first launched back in 2013 after several years of development. At its core, Docker is a tool designed to make it easier to run applications throughout the DevOps process. Docker is like a virtual machine in that it runs on a computer but acts like its own computer. This allows users to create, deploy and run applications through the use of containers, the main building block of Docker. Everything an application needs in order to run, such as libraries and various dependencies, is included within a container, and this allows Docker to run those containers as if they were their own systems. What’s more, this allows Docker containers to be shipped out to other users with everything they need to run the container included.

Docker is an open-source platform, too. This means that anyone who wants to can contribute to Docker and adapt it to fit their own needs. If they find that Docker doesn’t have a feature they require, they can open up the code and add the features they want without having to worry about licensing restrictions.

Docker enables users to separate their applications from the hardware and infrastructure they have in place in order to speed up delivery, and infrastructure can be managed in the same way that applications are. Together with the flexibility of its open-source nature, this makes Docker a powerful tool for speeding up the shipping, testing and deployment of code, reducing the time between writing it and testing it.

In action, what this looks like is as follows. The user uses Docker to download or open an image file. This file is then deployed as a container, and that container is a self-contained application. Instead of running a virtual machine in order to then run the application, the application itself runs in its own isolated environment, so whether it works depends only on the application. This sounds redundant, but it is an important point. If run in a virtual machine, an application not working may be tied to the virtual machine or the underlying hardware, so there are many reasons it may fail. In Docker, it doesn’t work only if there is an issue with the application itself.
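
As a minimal sketch of that workflow with the docker command line (hello-world is an official test image on Docker Hub):

    # Download an image and deploy it as a container
    docker pull hello-world
    docker run hello-world
    # If the container prints its greeting, the application itself works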

With the what of Docker out of the way, let’s turn now to the why: why Docker is gaining so much attention in DevOps and which problems it solves.

What Problems Does Docker Solve (And When is Docker Not Recommended)?

There are several key benefits to using Docker in your DevOps workflow. These benefits make using Docker a great fit for many of your needs. Docker, however, isn’t some magical program that will fit every need you may have. In looking at the benefits of Docker, it is important to also look at the times that Docker won’t cut it. This way, you know for sure whether Docker is right for you. But first, let’s look at those benefits.

One of the biggest benefits of using Docker is its isolation. Docker containers include all the settings and dependencies necessary to run them. This means that the dependencies of a container will not affect any of the configurations of the computer it is deployed on, nor will a container interfere with any other containers that may be running at the same time. When you run a separate container for each part of an application (such as the web server, front end and database used for hosting a website), you are able to ensure that none of the dependencies conflict with each other. Containers can be built on entirely different base systems and dependency stacks from one another yet run side by side perfectly smoothly through Docker’s virtualization. This makes it much easier to ensure everything is running properly and makes sharing and deploying applications via containers much simpler for everyone involved.

With this also comes a component of reproducibility. A Docker container is guaranteed to run the same on any system that is running Docker. The specification of a container is stored in what is called a Dockerfile. Sharing the Dockerfile with your fellow team members allows you to ensure that all the images they build use that same Dockerfile, so all the resulting containers run the same. This cuts out the guesswork of having to troubleshoot issues relating to differences in hardware.
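
As a sketch, a team might share a small Dockerfile like the following so that everyone builds an identical image (the application files named here are purely illustrative):

    FROM python:3.11-slim
    WORKDIR /app
    # Install the project's dependencies into the image
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    # Copy in the application itself and define how it starts
    COPY . .
    CMD ["python", "app.py"]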

The containerization of the various components of an application can offer a level of security. When an application is run in a traditional fashion, there is the risk that an issue with one component can cause the rest of the components to fail. When an application is run through Docker containers, a failure in one part can leave the other containers unaffected. This can make troubleshooting much easier. Other security issues may arise, however, due to Docker’s containerization. If security is important for your large applications, take a more detailed look at your specific needs before using Docker.

Another benefit of Docker is the Docker Hub. Docker Hub is a directory of Docker images that have been put online and shared for any Docker user to use. There are many images and applications to be found on Docker Hub. All you need to do to use them is download the pre-made images and put them into place. This can make your various Docker setups quick and easy.
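
A quick sketch of that flow from the command line (nginx is just one example of an image published on Docker Hub):

    docker search nginx              # browse images shared on Docker Hub
    docker pull nginx                # download a ready-made image
    docker run -d -p 8080:80 nginx   # deploy it as a container, serving on port 8080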

Since Docker containers don’t need to run an entire virtual operating system, they are much quicker and cheaper to run. A virtual machine needs to boot and run its own operating system, and this can really tax the hardware being used. Since everything Docker needs to run is included in the container, the virtualized environment takes hardly any additional resources at all. One of the great things about this is how much quicker Docker is compared to a virtual machine: what would take five minutes to boot on a virtual machine takes closer to five seconds through Docker.

Because of all these benefits, Docker is particularly great for use in the following ways.

When you are learning a new technology, you can use Docker to skip spending time on installation and configuration. Since everything needed to run the program is included in the Docker container, you can launch the container and get your hands on new applications quickly to see whether they interest you or are relevant to the issues you are looking to tackle.
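
For example, here is one way to try out a language without installing anything on the host (using the official python image as an illustration):

    # Drop straight into an interactive Python interpreter
    docker run -it --rm python:3.12
    # --rm removes the container again as soon as you exit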

Docker is also fantastic for simple uses such as setting up and running a Minecraft server. Many simple applications such as this already have supported images available on Docker Hub, and you can quickly grab them, deploy them and walk away. This can reduce the time necessary for setting up basic applications and get you up and running in a matter of seconds.
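
As a sketch, a widely used community image for this (itzg/minecraft-server on Docker Hub) can be launched with something like the following; check the image’s documentation for the exact options it expects:

    docker run -d --name mc -e EULA=TRUE -p 25565:25565 itzg/minecraft-server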

As mentioned above, isolation is a big feature of Docker and this is fantastic for running multiple applications on a server. You can reduce the number of issues you have with a single server by keeping the various applications compartmentalized through Docker containers. This allows you to prevent any possible problems with dependency management that you may have to deal with otherwise. Teams can lose hours or even days trying to troubleshoot dependency issues which could have been avoided entirely through the implementation of Docker.
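
A small sketch of that isolation: two containers can run different major versions of the same dependency on one host without conflicting (Redis is used purely as an example):

    docker run -d --name cache-old redis:6
    docker run -d --name cache-new redis:7
    # Each container carries its own Redis version and libraries;
    # neither touches the host's packages or the other container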

By far, the best use of Docker is for development teams. On any given developer team, there is sure to be a multitude of different setups in terms of hardware and underlying infrastructure. Since the use of a Dockerfile allows containers to be created with a uniform infrastructure in place, using Docker removes the variability between developer systems and allows for the exchange of applications and the testing of those applications to be streamlined. Cutting out the variability and decreasing the time between building and testing makes Docker absolutely amazing for DevOps.

At the same time that Docker makes DevOps smoother, there are, of course, areas in which it isn’t the most effective tool to use. When it comes to the following situations, you are better off looking for alternatives to Docker.

If your application is too complicated, then a pre-made Dockerfile or a previously created image likely won’t cut it. If you find that you are going to need to build, edit and handle communication between several containers spread across several servers, then the amount of time necessary to get set up is going to be rather high, and you will be better off looking for another solution beyond Docker alone.

You may also find Docker to be insufficient for your needs if performance is of critical importance for your application. Docker is far faster than a virtual machine, but it still adds overhead to the system it is running on. Running a process inside a container won’t be as quick as running that application on the system’s native operating system. If every second matters for your application, then Docker is only going to be a hindrance.

Docker is also still a new piece of technology and as such, it is still under development. As new features are added to Docker, you will need to upgrade it in order to access them. Backwards compatibility between releases of Docker isn’t guaranteed, though. This means that you may find yourself upgrading often and risking your entire setup every time that you do so. This level of uncertainty can be stressful to some, so consider it for yourself before adopting Docker.

Another downside is the way that Docker relies on the native OS of the system it is running on. Containers share the host’s kernel, so if your DevOps team is targeting multiple operating systems, then Docker won’t fit your needs. In this particular case, you are better off using virtual machines.

Because Docker was designed with applications that run on the command line in mind, it is not well suited to applications that require a graphical interface. There are ways to run a graphical interface inside a Docker container, such as X11 forwarding, but even then these function poorly. If your application is of a visual nature, you are better off using the native OS or a virtual machine.

Docker also has security issues which you should be aware of. Since the kernel of the OS is shared between the various containers in use, any vulnerability of that kernel is also present in your active containers. If you have a container that allows access to resources like memory then denial-of-service attacks could be used to starve the other containers active on the host system. Someone that breaks into a container could possibly break out of the container and carry over the privileges from that container to the host system. Docker images could also be poisoned and tampered with to make it easier for an attacker to gain access to your system if you aren’t careful. This is especially bad if you use containers as databases for any type of secure information like usernames and passwords. All of these security issues are present and important to be aware of before you start using Docker.

As Docker continues to grow, many of these issues are sure to be addressed and fixed. The rate at which Docker is being adopted throughout the tech world almost guarantees that Docker is only going to become more secure and more comprehensive in the years to come.

What Are Containers?

A container is a standardized unit of software: the application is packaged along with all of the code and dependencies required to run it in any given computing environment. Containers have existed since long before Docker, but Docker has popularized their use by simplifying the process of accessing and making use of them. Docker containers are lightweight and capable of standing alone. Everything needed to run the application is in the container, except for the Docker program itself, which can be downloaded freely and installed on any computer in a matter of minutes. Docker containers begin as images, which we’ll look at in a moment. It is only once the image is run that it becomes a container.
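
A quick sketch of that distinction with the docker command line:

    docker images                    # images stored locally (not yet running)
    docker run -d --name web nginx   # running an image creates a container
    docker ps                        # the new container now appears here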

A metaphor that has often been used for containers is that of a shipping container on a boat. Pretend you are shipping a bunch of office chairs. You could stack these chairs on the boat by themselves, but they risk being thrown around by waves and other environmental factors. This is what it is like when you run applications on your native system and juggle conflicting dependencies and the like. The far easier way to ship those chairs is to put them into a shipping container. Now the container is solid and locked in place, so those chairs don’t go anywhere when waves toss the ship around, and if one shipping container falls off the ship, the others are still secure in their place. This is what using containers for applications is like: the containers secure everything within themselves to make things easier for the computer (the ship).

As mentioned previously, containers and virtual machines may seem quite similar to each other in that they both isolate resources and have a similar allocation of said resources. Containers don’t virtualize the hardware the way that virtual machines do, though. A virtual machine is an abstraction of the physical hardware: you have your underlying infrastructure, then the virtual machine monitor, on top of which sits the guest operating system in which your app will run. A container, on the other hand, is an abstraction at the layer of the app: you have your infrastructure, followed by the host operating system on which Docker runs, and each container then runs through that. This allows the system to handle more applications, more quickly. It is important to note that Docker containers can be used together with a virtual machine if one so chooses, so while they make for an apt comparison, they do not need to be thought of as replacements for each other in the strictest sense.

What Are Docker Images?

A Docker image is a kind of file used to run code in a Docker container. Made up of several layers, an image is built from the instructions for a working version of an application. When Docker runs an image file, it turns that image into a container. In the shipping metaphor we discussed above, the Docker image would be the plan for how each shipping container is packed. Without the image, there can be no container, because there would be nothing to make up the inside of that container. Another way to think of Docker images is to consider them a snapshot of the application in question. They are an “image” of the running application, complete with everything it requires to run, so when that image is turned into a container, all of those elements are present.

Each Docker image is made up to include the system libraries, tools and dependencies necessary for the executable code of the application it represents. Because an image is made up of multiple layers, developers are able to reuse image layers across different projects where applicable. Reusing layers from existing images saves developers time, since they don’t need to make every layer of the image themselves. Images tend to start with a base image, though they can be made from scratch if necessary. The static layers of the image rest underneath a readable/writable top layer. Layers get added to the base image in order to fine-tune how the code will run in the container that is opened. Each layer of the image can be viewed in Docker using simple commands (which we will be learning shortly).
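
One such command is docker history, which lists the layers of an image (nginx is used here only as an example):

    docker pull nginx
    docker history nginx   # one row per layer, showing the instruction that created it and its size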

When Docker opens a container from an image, a writable layer is created on top of the image. This new writable layer is called the container layer, because its purpose is to hold any changes made to the container while it is running. The container layer records new files, modified files and deletions. This allows the container to be customized rather than simply run as a static application. Since these changes are saved as a unique layer on that particular instance of the container, multiple containers can run from the same underlying image yet behave differently because of what has happened in each container layer.
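
You can see what has accumulated in a container’s writable layer with docker diff (here applied to the web container started in the earlier example):

    docker diff web   # lists files Added (A), Changed (C) and Deleted (D) since the container started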

So, a Docker container is a running instance of a Docker image. Docker images are files made up of several layers in which all the information necessary to run a container is in place. If an image makes a container then the question that we still have left to answer is: What makes an image? For that, we need to talk about Dockerfiles.

What Are Dockerfiles?

A Dockerfile is a text document that contains all of the commands a user needs in order to assemble an image. A Dockerfile lets Docker build images automatically by following the commands in the file. Say you have grabbed a Docker image off Docker Hub. When you launch that image, you will open up a corresponding container. But say you want to rebuild that image, with the same setup, on several machines. Doing this by hand can be a bit of a hassle. Or say you downloaded an image for Ubuntu that you need for development, but you want to modify it to upgrade some of the software or add extra packages that your project requires. In this case, you could go ahead and manually edit the image, but if you have more than one image to work with, this again becomes a hassle. For all of these tasks, a Dockerfile can be used to quickly build the same image multiple times and save you from having to do it yourself.
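
As a sketch, building and reusing an image from a Dockerfile looks like this (the my-app tag is only illustrative):

    docker build -t my-app:1.0 .   # build an image from the Dockerfile in the current directory
    docker run -d --name my-app-1 my-app:1.0
    docker run -d --name my-app-2 my-app:1.0   # as many identical containers as you need, from one image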

Basically, the Dockerfile serves as the set of instructions that Docker uses to build an image. If we look at that shipping metaphor again, there is an important difference between a cargo ship and Docker: namely, it is a really big deal when you lose a container off a cargo ship, and the metaphor doesn’t fit Dockerfiles very well either. Another way of looking at it is to think of Docker containers like plants. When a plant dies, you can plant a new seed in order to replace it. The rest of the pot (the dirt and soil) remains the same, and you will end up with a nearly identical plant. The grown plant is the Docker container, the sprout is the image, and that leaves the Dockerfile as the seed. By using the Dockerfile, you get the images which grow into the plants (Docker containers). While you could just use an image, using a Dockerfile gives you the advantage of ensuring that your build uses the latest available versions of the software in question.

Let’s look at the keywords that are used in a Dockerfile; a short example Dockerfile that puts several of them together follows the list.

  • ADD: This copies files from a source location on the host system and adds them into the container’s filesystem at the destination that has been set out.
  • ARG: Similar to ENV, this defines variables that users can then pass to the builder. ENV defined variables, however, will always override an ARG instruction of the same name.
  • CMD: This sets the default command to execute when a container is started from the image (it can be overridden at run time).
  • COPY: This copies files (or directories) from a specified location and then adds them to the filesystem of the image.
  • ENTRYPOINT: This designates an application to be used every time a new container is created with the image.
  • ENV: This is used to set environment variables which can be used to control how an application runs or to configure data locations. ENV variables can also be specified for later use within the Dockerfile itself. ENV set values will persist when a container is run from the image whereas ARG variables are only available during the build of the Docker image.
  • EXPOSE: This informs Docker that the container listens on the specified network port and is used to allow networking between the container and the world outside. (Ports still need to be published at run time, for example with docker run -p, to be reachable from the host.)
  • FROM: This simply defines the base image that is being used to begin the build process.
  • HEALTHCHECK: This tells Docker how to test a container to check if it is still functioning properly. When a container is checked and passes, it is healthy. If a container fails a certain number of checks in a row then it becomes unhealthy.
  • LABEL: This adds metadata to an image.
  • MAINTAINER: This defines a full name and an email address for the creator of the image. (It has since been deprecated in favor of using a LABEL.)
  • ONBUILD: This adds an instruction to the image to be executed at a later time when the image is used as the base of another build. Any build instruction can be set as a trigger.
  • RUN: This executes a command in a new layer on top of the current image during the build and is the primary way to install software into an image.
  • SHELL: This allows you to override the default shell that is used for shell commands.
  • STOPSIGNAL: This is used to set the system call signal that is sent to the container in order to tell it to exit. By default, when you tell a container to stop, it is sent a signal then given a short period to exit gracefully before sending a stronger signal to kill the container. Using STOPSIGNAL allows you to override the default signal to set your own.
  • USER: Sets the user (and optionally the group) used when running the container and for any RUN, CMD and ENTRYPOINT instructions that follow.
  • VOLUME: Creates a mount point in the image and marks it as holding externally mounted volumes, such as a directory shared with the host.
  • WORKDIR: Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it.
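
Putting several of these keywords together, here is a short, illustrative Dockerfile; the base image is real, but the application files, port and version are stand-ins:

    FROM ubuntu:22.04
    LABEL description="Example image for the keywords above"
    ARG APP_VERSION=1.0
    # Values set with ENV remain available when the container runs
    ENV APP_HOME=/opt/app APP_VERSION=${APP_VERSION}
    WORKDIR ${APP_HOME}
    # Install a runtime and copy in the (hypothetical) application files
    RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*
    COPY app/ ${APP_HOME}/
    # Document the listening port and drop root privileges
    EXPOSE 8000
    USER nobody
    # Periodically check that the (hypothetical) application is still healthy
    HEALTHCHECK CMD python3 healthcheck.py || exit 1
    # ENTRYPOINT plus CMD form the default process: python3 server.py
    ENTRYPOINT ["python3"]
    CMD ["server.py"]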
