Docker is an application that allows deploying programs inside sandboxed packages called containers, which are far more efficient than commonly used virtual machines. Docker was created in France in 2013. The official website of the project is www.docker.com.
Docker allows a user to create a sandbox container that contains the application with all the required dependencies. The container may be later used for running the prepared software multiple times, or for future software development.
Docker may be used for creating and managing distributed software systems, because the user can relatively quickly modify the application by changing the containers that form its services and processes. By adding new containers to the network, the user can easily improve the performance and effectiveness of the produced system. Docker containers may run on many physical or virtual machines, and their internal environment is not affected by their hosts' configurations.
Software containers are not purely cryptographic tools. However, due to the portability, efficiency and flexibility of sandboxes, they are a great way of deploying and testing security solutions, and of performing all types of software operations.
At present, several other container solutions exist, but Docker is definitely one of the most popular. It is worth mentioning that many popular cloud service providers, like Amazon or Microsoft, have added support for Docker images.
The overhead caused by the additional layers Docker adds is much smaller than the cost of running a whole virtual machine. Instead of starting another fully operational operating system, Docker containers use the low-level functionalities of the host, modifying only the necessary functionalities located in the upper layers of the host system.
At first, Docker was available only for Linux operating systems, but over time support for Windows was added. Docker uses modern functionalities available in operating systems that provide various types of resource isolation. For example, when running on Linux, Docker takes advantage of kernel features, like namespaces, cgroups and the aufs file system, as well as virtualization interfaces (libcontainer, libvirt, and LXC).
The application that runs inside Docker is isolated from the host operating system in terms of the file system, other processes and users, CPU and memory, network interfaces, and input/output devices.
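These isolation properties can be controlled per container. The following sketch shows a few of the corresponding docker run options; myimage is a placeholder image name, and the limit values are only examples:

```shell
# Start a container with explicit resource and isolation settings
# (myimage is a placeholder; adjust the values to your needs).
docker run -d \
  --memory 256m \
  --cpus 1.0 \
  --network none \
  --read-only \
  myimage
```

Here --memory and --cpus limit the container's memory and CPU share, --network none removes network interfaces, and --read-only mounts the container's root file system read-only.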
One of the reasons for Docker's popularity is its simplicity of use. After creating a Docker image, the user can carry out the work using a few simple commands.
A Docker image is the package containing the application which is going to be used, together with all its dependencies and configuration parameters. Each time an image is started, Docker creates a new container and initiates it with fresh parameters. Of course, a number of separate containers based on the same image can be created and run at the same time. Every created container receives its own unique ID number.
- docker pull image_name: downloads the specified image from a Docker registry.
- docker images: lists all available images.
- docker run image_name: starts the specified image, and performs the predefined operations. Each time the run command is used, a new container is created.
- docker ps: lists all currently running containers.
- docker stop container_id: stops the specified container.
- docker rm container_id: removes the specified container.
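The commands above can be combined into a typical workflow. In this sketch, hello-world is a small public demo image, and <container_id> stands for the ID printed by docker ps:

```shell
docker pull hello-world      # download the image from the registry
docker images                # the image should now be listed
docker run hello-world       # create and start a new container
docker ps -a                 # list containers (-a includes stopped ones)
docker stop <container_id>   # stop the container if it is still running
docker rm <container_id>     # remove the container
```

Note that hello-world exits on its own, so docker stop matters mainly for long-running images.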
Creating Docker images
At present, hundreds of Docker images are available in public repositories. They can be downloaded with the docker pull command, and the user can then work with the containers created from them.
It is also possible to create custom Docker images by modifying existing ones. This may be easily achieved by writing a customized Dockerfile.
A Dockerfile usually consists of just a few lines, and each line contains a keyword and a corresponding value. The first line should specify the parent image which is going to be modified (keyword: FROM). The next lines contain additional dependencies that should be loaded into the image (keyword: ADD), the scripts and applications that should be installed or run (keyword: RUN, CMD), the ports that should be exposed (keyword: EXPOSE), and so on.
Given the Dockerfile, a new Docker image is created with the docker build command, which takes as parameters a target image name (combined with an optional user and a tag) and the location of the Dockerfile.
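The keywords above can be illustrated with a minimal Dockerfile. The sketch below writes one and shows the corresponding build command; all image, file, and user names are placeholders:

```shell
# Write a minimal Dockerfile (all names below are placeholders).
cat > Dockerfile <<'EOF'
# Parent image which is going to be modified
FROM ubuntu:22.04
# Load additional dependencies into the image
ADD app.tar.gz /opt/app
# Scripts and applications installed or run during the build
RUN apt-get update && apt-get install -y curl
# Port that should be exposed
EXPOSE 8080
# Default command executed when a container starts
CMD ["/opt/app/start.sh"]
EOF

# Build the image from the Dockerfile in the current directory:
#   docker build -t myuser/myapp:1.0 .
```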
The created Docker image can be uploaded to the specified registry by the command:
docker push image_name
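Pushing usually requires the image name to be qualified with the registry user (or registry address); a hypothetical sequence, with myapp and myuser as placeholders:

```shell
# Tag a local image with a repository-qualified name, then upload it
# (docker login may be required first).
docker tag myapp:1.0 myuser/myapp:1.0
docker push myuser/myapp:1.0
```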
It is possible to create multiple Docker containers that are connected via a network and able to exchange data. Generally, it is recommended to create many Docker images, each of them running just one service, and to let the resulting containers work together within one network.
A Docker network can be created with the command:
docker network create network_name
After that, the user can specify which network a Docker container should use by adding the --net parameter to the run command. To make several Docker containers cooperate, they should be run in the same Docker network.
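Putting the two commands together, a sketch of two cooperating containers in one network (the network, container, and image names are placeholders):

```shell
# Create a user-defined network and attach two containers to it.
docker network create app_net
docker run -d --name db  --net app_net postgres
docker run -d --name web --net app_net myuser/mywebapp
# Containers on app_net can now reach each other by name, e.g. host "db".
```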
An additional tool, called Docker Compose, was created to make communication between containers even easier. It allows packing multiple images into groups, defined by a docker-compose.yml file, and managing them with two commands: docker-compose up and docker-compose stop.
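A minimal docker-compose.yml for a two-service setup might look as follows; the image names and the port are placeholders:

```shell
# Write a minimal docker-compose.yml defining two cooperating services.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: postgres
  web:
    image: myuser/mywebapp
    ports:
      - "8080:8080"
    depends_on:
      - db
EOF

# Manage the whole group with:
#   docker-compose up
#   docker-compose stop
```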