If you want hands-on, real-world experience understanding and deploying Docker containers, or if you have an idea you’d like to see containerized online, I suggest you get a Cloud VPS, for as little as €2.95 a month.

This article is a part of our complete series of articles on Docker. Click here to access the Free Series.

Docker’s Architecture

Docker uses a client-server architecture, which means the Docker daemon and the Docker client are separate binaries.

The client can communicate with separate daemons: you can use a single client to talk to different daemons and execute commands on each.

Daemons are responsible for all the heavy lifting, which includes:

  1. Building Docker Containers
  2. Running Docker Containers
  3. Distributing Docker Containers

As for how the client communicates with the daemon: it does so over a REST API. This is achieved using:

  1. Unix Sockets
  2. Network Interface

So when you type a command into your client, it is sent to the Docker daemon over the REST API, and the daemon in turn executes it.
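As a concrete sketch of that exchange: running a command such as docker ps makes the client issue an HTTP GET against the daemon’s REST API. The block below only prints the raw request the client would write to the default Unix socket; the curl one-liner in the comment (which needs a running daemon) shows how you could issue the same call yourself.

```shell
# Hedged sketch of the client-daemon exchange. With a running Docker daemon
# you could make the same REST call by hand (not executed here):
#
#   curl --unix-socket /var/run/docker.sock http://localhost/containers/json
#
# The raw HTTP request line the client sends over that socket:
request='GET /containers/json HTTP/1.1'
printf '%s\r\nHost: localhost\r\n\r\n' "$request"
```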

Docker Daemon (dockerd)

dockerd is the persistent process that manages Docker objects. It listens for Docker API requests and executes the commands it receives. A few of the objects it manages are:

  1. Images
  2. Containers
  3. Volumes
  4. Networks
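Each of these object types has a matching noun in the CLI, so you can ask the daemon to list what it manages. The loop below only prints the commands; actually running them requires a live Docker daemon.

```shell
# Each dockerd-managed object type maps to a CLI noun with an `ls` verb.
# Printed, not executed -- listing real objects needs a running daemon.
cmds=$(for object in image container volume network; do
    echo "docker $object ls"
done)
echo "$cmds"
```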

Docker Client

The Docker client is what the end user interacts with to type in commands. Say we execute the docker run command: from the CLI’s perspective it looks as though the client runs the command, but in fact the command is passed on to the daemon, which executes it.

Docker Registries

Just as we store our code on GitHub, we need a place to store our Docker images. We store these images in Docker registries.

Docker operates a public registry, Docker Hub, where users can upload their Docker images; many official images are maintained there as well. Out of the box, Docker does not come with any images. We will download them as we need them.
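When you pull an image without fully qualifying its name, Docker fills in the default registry (docker.io), the default namespace for official images (library), and the default tag (latest). The block below only constructs that expansion; the actual pull command in the comment needs Docker and network access.

```shell
# Image-reference expansion for an unqualified pull, e.g.:
#
#   docker pull hello-world     # needs Docker + network; not run here
#
# expands to registry/namespace/repository:tag with these defaults:
image="hello-world"
fullref="docker.io/library/${image}:latest"
echo "$fullref"
```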

Flow for a Docker Architecture

The docker build command instructs the Docker daemon to create an image (dotted line). A corresponding Dockerfile must be available for this. If you don’t want to build the image yourself, but rather load it from a repository on Docker Hub, the docker pull command is used (dashed line). If the Docker daemon is instructed to start a container via docker run, the background process first checks whether the corresponding container image is available locally. If it is, the container is executed (solid line). If the daemon cannot find the image, it automatically initiates a pull from the repository.
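The three entry points into that flow can be summarized as follows. The commands are printed rather than executed, since each needs a Docker installation, and the myimage tag is a hypothetical example.

```shell
# The three commands driving the build/pull/run flow ("myimage" is a
# hypothetical tag; nothing is executed against a daemon here):
flow='docker build -t myimage .   # daemon builds an image from ./Dockerfile
docker pull ubuntu:22.04          # daemon downloads an image from a registry
docker run myimage                # daemon runs a container, pulling if absent'
echo "$flow"
```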

Docker Components


Images

Images are what everything in the practical world of Docker is built upon. You launch your containers from images. Images are the “build” part of Docker’s life cycle. Think of them as a step-by-step guide the system follows to get your desired container up and running. For example:
• Add a file.
• Run a command.
• Open a port.
You can think of images as the “source code” for your containers. They are highly portable and can be shared, stored, and updated. In this series, we’ll learn how to use existing images as well as build our own.
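Those three example steps map directly onto Dockerfile instructions. A minimal sketch (the file name, port, and base image are illustrative, not from the original):

```dockerfile
# Start from a base image
FROM ubuntu:22.04
# Add a file
COPY app.sh /app.sh
# Run a command
RUN chmod +x /app.sh
# Open a port
EXPOSE 8080
# What the container executes when started
CMD ["/app.sh"]
```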


Containers

Docker helps you build and deploy containers, within which you can package your applications and services. As we’ve just learned, containers are launched from images and can contain one or more running processes. You can think of images as the building or packaging aspect of Docker, and containers as the running or execution aspect.
A Docker container is:
• An image format.
• A set of standard operations.
• An execution environment.
Docker borrows the idea of the standard shipping container, used to move goods globally, as a model for its containers. But rather than shipping goods, Docker containers ship software.

Each container holds a software image — its ‘cargo’ — and, like its physical counterpart, allows a set of operations to be performed. For instance, it can be created, started, stopped, restarted, and destroyed.
Like a shipping company, Docker doesn’t care about the contents of the container when performing these actions; it makes no difference whether a container is a web server, a database, or an application server. Each container can be loaded exactly like any other container.
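Those lifecycle verbs map one-to-one onto CLI subcommands. The block prints them rather than running them: they need a live daemon, and web is a hypothetical container name.

```shell
# Container lifecycle verbs as CLI subcommands ("web" is a hypothetical
# container name; nothing is executed against a daemon here):
lifecycle='docker create --name web nginx   # created
docker start web                            # started
docker stop web                             # stopped
docker restart web                          # restarted
docker rm web                               # destroyed'
echo "$lifecycle"
```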

Docker engine

The heart of the Docker project is the Docker engine. This is an open-source client-server application, the current version of which is available to users on all established platforms.

The basic architecture of the Docker engine can be divided into three components: a daemon with server functions, a programming interface (API) based on the REST paradigm (Representational State Transfer), and the operating system’s terminal (command-line interface, CLI) as the user interface (client).

  • The Docker daemon: The Docker engine uses a daemon process as its server. The Docker daemon runs in the background on the host system and provides central control of the Docker engine. In this role, it creates and manages all images, containers, and networks.
  • The REST API: The REST API specifies a set of interfaces that allow other programs to communicate with the Docker daemon and give it instructions. One of these programs is the operating system’s terminal.
  • The terminal: Docker uses the operating system’s terminal as its client program. It interacts with the Docker daemon via the REST API and lets users control the daemon through scripts or direct input.

With Docker, users start, stop, and manage software containers directly from the terminal. The daemon is addressed using the docker command together with instructions such as build, pull (download), or run (start). Client and server can live on the same system; alternatively, users can address a Docker daemon on another system. Depending on the type of connection to be established, communication between client and server takes place via the REST API, over a UNIX socket or a network interface.
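The client finds its daemon through the DOCKER_HOST environment variable: the default is the local Unix socket, while a tcp:// address points it at a remote daemon. The block only prints the two addresses; 192.0.2.10 is a documentation-only example host and nothing is contacted.

```shell
# Where the client looks for a daemon. The Unix socket is the real
# default; the TCP address is a hypothetical remote daemon (port 2375
# is the conventional unencrypted Docker port). Nothing is contacted.
default_host="unix:///var/run/docker.sock"
remote_host="tcp://192.0.2.10:2375"
echo "local:  $default_host"
echo "remote: $remote_host"
```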

Next, let’s start by installing Docker and ride this ride toward Docker expertise.
