Docker Tutorial for Beginners

Hi, welcome to my channel industry 4.0. In this video, I shall provide a detailed tutorial for beginners to understand the Docker software. Please like & share the video and subscribe to my channel. So, let’s start.
What is Docker?
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.
Docker, a subset of the Moby project, is a software framework for building, running, and managing containers on servers and the cloud. The term "docker" may refer to either the tools (the commands and a daemon) or to the Dockerfile file format.
It used to be that when you wanted to run a web application, you bought a server, installed Linux, set up a LAMP stack, and ran the app. If your app got popular, you practiced good load balancing by setting up a second server to ensure the application wouldn't crash from too much traffic.
Times have changed, though, and instead of focusing on single servers, the Internet is built upon arrays of inter-dependent and redundant servers in a system commonly called "the cloud". Thanks to innovations like Linux kernel namespaces and cgroups, the concept of a server could be removed from the constraints of hardware and instead became, essentially, a piece of software. These software-based servers are called containers, and they're a hybrid mix of the Linux OS they're running on plus a hyper-localized runtime environment (the contents of the container).
History
Before going in depth, let us look at a little history of Docker.

Docker Inc. was founded by Kamel Founadi, Solomon Hykes, and Sebastien Pahl during the Y Combinator Summer 2010 startup incubator group and launched in 2011. The startup was also one of the 12 startups in Founder's Den's first cohort.[44] Hykes started the Docker project in France as an internal project within dotCloud, a platform-as-a-service company.
Docker debuted to the public in Santa Clara at PyCon in 2013. It was released as open-source in March 2013.[20] At the time, it used LXC as its default execution environment. One year later, with the release of version 0.9, Docker replaced LXC with its own component, libcontainer, which was written in the Go programming language.
In 2017, Docker created the Moby project for open research and development.
Background
Now, let us understand the background of Docker.
Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.
Understanding containers
Container technology can be thought of as three different categories:
Builder: a tool or series of tools used to build a container, such as distrobuilder for LXC, or a Dockerfile for Docker.
Engine: an application used to run a container. For Docker, this refers to the docker command and the dockerd daemon. For others, this can refer to the containerd daemon and relevant commands (such as podman.)
Orchestration: technology used to manage many containers, including Kubernetes and OKD.
Containers often deliver both an application and configuration, meaning that a sysadmin doesn't have to spend as much time getting an application in a container to run compared to when an application is installed from a traditional source. Docker Hub and Quay.io are repositories offering images for use by container engines.
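To make the Builder category above concrete, here is a minimal, hypothetical sketch (the script hello.sh and the image tag my-hello are invented for this example, and hello.sh is assumed to exist in the build directory): a Dockerfile is written, then built into an image.
$ cat > Dockerfile <<'EOF'
# start from the busybox base image, copy in a script, and run it by default
FROM busybox
COPY hello.sh /hello.sh
CMD ["/bin/sh", "/hello.sh"]
EOF
$ sudo docker build -t my-hello .
The docker build command reads the Dockerfile in the current directory (.) and produces an image tagged my-hello, which an engine such as docker can then run.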
The greatest appeal of containers, though, is their ability to "die" gracefully and respawn when load balancing demands it. Whether a container's demise is caused by a crash or because it's simply no longer needed because server traffic is low, containers are "cheap" to start, and they're designed to seamlessly appear and disappear. Because containers are meant to be ephemeral and to spawn new instances as often as required, it's expected that monitoring and managing them is not done by a human in real time, but is instead automated.
Why use Docker
One of the great things about open source is that you have choice in what technology you use to accomplish a task. The Docker engine can be useful for lone developers who need a lightweight, clean environment for testing, but without a need for complex orchestration. If Docker is available on your system and everyone around you is familiar with the Docker toolchain, then Docker Community Edition (docker-ce) is a great way to get started with containers.
If Docker Community Edition is unavailable or is unsupported, then Podman is a wise option. The effort to ensure open standards prevail is ongoing, so the important long-term strategy for your container solution should be to stick with projects that respect and foster open source and open standards. Proprietary extras may seem appealing at first, but as is usually the case, you lose the flexibility of choice once you commit your tools to a product that fails to allow for migration. Containers can be liberating, as long as they're liberated.

Operation
Now we will discuss the operation of Docker.
Docker can package an application and its dependencies in a virtual container that can run on any Linux, Windows, or macOS computer. This enables the application to run in a variety of locations, such as on-premises or in a public or private cloud (see decentralized computing, distributed computing, and cloud computing). When running on Linux, Docker uses the resource isolation features of the Linux kernel (such as cgroups and kernel namespaces) and a union-capable file system (such as OverlayFS) to allow containers to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines. Docker on macOS uses a Linux virtual machine to run the containers.
Because Docker containers are lightweight, a single server or virtual machine can run several containers simultaneously. A 2018 analysis found that a typical Docker use case involves running eight containers per host, and that a quarter of analyzed organizations run 18 or more per host.
The Linux kernel's support for namespaces mostly isolates an application's view of the operating environment, including process trees, network, user IDs and mounted file systems, while the kernel's cgroups provide resource limiting for memory and CPU. Since version 0.9, Docker includes its own component (called "libcontainer") to use virtualization facilities provided directly by the Linux kernel, in addition to using abstracted virtualization interfaces via libvirt, LXC and systemd-nspawn.
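As a small example of those cgroup limits in action, docker run exposes them directly through flags such as --memory and --cpus (the image and values here are illustrative):
$ docker run -it --memory=256m --cpus=1 busybox /bin/sh
# processes inside this container are capped at 256 MB of RAM and one CPU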
Docker implements a high-level API to provide lightweight containers that run processes in isolation. Docker containers are standard processes, so it is possible to use kernel features to monitor their execution—including for example the use of tools like strace to observe and intercede with system calls.
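For instance, one plausible way to observe a container's process from the host is to look up its PID and attach strace to it (mycontainer is a placeholder name, and strace must be installed on the host):
$ sudo docker inspect --format '{{.State.Pid}}' mycontainer
# attach strace to the PID printed above:
$ sudo strace -p <PID>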
Components
Next, you must know the components of Docker.
The Docker software as a service offering consists of three components:
Software: The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client program, called docker, provides a command-line interface (CLI) that allows users to interact with Docker daemons.
Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.
A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.
Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. The main public registry is Docker Hub. Docker Hub is the default registry where Docker looks for images. Docker registries also allow the creation of notifications based on events.
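As a hedged sketch of the pull/push workflow (myregistry.example.com and the myteam namespace are hypothetical, and pushing to a registry typically requires docker login first):
$ docker pull ubuntu:latest
$ docker tag ubuntu:latest myregistry.example.com/myteam/ubuntu:latest
$ docker push myregistry.example.com/myteam/ubuntu:latest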
Tools
Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application's services and performs the creation and start-up process of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address one container. The docker-compose.yml file is used to define an application's services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more. The first public beta version of Docker Compose (version 0.0.1) was released on December 21, 2013. The first production-ready version (1.0) was made available on October 16, 2014.
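As an illustrative sketch (the web and redis service names are invented for this example, and the build: . line assumes a Dockerfile in the current directory), a minimal docker-compose.yml and the single command that starts everything might look like this:
$ cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    build: .
    ports:
      - "8000:80"
  redis:
    image: redis
EOF
$ docker-compose up -d
Running docker-compose down afterwards stops and removes the containers the file defined.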
Docker Swarm provides native clustering functionality for Docker containers, which turns a group of Docker engines into a single virtual Docker engine. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. The docker swarm CLI utility allows users to run Swarm containers, create discovery tokens, list nodes in the cluster, and more. The docker node CLI utility allows users to run various commands to manage nodes in a swarm, for example, listing the nodes in a swarm, updating nodes, and removing nodes from the swarm. Docker manages swarms using the Raft consensus algorithm. According to Raft, for an update to be performed, the majority of Swarm nodes need to agree on the update.
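A minimal, hedged sketch of Swarm mode (the service name web and the nginx image are illustrative):
$ docker swarm init
$ docker service create --name web --replicas 3 nginx
$ docker node ls
$ docker service ls
The three replicas are scheduled across the swarm's nodes, and the Raft-backed managers keep the service at its desired scale.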
Docker Volume facilitates the independent persistence of data, allowing data to remain even after the container is deleted or re-created.
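A short sketch of volume-backed persistence (the volume name mydata is illustrative):
$ docker volume create mydata
$ docker run --rm -v mydata:/data busybox sh -c "echo hello > /data/hello"
$ docker run --rm -v mydata:/data busybox cat /data/hello
# the file written by the first container is still there for the second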
Alternatives to Docker
Linux containers have facilitated a massive shift in high-availability computing. There are many toolsets out there to help you run services, or even your entire operating system, in containers. The Open Container Initiative (OCI) is an industry standards organization that encourages innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, you have a choice when choosing a container toolchain, including Docker, CRI-O, Podman, LXC, and others.


Container utilities
By design, containers can multiply quickly, whether you're running lots of different services or you're running many instances of a few services. Should you decide to run services in containers, you probably need software designed to host and manage those containers. This is broadly known as container orchestration. While Docker and other container engines like Podman and CRI-O are good utilities for container definitions and images, it's their job to create and run containers, not help you organize and manage them. Projects like Kubernetes and OKD provide container orchestration for Docker, Podman, CRI-O, and more.
When running any of these in production, you may want to invest in support through a downstream project like OpenShift (which is based on OKD).
What you need to know about Docker community edition
The open source components of Docker are gathered in a product called Docker Community Edition, or docker-ce. These include the Docker engine and a set of Terminal commands to help administrators manage all the Docker containers they are using. You can install this toolchain by searching for docker in your distribution's package manager.
Why is Docker Used in DevOps?
Now, we will discuss why Docker is used in DevOps!
Well, Docker has a "run" command through which a container gets created. A container's life is bound to the life of its process: as soon as the process finishes, the container terminates. DevOps training can also give you a fair idea of the implementation of Docker. The following command can help you see what commands are available in Docker:
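$ docker --help
# prints usage plus the full list of available Docker commands and options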
1). Real-Time Usage Cases of Docker
Now that you know what Docker is and how it is used in DevOps, let us look at a few real-time uses of it. The following are a few real-time usages of Docker.
2). Environment Standardization
Docker introduced environment standardization by minimizing the inconsistency between different environments. It makes the development environment repeatable, so companies can ensure that every team member works in the same environment. Leading companies use Docker for development, testing, and production.
Above all, Docker Compose configuration files can be shared so that each team member can create an identical environment of their own.
3). Consistent and Faster Configuration
Docker configuration files are simple. You need to put your configuration into the code and deploy it. Docker supports a wide variety of environments, enabling you to use the same configuration repeatedly.
The most advantageous part of using Docker is accelerated project setup for new developers: you can keep the development environment the same for every developer.
Once that consistency kicks in, you can skip the time-consuming environment setup and let new developers start programming right away. Above all, Docker saves time on deployment documentation and preparation.
4). Better Disaster Recovery
Disaster never announces itself before arriving. However, Docker provides for backups in times of disaster or disruption, so lost data can be retrieved later if there is a serious issue. For instance, if there is a hardware failure, a company might otherwise lose its data, but Docker makes it easy to replicate images and volumes to new hardware.
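One plausible way such replication works in practice is by exporting images to an archive and loading them on the replacement hardware (my-app is a placeholder image name, not from this tutorial):
$ docker save -o my-app.tar my-app:latest
# copy my-app.tar to the new machine, then:
$ docker load -i my-app.tar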
5). Helps in Better Adoption of DevOps
The DevOps community is gradually using Docker to standardize the environment. Docker keeps the production environment consistent with the testing environment.
Standardization also plays an instrumental role in automation. An ever-changing environment can frustrate team members, whereas a standard development environment removes that friction.
List of Docker Commands with Example
We have explained Docker in depth in this video. By now, you must have a complete idea of what Docker is and its real-time usage.
Now let us list a few Docker commands with examples. Using the installation wizard, you can install Docker on any machine; the installer can be found on Docker's community page.
You can also refer to the Docker command cheat sheet as a handy reference. On Linux systems, Docker is usually available through the distribution's package manager. The following commands install Docker on Fedora and will help you understand how to use Docker.
1). Command to Install the Process
$ sudo dnf install docker
2). Command to Start the Process
$ sudo systemctl start docker
3). Command to Enable the Process
$ sudo systemctl enable docker
The same steps apply to other Linux distributions, using that distribution's package manager.
4). How to Create a Container?
$ sudo docker run -it --name snooze busybox /bin/echo "Hello world"
Name: We use Docker to create containers, and users can give each container a new and unique name (here, snooze). Docker assigns a random default name if none is given.
It (-it): Stands for interactive terminal. The terminal gets connected to a virtual TTY, so the running process can interact with the input and output terminal.
Busybox: The base image used to create the container. An image is like a zip file that contains the files necessary to deploy and run the application.
Echo: The command executed inside the container; it is one of the utilities bundled in the busybox image.
5). Command to See the List of Cached Images
In Docker, when images are used for the first time, they are downloaded and cached to speed things up. To check the local images, we can use the following command:
$ sudo docker images
6). Command to See Background Running Containers
The status of any container running in the background can be checked with the following command:
$ sudo docker ps
7). Command to Kill Running Containers
The following command can be used to stop a container:
$ sudo docker stop [name of your container]
#example
$ sudo docker stop snooze
This command stops a running container, but the stopped container is kept in the cache. It can be started again with the following command:
$ sudo docker start snooze
8). The Command to Check Container Existence
The existence of any container can be checked with the following command:
$ sudo docker ps
The above command lists all running containers. To display both running and stopped containers, use the following command:
$ sudo docker ps -a
9). Mounting Process
The -v parameter is used to mount a folder from the host into a folder inside the container. First, we create a file:
$ echo 'Hello world' > hello
Using busybox's built-in text editor (vi), we can try to open the file with the following command:
$ sudo docker run -it busybox vi hello
Here the file will appear empty, because vi runs as an isolated process and cannot access any file outside of the container. In such a situation, we have to mount the desired folder, which is done with the following command:
# the :z in /app:z is for SELinux; non-SELinux systems can ignore it
$ sudo docker run -it -v "$(pwd)":/app:z busybox vi /app/hello
The above command mounts the current working directory ($(pwd)) to the /app folder of the container; if the target folder does not exist inside the container, it will be created. Note the following about this command:
-v option: It mounts over any pre-existing folder at that path in the container; if the folder already exists, it is shadowed by the mounted one.
The container can now access part of your host system, and host resources are used through this mount.
Because the changes are made to the mounted host folder, they persist even after the container is killed.
What is the Docker Ops Perspective?
The Ops Side will include:
Download an image
Start a new container
Log in to the new container
Run a command inside of it
And then destroy it.
We get two major components when we install Docker.
the Docker client
the Docker daemon
The daemon implements the Docker Remote API. The client talks to the daemon through a local IPC/Unix socket at /var/run/docker.sock in a default Linux installation; on Windows this happens by means of a named pipe at npipe:////./pipe/docker_engine. We can test that the client and daemon are running and can talk to each other with the docker version command.
$ docker version
Client:
Version: 17.05.0-ce
API version: 1.29
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:54 2017
OS/Arch: linux/amd64
Server:
Version: 17.05.0-ce
API version: 1.29 (minimum version 1.12)
Go version: go1.7.5
Git commit: 89658be
Built: Thu May 4 22:10:54 2017
OS/Arch: linux/amd64
Experimental: false
If we get a response back from both the Client and Server components, we should be good to go. If we are using Linux and get an error response from the Server component, try the command again with sudo in front of it: sudo docker version. If it works with sudo, we will need to add our user account to the local docker group.
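On most Linux systems that step is done by adding the user to the docker group; a common sketch (assuming a standard installation) is:
$ sudo usermod -aG docker $USER
# log out and back in for the new group membership to take effect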
Images
A Docker image is an object that contains an OS filesystem and an application. If we work in operations, it is like a virtual machine template; if we are working as developers, we may think of an image as a class. In the Docker world, an image is effectively a stopped container. Run the docker image ls command on our Docker host.
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
If we are working from a freshly installed Docker host, it has no images, and the output will look like the above. Getting images onto our Docker host is called "pulling". If we are following along on Linux, pull the ubuntu:latest image; if we are following along on Windows, pull the microsoft/powershell:nanoserver image. An image holds enough of an operating system (OS), in addition to all the code and dependencies, to run whatever application it is designed for.
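The pull itself looks like this in the Linux example (the Windows image follows the same pattern):
$ docker image pull ubuntu:latest
# verify the image arrived
$ docker image ls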
Containers
We can use the docker container run command to launch a container from an image.
For Linux:
$ docker container run -it ubuntu:latest /bin/bash
root@6dc20d508db0:/#
For Windows:
> docker container run -it microsoft/powershell:nanoserver PowerShell.exe
Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS C:>
We should note that the shell prompt has changed in each instance. This is because our shell is now attached to the shell of the new container – we are literally inside the new container! The docker container run command tells the Docker daemon to start a new container. The -it flags tell the daemon to make the container interactive and to attach our current terminal to the shell of the container. Next, the command tells Docker that we want the container to be based on the ubuntu:latest image (or the microsoft/powershell:nanoserver image if we are following along with Windows). Lastly, we tell Docker which process we want to run inside the container: a Bash shell for the Linux example, PowerShell for the Windows example. Run a ps command from inside of the container to list all running processes.
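For example, from inside the Linux container (the container ID in the prompt will differ on your machine; ps is present in the standard ubuntu image):
root@6dc20d508db0:/# ps -elf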

Please note how many more processes are running on our Docker host compared to inside the container. Press Ctrl-PQ to exit the container; doing this from inside a container exits us from the container without killing it. We can see all running containers on our system using the docker container ls command.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
e2b69eeb55cb ubuntu:latest "/bin/bash" 6 mins Up 6 min vigilant_borg
The output above displays a single running container – the one we created earlier. The presence of our container in this output shows that it is still running. We can also see that it was created 6 minutes ago and has been running for 6 minutes.
Attaching to running containers
We can attach our shell to a running container with the docker container exec command. As the container from the previous steps is still running, let's connect back to it.
Linux example:
This example references a container called vigilant_borg. The name of our container will be different, so remember to substitute vigilant_borg with the name or ID of the container running on our Docker host.
$ docker container exec -it vigilant_borg bash
root@e2b69eeb55cb:/#
Windows example:
This example references a container called pensive_hamilton. The name of our container will be different, so remember to substitute pensive_hamilton with the name or ID of the container running on our Docker host.
> docker container exec -it pensive_hamilton PowerShell.exe
Windows PowerShell
Copyright (C) 2016 Microsoft Corporation. All rights reserved.
PS C:>
Please note here that our shell prompt has changed again. We are back inside the container. The format of the docker container exec command is: docker container exec <options> <container-name or container-id> <command>. We used the -it options to attach our shell to the container's shell, referenced the container by name, and told it to run the bash shell (PowerShell in the Windows example). We could just as easily have referenced the container by its ID. Exit the container again by pressing Ctrl-PQ. Our shell prompt should be back to our Docker host. Run the docker container ls command again to verify that our container is still running.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
e2b69eeb55cb ubuntu:latest "/bin/bash" 9 mins Up 9 min vigilant_borg
Now stop the container and destroy it using the docker container stop and docker container rm commands. Remember to substitute the names/IDs of our own containers.
$ docker container stop vigilant_borg
vigilant_borg
$ docker container rm vigilant_borg
vigilant_borg
Confirm that the container was successfully deleted by running another docker container ls command.
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
So, we are sure that you are now aware of the Docker basics!
How are Companies Adapting to Docker?
Companies have found ways to adapt to Docker. It is a technology dominantly used by frontline companies to make the environment convenient for developers. Having experienced the benefits of the platform, developers have gotten the hang of the tool and have a better experience with DevOps. Here are a few ways companies have found their way around the tool.
1). Company-wide Adoption is not Happening Overnight
The adoption is not happening overnight, and companies do not entertain unrealistic expectations. Initially, companies run a Docker project alongside a non-Docker project: Docker is used for development on small projects, and gradually it is adopted in bigger ones. Companies are progressively moving towards complete adoption of Docker.
2). Use Docker in Workflow and Deployment
Docker is not limited to deployment; it is also used in workflows during development. It helps set up the environment and saves time for new developers starting projects in their preferred programming language. Docker is used in different stages and gives developers a chance to try new technologies. It integrates with continuous deployment and allows collaboration by letting team members share Docker images. Companies use Docker to improve their workflow.
3). Providing Incentives for Using Docker
Initially, developers were sceptical about using the new technology. Companies adapted to Docker by providing incentives to developers, mostly encouraging them to use it for development, deployment, and production. Organic usage of Docker only comes when developers find that the new technology is helpful for their work.
4). A Proper Setup to Use Docker
Docker can simplify environment setup and configuration, but it is still a good idea to have Docker setup instructions. Many companies dedicate an in-house team to answer questions.