Learn Docker in Just 30 Minutes

Preamble

This guide is designed for DevOps and SysOps professionals who want an outline of Docker. Although it is not a definitive work on the subject, it covers the essential concepts of Docker technology. Note that a fundamental understanding of Linux/UNIX is necessary before stepping into Docker. Use Ubuntu 14.04 LTS to follow along.

1. INTRODUCTION

Containerization:

Containerization is the process of packaging everything a piece of software needs to run: code, runtime, system tools, system libraries, and settings. Because the package is self-contained, a containerized application runs the same way regardless of the environment it is deployed to.

Virtualization:

Virtualization is the creation of a virtual version of physical components such as computer hardware, storage, and networks.

Containerization vs. Virtualization:

Generally, a virtual machine emulates an entire server. In a typical virtualized server environment, every virtual machine guest contains a whole OS in addition to the libraries, drivers, and binaries, and finally the actual application. Every virtual machine runs on top of a hypervisor, which operates the physical server hardware by running on a host operating system. This duplication across virtual machines wastes server memory and limits the number of virtual machines one can launch on a server.

Containerization instead permits the virtual instances to share a single host OS along with its libraries, drivers, and binaries. This approach minimizes redundancy, since every container retains only the application and its relevant binaries or libraries. Containers reuse the same host OS over and over, rather than installing and deploying an OS for every guest virtual machine. This is commonly referred to as operating-system-level virtualization. The role of the hypervisor is taken over by a containerization engine, such as Docker, which is installed on the host OS.

2. CONCEPTS

Image

A Docker Image is an executable package that contains everything needed to run an application: the code, a runtime, libraries, environment variables, and configuration files.

Container

A Docker Container runs natively on Linux and shares the kernel of the host machine with other containers. It runs as a discrete process and takes no more memory than any other executable, which is what makes it lightweight.

Compose

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, we use a YAML (YAML Ain't Markup Language) file to configure the application's services. Then, with a single command, we can create and start all the services from that configuration.
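As a minimal sketch (the service layout and the 8080:5000 port mapping are assumptions, not taken from this guide's repo), a docker-compose.yml pairing the app and Redis images built later in this guide might look like this:

version: "3"
services:
  app:
    image: demo/app:v1
    ports:
      - "8080:5000"   # host port 8080 to a hypothetical app port 5000
  redis:
    image: demo/redis:v1

$ sudo docker-compose up -d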

Machine

Docker Machine is a tool that lets the user install Docker Engine on virtual hosts and manage those hosts with docker-machine commands. We can start, stop, inspect, and restart a managed host, upgrade the Docker client and daemon, and configure a Docker client to communicate with the host.

Registry

Docker Registry is a stateless, highly scalable server-side application that stores Docker images and lets the user distribute them.
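For a quick illustration (this container is not part of the guide's setup), the open-source registry itself ships as an image on Docker Hub and can be run locally:

$ sudo docker run -d -p 5000:5000 --name registry registry:2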

Swarm

Docker Swarm is a group of machines that run Docker and are joined into a cluster. After joining, the user continues running the usual Docker commands, but they are now executed on the cluster by a Swarm manager. The machines in a Swarm may be both physical and virtual. Once joined to a Swarm, they are referred to as nodes.

3. INSTALLATION

Docker:

Docker comes in two editions:

1. Community Edition (CE)
2. Enterprise Edition (EE)

The installation process differs slightly depending on the operating system. It is advisable to remove any older Docker installations from the OS before installing the current Docker Community Edition (CE).

Docker is not a single program. It is a bundle of programs such as docker, dockerd, docker-containerd, docker-containerd-shim, docker-containerd-ctr, docker-init, docker-runc, docker-proxy, and docker-compose that come in one compact package. All Docker-related data such as images and containers is stored under paths like /var/lib/docker and /var/run/docker.
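On Ubuntu 14.04, which this guide targets, one hedged way to do both steps is Docker's convenience script (the package names in the remove command are the usual older ones; adjust them to whatever is actually installed on your system):

$ sudo apt-get remove docker docker-engine docker.io
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh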

4. PREPARING THE IMAGES

Necessity for the Base Image:

It is important to prepare a Base Image first; other customized images are then derived from it. All the needed primary packages should be installed and configured in the Base Image itself. Usually there are two ways to compose a new image. They are:

  • Manual
  • Dockerfile

Before we begin, we have to clone the GitHub repo onto the Linux system. Install Git on the system before executing the commands below.

$ cd /opt
$ sudo git clone https://github.com/cloud-works/kickstart-docker.git kickstart-docker
$ sudo chown -R $USER:$USER kickstart-docker
$ cd /opt/kickstart-docker

Building an Image Manually

We can build a new image manually by launching a container from an existing image and committing it. Here we build the Base image on top of the centos:latest image from Docker Hub.

$ sudo docker pull centos:latest
$ sudo docker run -t -i --rm centos:latest /bin/bash
$ yum install -y epel-release
$ yum update -y
$ yum install -y wget vim net-tools initscripts gcc make tar
$ yum install -y python-devel python-setuptools
$ easy_install supervisor
$ easy_install pip
$ mkdir /etc/supervisord.d
$ history -c

Then, open a fresh terminal window and use the still-running container to create the Base image, demo/base:v1. We have to use the CONTAINER ID (from docker ps) to commit the new image.

$ sudo docker ps
$ sudo docker commit -m "Installed packages for preparing Base image." -a "Truman Capote" <CONTAINER_ID> demo/base:v1
$ sudo docker images
Clean up the containers used to build the images.
$ sudo docker rm $(sudo docker ps -a -f status=exited -q)

Building an Image with a Dockerfile

Now we are going to build the App and Redis images. These images are built from the instructions in the Dockerfiles available in the cloned demo repository. Look at the Dockerfiles to see which packages are installed during the build process.

$ cd /opt/kickstart-docker

$ sudo docker build -t demo/app:v1 -f docker/app-dockerfile .

$ sudo docker build -t demo/redis:v1 -f docker/redis-dockerfile .

After the above commands are executed, the images should be in the system. Check that demo/app:v1 and demo/redis:v1 are listed.

$ sudo docker images

Clean up the containers used to build the images.

$ sudo docker rm $(sudo docker ps -a -f status=exited -q)
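The repo's actual Dockerfiles may differ; as a purely hypothetical sketch of the format, an app Dockerfile building on our Base image could look like this (the /opt/app path, the requirements.txt file, and port 5000 are all assumptions):

FROM demo/base:v1
# Copy the application code into the image (hypothetical path)
COPY app /opt/app
# Install Python dependencies with the pip we installed in the Base image
RUN pip install -r /opt/app/requirements.txt
# Hypothetical port the app listens on
EXPOSE 5000
CMD ["/usr/bin/supervisord"]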

Importing and Exporting the Image

It is possible to export an image to an archive and, conversely, import it back from that archive. If the images need to be ported between machines, we can use this export and import process. Additionally, we may use a central Docker registry to store the images.

$ sudo docker save demo/base:v1 > demo-base-v1.tar

$ sudo docker load < demo-base-v1.tar

To push the image to a remote Docker registry, use the command below.

$ sudo docker push <REGISTRY_ENDPOINT>/demo/base:v1

For the remote registry we can use services like AWS ECR or Docker Hub.
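Note that docker push expects the image name to carry the registry prefix, so the image usually has to be tagged first; <REGISTRY_ENDPOINT> stays a placeholder for your registry address:

$ sudo docker tag demo/base:v1 <REGISTRY_ENDPOINT>/demo/base:v1
$ sudo docker push <REGISTRY_ENDPOINT>/demo/base:v1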

5. LAUNCHING THE CONTAINER

Understanding the Configuration

Numerous configuration settings are available in Docker. We need to grasp some important Docker commands before we can use it efficiently. For the high-level commands, use the command below.

$ sudo docker help

It is worth knowing the primary options that govern the behaviour and lifecycle of a Docker container. These few configurations are crucial.

$ sudo docker run --help

 

Loading the Configuration Settings

First, let's understand how to load custom configurations and launch containers manually. This gives a feel for the flexibility and effectiveness of the configuration options.

The command below launches a container named demo-redis from the image demo/redis:v1, with host machine port 3000 mapped to container port 6379. Here we mount various host machine paths onto container paths. We use supervisord as the primary command. The -d flag runs the container detached, in the background. We could also allow one container to connect to another using the --link flag.

The command is:

$ sudo docker run -d -i -t --rm --name demo-redis -h demo-redis -p 3000:6379 --network demo-stack -v /opt/kickstart-docker/redis/data:/opt/redis/data -v /opt/kickstart-docker/redis/log:/opt/redis/log -v /opt/kickstart-docker/redis/conf/main.conf:/opt/redis/conf/redis.conf -v /opt/kickstart-docker/supervisor/conf/main.conf:/etc/supervisord.conf -v /opt/kickstart-docker/supervisor/conf/redis.ini:/etc/supervisord.d/redis.ini demo/redis:v1 /usr/bin/supervisord

Examine whether the container started appropriately using the command below.

$ sudo docker ps
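As a quick hedged check (this assumes the redis-cli client is available on the host, which this guide has not installed), Redis should answer a ping on the mapped host port:

$ redis-cli -h 127.0.0.1 -p 3000 ping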

Clean up the demo-redis container by stopping it and removing it.

$ sudo docker stop demo-redis

$ sudo docker rm demo-redis

6. LAUNCHING THE NETWORK

Next we have to create a new network so we can launch containers inside it. The command below generates a brand-new network for the containers. It creates a bridge network (not a host network) with the specified IP range, subnet configuration, and name.

$ sudo docker network create -d bridge --subnet=100.3.0.0/16 --ip-range=100.3.1.0/24 --gateway=100.3.1.0 demo-stack

When we launch containers within this network, they get IP addresses from the specified range. Use the command below to examine the freshly created network configuration.

$ sudo docker network inspect demo-stack

We can list all the available networks on the host with the command below. The newly created network should appear here.

$ sudo docker network ls
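To see the address range in action, one hedged check: launch a throwaway container from the Base image (which installed net-tools earlier, so ifconfig is available) inside the network and print its interface:

$ sudo docker run --rm --network demo-stack demo/base:v1 ifconfig eth0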

7. DEPLOYING THE APPLICATION

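With the images built and the network in place, deploying the application comes down to launching an app container alongside demo-redis in the same network. The sketch below is an assumption-laden outline, not the repo's exact procedure: the container port 5000, the host port 8080, and the app reaching Redis via the demo-redis hostname are all hypothetical.

$ sudo docker run -d -i -t --name demo-app -h demo-app -p 8080:5000 --network demo-stack demo/app:v1 /usr/bin/supervisord

$ sudo docker ps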

8. MANAGING THE SWARM CLUSTER

Create new Machines

The first step in deploying the Swarm cluster is to generate and launch virtual machines using the VirtualBox driver.

$ sudo docker-machine create --driver virtualbox --virtualbox-memory "1024" --virtualbox-disk-size "20000" --virtualbox-cpu-count "1" demo-mgr

$ sudo docker-machine create --driver virtualbox --virtualbox-memory "1024" --virtualbox-disk-size "20000" --virtualbox-cpu-count "1" demo-wkr

Inspect whether the machines started properly with this command.

$ sudo docker-machine ls

Forming the Cluster

To form a cluster, we need at least one manager node and one worker node. The command below SSHes into the demo-mgr virtual machine, initializes the cluster there, and promotes it to a manager node. All management commands need to be executed from the manager node. For high availability, we should run at least 3 manager nodes to maintain quorum and withstand potential failures.

$ sudo docker-machine ssh demo-mgr "sudo docker swarm init --advertise-addr 192.168.99.100"
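swarm init prints a join token for workers; the worker machine is then joined with it. A hedged sketch (<TOKEN> stays a placeholder for the token printed in the previous step, and 2377 is Swarm's default management port):

$ sudo docker-machine ssh demo-wkr "sudo docker swarm join --token <TOKEN> 192.168.99.100:2377"
$ sudo docker-machine ssh demo-mgr "sudo docker node ls"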
