
Learning Objectives

What is Docker?

Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same ways you manage your applications. By taking advantage of Docker’s methodologies for shipping, testing, and deploying code quickly, you can significantly reduce the delay between writing code and running it in production.

Containerization vs virtualization

Virtualization – fixed hardware allocation per virtual machine.
Containerization – no fixed hardware allocation.
Process isolation ( dependency on a separate guest OS is removed )
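A quick way to see this difference in practice: a container reuses the host kernel instead of booting a guest OS. This is a minimal sketch, assuming Docker is installed and can pull the ubuntu image:

```shell
# On the docker host: print the kernel release
uname -r

# Inside a throwaway ubuntu container: the same kernel release is printed,
# because the container shares the host kernel (no separate guest OS)
docker run --rm ubuntu uname -r
```

A virtual machine would report its own guest kernel here instead.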

Advantages of Containerization

In comparison to the traditional virtualization functionalities of hypervisors, Docker containers eliminate the need for a separate guest operating system for every new virtual machine. Docker implements a high-level API to provide lightweight containers that run processes in isolation. A Docker container enables rapid deployment with minimum run-time requirements. It also ensures better management and simplified portability. This helps developers and operations team in rapid deployment of an application.

Understanding Docker Terminologies

We should be comfortable with four terms:

1) Docker Images:
Combinations of binaries/libraries which are necessary for one software application.
2) Docker Containers:
When an image is pulled and brought into running condition, it is called a container.
3) Docker Host:
The machine on which docker is installed is called the docker host.
4) Docker Client:
The terminal used to run docker commands ( e.g. Git Bash )

On a linux machine, the regular terminal works as the docker client.

Listing out important commands in Docker

Docker Commands
Working on Images :
1) To download a docker image
docker pull <image_name>

2) To see the list of docker images
docker image ls
(or)
docker images

3) To delete a docker image from the docker host
docker rmi <image_name/image_id>

4) To upload a docker image to docker hub
docker push <image_name>

5) To tag an image
docker tag <image_name> <ip_of_local_registry>:5000/<image_name>

6) To build an image from a customised container
docker commit <container_name/container_id> <new_image_name>

7) To create an image from a docker file
docker build -t <image_name> .

8) To search for a docker image
docker search <image_name>

9) To delete all images that are not attached to containers
docker system prune -a

Working on containers

10) To see the list of all running containers
docker container ls

11) To see the list of running and stopped containers
docker ps -a

12) To start a container
docker start container_name/container_id

13) To stop a running container
docker stop container_name/container_id

14) To restart a running container
docker restart container_name/container_id
To restart with a 10-second wait before stopping
docker restart -t 10 container_name/container_id

15) To delete a stopped container
docker rm container_name/container_id

16) To delete a running container
docker rm -f container_name/container_id

17) To stop all running containers
docker stop $(docker ps -aq)

18) To restart all containers
docker restart $(docker ps -aq)

19) To remove all stopped containers
docker rm $(docker ps -aq)

20) To remove all containers (running and stopped)
docker rm -f $(docker ps -aq)

21) To see the logs generated by a container
docker logs container_name/container_id

22) To see the ports used by a container
docker port container_name/container_id

23) To get detailed info about a container
docker inspect container_name/container_id

24) To go into the shell of a running container which was moved into the background
docker attach container_name/container_id

25) To execute any command in a container
docker exec -it container_name/container_id command
Eg: To launch the bash shell in a container
docker exec -it container_name/container_id bash

26) To create a container from a docker image ( imp )
docker run image_name

Run command options in Docker

-it Used for opening an interactive terminal in a container

--name Used for giving a name to a container

-d Used for running the container in detached mode as a background process

-e Used for passing environment variables to the container

-p Used for port mapping between a port of the container and a port of the docker host

-P Used for automatic port mapping, i.e. it maps the internal port of the container
to some port on the host machine.
This host port will be some number greater than 30000

-v Used for attaching a volume to the container

--volumes-from Used for sharing a volume between containers

--network Used to run the container on a specific network

--link Used for linking containers when creating a multi-container architecture

--memory Used to specify the maximum amount of RAM that the container can use
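Several of these options are often combined in one command. A hypothetical example (the names and ports are illustrative, reusing images that appear later in these notes):

```shell
# Named, detached mysql container with an environment variable,
# a port mapping, a memory cap and an anonymous volume
docker run --name mydb -d \
  -e MYSQL_ROOT_PASSWORD=sunil \
  -p 3306:3306 \
  --memory 512m \
  -v /var/lib/mysql \
  mysql:5
```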

Downloading the image and creating container:

To download the tomee (Tomcat) image
docker pull tomee

To check downloaded images
docker images

To create a container from an image
docker run --name mytomcat -p 7070:8080 tomee
Note: If two containers expose the same internal port, we use "port mapping" to avoid conflicts while accessing the application.

Stopping the container and removing the container

To stop the container we use the command
docker stop container_name

To remove the container
docker rm -f container_name

Understanding detached mode and interactive mode

To create an ubuntu container
docker run --name myubuntu -it ubuntu

Observation: -it stands for interactive mode. You are automatically dropped into the ubuntu bash shell.

Scenario 1:
Start tomcat as a container and name it as “webserver”. Perform port mapping and run this container in detached mode

docker run --name webserver -p 7070:8080 -d tomee

To access homepage of the tomcat container
Launch any browser:
public_ip_of_dockerhost:7070

—————————————————————————-
Scenario 2:
Start jenkins as a container in detached mode, name it as "devserver", perform port mapping

docker run -d --name devserver -p 9090:8080 jenkins

To access home page of jenkins ( In browser)
public_ip_of_dockerhost:9090

—————————————————————————-

Scenario 3:
Start nginx as a container and name it as "appserver", run this in detached mode, perform automatic port mapping

Generally we pull the image and then run it

Instead of pulling, I directly used this command

docker run --name appserver -P -d nginx
( if the image is not available locally, the pull is performed automatically )
( capital P performs automatic port mapping )

—————————————————————————–
To start mysql as container, open interactive terminal in it, create a sample table.

docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=sunil mysql:5
To check
docker container ls

I want to open bash terminal of mysql
docker exec -it mydb bash

To connect to mysql database
mysql -u root -p

Multi container architecture using docker

This can be done in 2 ways
1) --link
2) docker-compose

Example 1: Start two busybox containers and create a link between them

Create the 1st busybox container
docker run --name c10 -it busybox
How to come out of the container without exiting
( Ctrl + p + q )

Create the 2nd busybox container and establish a link to the c10 container
docker run --name c20 --link c10:c10-alias -it busybox ( c10-alias is the alias name )

How to check whether the link is established or not?
ping c10

Ctrl + c ( to come out of ping )
( Ctrl + p + q )

Example 2: Creating development environment using docker
————————————————————
Start mysql as container and link it with wordpress container.

Developer should be able to create wordpress website

1) To start mysql as container
docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=sunil mysql:5

( if container is already in use , remove it using docker rm -f mydb)

Check whether the container is running or not.
docker container ls

2) To start wordpress container
docker run --name mysite -d -p 5050:80 --link mydb:mysql wordpress

Check wordpress installed or not.
Open browser
public_ip:5050
18.138.58.3:5050

Create LAMP Architecture using docker

L – linux
A – apache tomcat
M – mysql
P – php
(Linux os we already have)
1) To start mysql as container
docker run --name mydb -d -e MYSQL_ROOT_PASSWORD=sunil mysql:5

2) To start tomcat as container
docker run --name apache -d -p 6060:8080 --link mydb:mysql tomcat

To see the list of containers
docker container ls

To check if tomcat is linked with mysql
docker inspect apache ( apache is the name of the container)

3) To start php as container
docker run --name php -d --link apache:tomcat --link mydb:mysql php

Create CI-CD environment, where jenkins container is linked with two tomcat containers.

Let's delete all the containers
docker rm -f $(docker ps -aq)

To start jenkins as a container
docker run --name devserver -d -p 7070:8080 jenkins/jenkins

To check whether jenkins is running or not
Open browser
public_ip:7070

We need two tomcat containers ( qa server and prod server )
docker run --name qaserver -d -p 8080:8080 --link devserver:jenkins tomee

To check this tomcat use public_ip, but the port number will be 8080
docker run --name prodserver -d -p 9090:8080 --link devserver:jenkins tomcat

Creating testing environment using docker:

Create a selenium hub container and link it with two node containers: one node with firefox installed, another node with chrome installed. The tester should be able to run selenium automation programs for testing the application on multiple browsers.

Search for selenium:
We have an image - selenium/hub

To start selenium/hub as a container
docker run --name hub -d -p 4444:4444 selenium/hub

In hub.docker.com
we also have - selenium/node-chrome-debug ( it is an ubuntu container with chrome )

To start it as a container and link it to the hub ( previous container )
docker run --name chrome -d -p 5901:5900 --link hub:selenium selenium/node-chrome-debug

In hub.docker.com
we also have - selenium/node-firefox-debug ( it is an ubuntu container with firefox )

To start it as a container and link it to the hub
docker run --name firefox -d -p 5902:5900 --link hub:selenium selenium/node-firefox-debug

To see the list of containers
docker container ls

Note: firefox and chrome containers are GUI containers.
To see the GUI interface to chrome / firefox container

Download and install vnc viewer
In VNC viewer search bar
public_ip_dockerhost:5901

Password - secret

————————————————————————————

All the commands we learnt till date are adhoc commands.

In the previous use case we started two containers ( chrome and firefox )
Lets say you need 80 containers?
Do we need to run 80 commands?

Instead of 80 commands, we can use docker compose

Docker compose

This is a feature of docker with which we can create a multi-container architecture using yaml files. The yaml file contains information about the containers that we want to launch and how they have to be linked with each other. Yaml is a file format, not a scripting language. Yaml stores data in key-value pairs:
Lefthand side - key
Righthand side - value
A yaml file is space indented.

Sample Yaml file
————————-

bahadoorsoft:
  trainers:
    bahadoor: Devops
    daya: Python
  Coordinators:
    sai: Devops
    bahadoor: AWS

bahadoorsoft – root element

To validate the above yaml file
Open http://www.yamllint.com/
Paste the above code – Go button

——————————————————————————————————

Installing Docker compose

1) Open https://docs.docker.com/compose/install/
2) Go to linux section
Copy and paste the below two commands

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

How to check whether docker compose is installed or not?

docker-compose --version

Create a docker compose file for setting up dev environment.

mysql container is linked with wordpress container.
vim docker-compose.yml (Name of the file should be docker-compose.yml)

version: '3'

services:
  mydb:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: bahadoor

  mysite:
    image: wordpress
    ports:
      - 5050:80
    links:
      - mydb:mysql

Let's remove all the running containers
docker rm -f $(docker ps -aq)

How to start the above services from the compose file
docker-compose up

We get a lot of logs on the screen. To avoid this we use the -d option:

docker-compose stop

Remove the containers: docker rm -f $(docker ps -aq)

docker-compose up -d

To check wordpress
public_ip:5050
To stop both the containers
docker-compose stop

——————————————————————————————————————-

Create a docker compose file for setting up LAMP architecture

vim docker-compose.yml


version: '3'

services:
  mydb:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: bahadoor

  apache:
    image: tomee
    ports:
      - 6060:8080
    links:
      - mydb:mysql

  php:
    image: php
    links:
      - mydb:mysql
      - apache:tomcat

docker-compose up -d

To see the list of the containers
docker container ls
(Observation - we are unable to see the php container)
docker ps -a

————————————————————————————————————–
Ex: Docker-compose file for setting up CI-CD Environment.
jenkins container is linked with two tomcat containers

vim docker-compose.yml


version: '3'
services:
  devserver:
    image: jenkins/jenkins
    ports:
      - 7070:8080

  qaserver:
    image: tomee
    ports:
      - 8899:8080
    links:
      - devserver:jenkins

  prodserver:
    image: tomee
    ports:
      - 9090:8080
    links:
      - devserver:jenkins

To check:
public_ip:7070 (To check jenkins)
public_ip:8899 (Tomcat qa server)
public_ip:9090 (Tomcat prod server)

Docker-compose file to set up testing environment.
selenium hub container is linked with two node containers.

vim docker-compose.yml


version: '3'
services:
  hub:
    image: selenium/hub
    ports:
      - 4444:4444

  chrome:
    image: selenium/node-chrome-debug
    ports:
      - 5901:5900
    links:
      - hub:selenium

  firefox:
    image: selenium/node-firefox-debug
    ports:
      - 5902:5900
    links:
      - hub:selenium

docker container ls

As these are GUI containers, we can access them using VNC viewer
Open VNC viewer
52.77.219.115:5901
password: secret

—————————————————————————————————

Docker volumes

Docker containers are ephemeral (temporary), whereas the data processed by a container should be permanent. Generally, when a container is deleted all its data is lost. To preserve the data even after deleting the container, we use volumes.

Volumes are of two types
1) Simple docker volumes
2) Docker volume containers ( Sharable volume)

Simple docker volumes

These volumes are used only when we want to access the data, even after the container is deleted. But this data cannot be shared with other containers.

Usecase
1) Create a directory called /data,

start centos as a container and mount /data as a volume.
Create files in the mounted volume in the centos container,
exit from the container and delete the container. Check if the files are still available.

Let's create a folder with the name /data

mkdir /data

docker run --name c1 -it -v /data centos ( the -v option is used to attach a volume )

ls ( Now, we can see the data folder also in the container)

cd data
touch file1 file2
ls
exit (To come out of the container)
docker inspect c1

Under Mounts we can see where the "data" folder is located on the host machine.
Copy the path

/var/lib/docker/volumes/c5c85f87fdc3b46b57bb15f2473786fe7d49250227d1e9dc537bc594db001fc6/_data

Now, let's delete the container
docker rm -f c1

After deleting the container, lets go to the location of the data folder

cd /var/lib/docker/volumes/d867766f70722eaf8cba651bc1d64c60e9f49c5b1f1ebb9e781260f777f3c7e8/_data
ls ( we can see file1 file2 )

( Observe: the container is deleted but the data is still persistent )

——————————————————————————————————————–

Docker volume containers

These are also known as reusable volume. The volume used by one container can be shared with other containers. Even if all the containers are deleted, data will still be available on the docker host.

Ex:
sudo su -

Lets create a directory /data
mkdir /data

Let's start centos as a container
docker run --name c1 -it -v /data centos
ls ( we can see the list of files and dirs in centos )

cd data
ls ( currently we have no files )

Let's create some files
touch file1 file2 ( these two files are available in the c1 container )

Come out of the container without exiting
Ctrl + p, Ctrl + q ( the container still runs in the background )

Let's start another centos container ( c2 should use the same volume as c1 )
docker run --name c2 -it --volumes-from c1 centos

cd data
ls ( we can see the files created by c1 )

Let's create some more files
touch file3 file4
ls ( we see 4 files )

Come out of the container without exiting
Ctrl + p, Ctrl + q ( the container still runs in the background )

Let's start another centos container
docker run --name c3 -it --volumes-from c2 centos

cd data
ls ( we can see 4 files )
touch file5 file6
ls

Come out of the container without exiting
Ctrl + p, Ctrl + q ( the container still runs in the background )

Now, lets connect to any container which is running in the background
docker attach c1
ls ( you can see all the files )
exit

Identify the mount location
$ docker inspect c1
( search for the mount section )

Take a note of the source path

/var/lib/docker/volumes/e22a9b39372615727b964151b6c8108d6c02b13114a3fcce255df0cee7609e15/_data

Let's remove all the containers
docker rm -f c1 c2 c3

Lets go to the source path
cd /var/lib/docker/volumes/e22a9b39372615727b964151b6c8108d6c02b13114a3fcce255df0cee7609e15/_data
ls ( we can see all the files )

Creating customized docker images

Whenever a docker container is deleted, all the software that we have installed within the container is also deleted.

If we save the container as an image, then we can preserve the software.

This creation of customized docker images can be done in two ways.
1) using docker commit command
2) using docker file

Using docker commit
docker run --name c1 -it ubuntu

Update apt repository

apt-get update
apt-get install git

To check the git
git --version
exit

To save the container as image (snapshot)
docker commit c1 myubuntu

To see the list of images
docker images ( you can see the image which you have created )

Now lets run the image which we have created
docker run --name c2 -it myubuntu
git --version ( git is pre-installed )

Using docker file

This is a simple text file, which uses predefined keywords for creating customized docker images.

Key words used in docker file ( case sensitive )

1) FROM – used to specify the base image from which the new image has to be built.

2) MAINTAINER – This represents name of the organization or the author who created this docker file.

3) CMD – This is used to specify the initial command that should be executed when the container starts.

4) ENTRYPOINT - used to specify the default process that should be executed when container starts. It can also be used for accepting arguments from the CMD instruction.

5) RUN – Used for running linux commands within the container. It is generally helpful for installing the software in the container.

6) USER – used to specify the default user who should login into the container.

7) WORKDIR – Used to specify default working directory in the container

8) COPY – Copying the files from the host machine to the container.

9) ADD – Used for copying files from host to container, it can also be used for downloading files from remote servers.

10) ENV – used for specifying the environment variables that should be passed to the container.

EXPOSE – Used to specify the internal port of the container
VOLUME – used to specify the default volume that should be attached to the container.
LABEL – used for giving label to the container
STOPSIGNAL – Used to specify the signal that is sent to the container in order to stop it.
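The relationship between ENTRYPOINT and CMD can be seen in a small sketch (the image content here is purely illustrative): ENTRYPOINT fixes the process, and CMD supplies default arguments that docker run can override.

```dockerfile
FROM ubuntu
MAINTAINER logiclabs
# ENTRYPOINT is the fixed process; CMD provides its default arguments
ENTRYPOINT ["echo"]
CMD ["hello from the container"]
```

Building this as, say, myecho: `docker run myecho` prints the default text, while `docker run myecho bye` replaces the CMD arguments with bye.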

Create a dockerfile by taking nginx as the base image and specify the maintainer as logiclabs. Construct an image from the dockerfile.

Creating customized docker images by using docker file.

$ sudo su -
vim dockerfile

FROM nginx
MAINTAINER logiclabs

To build an image from the dockerfile
docker build -t mynginx .

( -t stands for tag, . stands for the current working dir, mynginx is the new image name )

To see the image
docker images


Whenever I start my container, I want a program to be executed.
vim dockerfile

FROM centos
MAINTAINER logiclabs
CMD ["date"]

To build an image from the dockerfile
docker build -t mycentos .

To see the image
docker images

Running a container from the image
docker run -it mycentos

———————————————————————————————————————–

In one docker file, we can have only one effective CMD instruction.
If we give two CMD instructions, only the last one is executed.
Let's try

vim dockerfile
FROM centos
MAINTAINER logiclabs
CMD ["date"]
CMD ["ls", "-la"]

:wq

docker build -t mycentos .

docker run -it mycentos
( Observation, we get ls -la output )

————————————————————————————————————————-
I want to install git in an ubuntu container.

Lets remove the docker file
rm dockerfile
vim dockerfile

FROM ubuntu
MAINTAINER logiclabs
RUN apt-get update
RUN apt-get install -y git

:wq

Note: CMD – runs when the container starts.
RUN – is executed when the image is created.

docker build -t myubuntu .

Lets see the images list and space consumed by our image
docker images
docker run -it myubuntu
git --version
exit

————————————————————————————————————————–

Let's perform version control on the docker file

mkdir docker
mv dockerfile docker
cd docker
ls

docker# git init
docker# git status

docker# git add .

docker# git commit -m "a"

( we get an error: we need to configure git )
docker# git config --global user.name "sunildevops77"
docker# git config --global user.email "sunildevops77@gmail.com"

Now, run the above commit command ( git commit )

docker# vim dockerfile ( let's make some changes: add another RUN command )

FROM ubuntu
MAINTAINER logiclabs

RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y default-jdk

:wq

docker# git add .
docker# git commit -m "b"

Now lets see the docker file
vim dockerfile ( we see the latest one )

Now, I want to have previous version
git log --oneline ( to see the list of all the commits )

We want to move to the "a" commit ( take note of the commit id )

git reset --hard 10841c3

Now lets see the docker file
vim dockerfile ( we see the old one )

———————————————————————————————————————-

Cache busting

Whenever an image is built from a dockerfile, docker checks its build cache to see which instructions were already executed.
Those steps are not re-executed; only the new instructions are executed. This is a time-saving mechanism provided by docker.

But the disadvantage is that we can end up installing software packages from a package index that was cached a long time ago.

Ex:

cd docker
vim dockerfile

Lets just add one more instruction

FROM ubuntu
MAINTAINER logiclabs

RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y tree

:wq

Lets build an image
docker build -t myubuntu .

( Observe the output: steps 2, 3 and 4 are using the cache. Only step 5 is executed freshly )

Advantage: time-saving mechanism

Disadvantage: let's say you rebuild after 4 months. We would be installing tree from an apt index that was updated a long time back.

To avoid this disadvantage we use cache busting:

Note: cache busting is implemented using the && symbol.
By combining apt-get update and apt-get install in a single RUN instruction, the whole instruction is re-executed whenever the package list changes, so the index is refreshed as well.

vim dockerfile

FROM ubuntu
MAINTAINER logiclabs

RUN apt-get update && apt-get install -y git tree

:wq

Lets build an image
docker build -t myubuntu .

( Observe the output, step 3 - It is not using cache )

Ex: Create a dockerfile, for using ubuntu as base image, and install java in it.
Download jenkins.war and make execution of “java -jar jenkins.war” as the default process.

Every docker image comes with a default process.
As long as the default process is running, the container stays in the running state.

The moment the default process exits, the container exits.

Let's remove all the containers
docker rm -f $(docker ps -aq)

Observation 1:
When we start ubuntu container, we use below command
docker run --name c1 -it ubuntu

To come out of the container we use Ctrl + p + q

docker container ls
( our container c1 is running in the background )

Observation 2:
When we start jenkins container, we use below command
docker run --name j1 -d -P jenkins/jenkins

Now, I want to open an interactive terminal to enter jenkins
docker exec -it j1 bash

( In the ubuntu container I can directly go into the -it terminal, whereas in jenkins I have to run an additional exec command. Why? )

Let's try to get an interactive terminal with the docker run command
docker run --name j2 -it jenkins/jenkins
( we do not get an interactive terminal )

I want to run tomcat as a container
docker run --name t1 -d -P tomee

Lets find the reason

docker container ls ( to see the list of containers )

Observe the COMMAND section.
It tells you the default process that gets executed when we start the container.

Container Default process
tomcat catalina.sh
jenkins /bin/tini
ubuntu /bin/bash

bash – is nothing but the terminal.

For a linux based container, the default process is a shell process
( examples of shell processes are bash shell, bourne shell etc )
Hence we are able to enter -it mode in ubuntu.
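The default process baked into an image can also be read with docker inspect, instead of inferring it from docker container ls. A sketch, assuming the images are already pulled:

```shell
# Print the Entrypoint and Cmd recorded in each image's config
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' ubuntu
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' jenkins/jenkins
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' tomee
```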

We are trying to change the default process of the container.
————————————————————-

vim dockerfile

FROM ubuntu
MAINTAINER logiclabs

RUN apt-get update
RUN apt-get install -y default-jdk

ADD http://mirrors.jenkins.io/war-stable/latest/jenkins.war /
ENTRYPOINT ["java","-jar","jenkins.war"]

:wq

Build an image from the dockerfile
docker build -t myubuntu .

To see the list of images ( we can see our new image )
docker image ls

To start container from new image
docker run myubuntu ( Observe the logs generated on the screen, we got logs related to jenkins , jenkins is fully up and running )

( It's an ubuntu container, yet it behaves as a jenkins container )

Ctrl +c

Run the below command
docker ps -a

For myubuntu the command is java -jar jenkins.war
For ubuntu the command is /bin/bash


Working on docker registry
Registry is a location where docker images are saved.
Types of registry
1) public registry
2) private registry

public registry is hub.docker.com
Images uploaded here are available for everyone.
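A private registry can itself run as a container. The sketch below uses the official registry image; the port and image names are illustrative, and it mirrors the docker tag syntax shown in the commands section earlier:

```shell
# Start a private registry on port 5000 of the docker host
docker run -d -p 5000:5000 --name myregistry registry:2

# Tag a local image for the private registry, then push it
docker tag ubuntu localhost:5000/ubuntu
docker push localhost:5000/ubuntu

# Any machine that can reach this host can now pull the image:
# docker pull <ip_of_registry_host>:5000/ubuntu
```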

Usecase: Create a customized ubuntu image, by installing tree in it.

Save this container as an image, and upload this image in docker hub.

Step 1: Create a new account in hub.docker.com

Step 2: Creating our own container
docker run --name c5 -it ubuntu

Lets install tree package in this container
apt-get update
apt-get install tree
exit

Step 3: Save the above container as an image
docker commit c5 sunildevops77/ubuntu_img291

( sunildevops77/ubuntu_img291 – is the image name )

Note: Image name should start with docker_id/

To see the list of images
docker image ls ( we can see the new image )

To upload the image to hub.docker.com ( docker login command is used )

docker login ( provide docker_id and password )

To upload the image
docker push sunildevops77/ubuntu_img291

login to docker hub to see your image


Container orchestration

This is the process of running docker containers in a distributed environment, on multiple docker host machines.
All these containers can have a single service running on them and they share the load with each other, even when running on different host machines.

Docker swarm is the tool used for performing container orchestration

Advantages

1) Load balancing
2) scaling of containers
3) performing rolling updates
4) handling failover scenarios


The machine on which docker swarm is initialized is called the manager.
The other machines are called workers.

Let's create 3 machines
Name them Manager, Worker1 and Worker2

All the above machines should have docker installed on them.
Install docker using get.docker.com

( Optional step to change the prompt )
After installing docker on the 1st machine ( Manager ), let's change the hostname.
The hostname is stored in the file /etc/hostname. We will change it to Manager.

vim /etc/hostname
Manager

:wq

After changing the hostname, lets restart the machine
init 6

Similarly, repeat the same in Worker1 and Worker2

Connect to Manager and initialize docker swarm on it.

$ sudo su -

Command to initialize docker swarm on the manager machine

docker swarm init --advertise-addr private_ip_of_manager
docker swarm init --advertise-addr 172.31.27.151

Please read the log messages

Now, we need to add workers to manager
Copy the docker swarm join command from the log and run it on worker1 and worker2

Open another gitbash terminal, connect to worker1

sudo su -

docker swarm join --token SWMTKN-1-0etsmfa26vreeytq278q8ohhi73il7j1lpnrzzlowuld1r8yex-9x04pjmiq85jxjzjayzlglh1c 172.31.27.151:2377

Repeat for worker2

To see the list of nodes from the manager

Manager # docker node ls ( we can see Manager, Worker1 and Worker2 )

Load balancing:
Each docker container is designed to withstand a specific user load.
When the load increases, we can create replica containers in docker swarm and distribute the load.

Ex: Start tomcat in docker swarm with 5 replicas and name it as webserver.

Manager# docker service create --name webserver -p 9090:8080 --replicas 5 tomee

( 5 containers of the same service, with the load distributed across 3 machines )

How to see where they are running?
Manager# docker service ps webserver

Lets take the note
Manager - 1 container
Worker1 - 2 containers
Worker2 - 2 containers


Note: Only one tomcat service is running, and the load is shared across 3 machines

Lets check
public_ip_manager:9090 ( Will show tomcat page )
public_ip_worker1:9090 ( Will show tomcat page )
public_ip_worker2:9090 ( Will show tomcat page )

Ex 2: Start mysql in docker swarm with 3 replicas.

Manager# docker service create --name mydb --replicas 3 -e MYSQL_ROOT_PASSWORD=sunil mysql:5

How to see where they are running?
Manager# docker service ps mydb

To know the total no of services running in docker swarm
Manager# docker service ls

If you delete a container, docker swarm will automatically create another one to maintain the replica count.

Now,
Manager# docker service ps mydb

We can see one container is running in Manager machine
I want to delete the container which is running in manager

Manager# docker container ls
( we can see 1 mysql container, 1 tomcat container )

Take note of the container_id of mysql
67238f47bc60

To delete the container
docker rm -f 67238f47bc60

Now lets check the mydb service
docker service ps mydb ( we can see one task has failed, and a replacement container is automatically started )
At any point of time, 3 containers will be running.


Scaling of containers:

When the business requirement increases, we should be able to increase the number of replicas.
Similarly, we should also be able to decrease the replica count based on business requirement. This scaling should be done without any downtime.

Ex 3: Start nginx with 5 replicas, later scale the services to 10.

docker service create --name appserver -p 8080:80 --replicas 5 nginx

docker service ps appserver

Command to scale
docker service scale appserver=10

To check
docker service ps appserver

Now I want only two containers
docker service scale appserver=2

To check
docker service ps appserver

To remove a node from the docker swarm
Two ways
1) Manager can drain
2) Node can leave

To see the list of nodes
docker node ls

docker node update --availability drain Worker1

All the containers running on Worker1 will be migrated to Worker2 or Manager.

docker service ps mydb
docker node ls

To add the node
docker node update --availability active Worker1

docker node ls

2nd Way ( Node can leave )

Let's connect to Worker2 from git bash

Worker2# docker swarm leave

——————————————————————————————————

To see the list of services
docker service ls

To delete the services
Manager# docker service rm appserver mydb webserver

Rolling Updates

The services running in docker swarm can be updated to any other version
without any downtime. Docker swarm performs this by updating one replica after another. This is called a rolling update.

Ex: Create redis 3 service with 6 replicas. Update from redis 3 to redis 4 version.

docker service create --name myredis --replicas 6 redis:3

To check the replicas
docker service ps myredis

To update
docker service update –image redis:4 myredis

docker service ps myredis

I want to display running containers, not shutdown containers

docker service ps myredis | grep Shutdown ( shows the shutdown containers )
docker service ps myredis | grep -v Shutdown ( -v inverts the match, showing everything else )
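The grep filtering can be tried without a swarm, using a made-up sample of docker service ps output (the task names and states below are purely illustrative):

```shell
# Fake `docker service ps` output, piped through grep -v to drop Shutdown lines
printf 'myredis.1 Running\nmyredis.2 Shutdown\nmyredis.3 Running\n' | grep -v Shutdown
# prints only the two Running lines
```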

Performing a rolling rollback, to downgrade to the redis:3 version

docker service update --rollback myredis

To check that redis:3 is running with 6 replicas and the other version is shut down.
docker service ps myredis
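The replica count and rolling-update behaviour shown above can also be declared in a stack file and deployed with docker stack deploy. This is an illustrative sketch; the file name stack.yml and stack name mystack are assumptions, not from these notes:

```yaml
# stack.yml - sketch of a swarm service with a rolling-update policy
version: "3"
services:
  myredis:
    image: redis:3
    deploy:
      replicas: 6
      update_config:
        parallelism: 1   # update one replica at a time
        delay: 10s       # wait between replica updates
```

Deploy it from the manager with `docker stack deploy -c stack.yml mystack`; changing the image tag in the file and re-running the same command performs the rolling update.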

To add new nodes in future, we need the docker swarm join command.
To generate the command
docker swarm join-token worker ( We will get the command )

docker swarm join --token SWMTKN-1-0etsmfa26vreeytq278q8ohhi73il7j1lpnrzzlowuld1r8yex-9x04pjmiq85jxjzjayzlglh1c 172.31.27.151:2377

To add a new machine as a manager

docker swarm join-token manager

docker swarm join --token SWMTKN-1-5wbamgr8x7gxabwtlm1j1i91bm5ilzotgna6bc0edubtwtjxi1-3jmzi67qdn5aawvielkcng2e4 172.31.34.112:2377

If there are two managers, one will be the leader

docker node ls ( we can see which node is the leader )
The decision of which machine becomes the leader is automatic.

If one manager goes down, the other manager automatically becomes the leader.

To promote worker1 as a manager node
docker node promote Worker1

To demote Worker1 and make it a worker again
docker node demote Worker1

Docker networking

Docker networking is primarily used to establish communication between Docker containers and the outside world via the host machine where the Docker daemon is running.
When we install docker, docker installs a network by default in the host machine called docker0
Docker network Drivers :

Docker handles communication between containers by creating a default bridge network, so you often don’t have to deal with networking and can instead focus on creating and running containers. This default bridge network works in most cases.

The Bridge Driver

This is the default. Whenever you start Docker, a bridge network gets created and all newly started containers will connect automatically to the default bridge network.

The Host Driver

As the name suggests, host drivers use the networking provided by the host machine. And it removes network isolation between the container and the host machine where Docker is running. For example, If you run a container that binds to port 80 and uses host networking, the container’s application is available on port 80 on the host’s IP address. You can use the host network if you don’t want to rely on Docker’s networking but instead rely on the host machine networking.

The None Driver

The none network driver does not attach containers to any network. Containers do not access the external network or communicate with other containers. You can use it when you want to disable the networking on a container.

The Overlay Driver

The Overlay driver is for multi-host network communication, as with Docker Swarm or Kubernetes. It allows containers across the host to communicate with each other without worrying about the setup. Think of an overlay network as a distributed virtualized network that’s built on top of an existing computer network.

Basic Docker Networking Commands

connect -> Connect a container to a network
create -> Create a network
disconnect -> Disconnect a container from a network
inspect -> Display detailed information on one or more networks
ls -> List networks
prune -> Remove all unused networks
rm -> Remove one or more networks
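As a sketch of how the create and connect commands fit together, a user-defined bridge network can also be declared in a compose file; containers on the same network can then reach each other by service name (all names here are illustrative):

```yaml
# docker-compose.yml - two containers on a user-defined bridge network
version: "3"
services:
  web:
    image: nginx
    networks: [mynet]
  db:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: sunil
    networks: [mynet]
networks:
  mynet:
    driver: bridge   # web can reach db at the hostname "db", and vice versa
```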

——————-BINGO..!!———————