Archive for the ‘How tos’ Category

How to get the original file from an RPM.

You might have accidentally deleted a configuration or binary file that was installed as part of a package, or maybe you modified the original file and want to restore it but didn’t take a backup – this blog will help you resolve similar issues.

The steps below are for Red Hat/CentOS based Linux systems, where the package was installed using rpm or yum. They basically outline how to grab the rpm package, unpack it and access the files inside. I will demo the steps I used to recover ntp.conf –

1. Identify the package owning/containing the file –

[root@tester ~]# rpm -qf /etc/ntp.conf
ntp-4.2.6p5-1.el6.centos.x86_64

2. Download the original package –
We will download the rpm package into /tmp in order to unpack it later –

[root@tester ~]# cd /tmp/
[root@tester tmp]# yumdownloader ntp-4.2.6p5-1.el6.centos.x86_64
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.aol.com
 * epel: reflector.westga.edu
 * extras: centos-distro.cavecreek.net
 * updates: lug.mtu.edu
ntp-4.2.6p5-1.el6.centos.x86_64.rpm    | 592 kB     00:00
[root@tester tmp]# ls -lh ntp-4.2.6p5-1.el6.centos.x86_64.rpm
-rw-r--r--. 1 root root 592K Mar  9 03:19 ntp-4.2.6p5-1.el6.centos.x86_64.rpm

Note – you can follow the steps in this link to install yumdownloader, or use alternative means to download a package. For a short answer, just run ‘yum install yum-utils’ to install yumdownloader.
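
For quick reference, a minimal sketch of installing yum-utils and downloading the package into /tmp (package name taken from step 1) –

# install yum-utils, which provides yumdownloader
yum install -y yum-utils
# download the rpm into /tmp without installing it
yumdownloader --destdir /tmp ntp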

3. Extract the RPM package –

We will use rpm2cpio to convert the RPM package into a cpio archive and then pipe it to cpio to copy the files out of the archive –

[root@tester tmp]# rpm2cpio ntp-4.2.6p5-1.el6.centos.x86_64.rpm | cpio -i --make-directories
3344 blocks
[root@tester tmp]# ls
etc  ntp-4.2.6p5-1.el6.centos.x86_64.rpm  usr  var  yum_save_tx-2014-03-09-01-00h9I83Y.yumtx
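
If you only need a single file out of the package, cpio also accepts a file pattern so you do not have to extract everything; member names in the archive are prefixed with ./ – a quick sketch for just ntp.conf –

# extract only ./etc/ntp.conf from the archive
rpm2cpio ntp-4.2.6p5-1.el6.centos.x86_64.rpm | cpio -ivd "./etc/ntp.conf"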

4. Access the file you are looking for –

Once we have extracted the rpm package, the directory structure is easy to navigate – for instance, if we are looking for ntp.conf, it is under etc/ntp.conf – the directory structure mirrors that of the OS –

[root@tester tmp]# ls -al etc/
total 28
drwxr-xr-x. 6 root root 4096 Mar  9 03:19 .
drwxrwxrwt. 6 root root 4096 Mar  9 03:19 ..
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 dhcp
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 ntp
-rw-r--r--. 1 root root 1778 Mar  9 03:19 ntp.conf
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 rc.d
drwxr-xr-x. 2 root root 4096 Mar  9 03:19 sysconfig
[root@tester tmp]# ls -al etc/ntp
ntp/      ntp.conf
[root@tester tmp]# ls -al etc/ntp.conf
-rw-r--r--. 1 root root 1778 Mar  9 03:19 etc/ntp.conf

At this point, you can view the files from the original rpm and copy the ones you need. The links below, which I referenced, also cover quickly re-installing the original files with

yum reinstall ntp
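
If you decide to copy the recovered file back by hand, it is worth diffing it against the current one and keeping a backup first – a short sketch using the copy we extracted under /tmp –

# compare the pristine copy with the file currently in use
diff /tmp/etc/ntp.conf /etc/ntp.conf
# back up the modified file, then restore the original
cp -p /etc/ntp.conf /etc/ntp.conf.bak
cp -p /tmp/etc/ntp.conf /etc/ntp.conf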

References –

https://access.redhat.com/solutions/10154
https://www.g-loaded.eu/2012/03/26/restore-original-configuration-files-from-rpm-packages/

tcpdump – how to grep or save output in real time

Tcpdump is a handy tool for capturing network packets. It will keep capturing packets until it receives a SIGINT or SIGTERM signal, or until the specified number of packets has been processed. If you have tried to pipe the output of tcpdump to a file or to grep it, you will have noticed a significant delay before you see any output. The reason is that tcpdump buffers its output in 4 KB chunks and doesn’t flush until 4 KB of data has been captured.

To get around the buffering, you can use the ‘-l’ option to see packets as they are captured, so that you can ‘grep’ the output or ‘tee’ it to a file in real time. From the man page –


-l     Make stdout line buffered.  Useful if you want to see the data while capturing it.  
     E.g. "tcpdump  -l  |  tee dat" or "tcpdump  -l   > dat  &  tail  -f  dat"

Send output to a file while watching the captured packets in real time –

root@linubuvma:~# tcpdump -l -i any -qn port 53 | tee -a /tmp/dnslogs
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
09:02:48.772892 IP 192.168.10.206.29185 > 192.168.10.109.53: UDP, length 33
09:02:48.773196 IP 192.168.10.206.35333 > 192.168.10.109.53: UDP, length 33
09:02:48.775062 IP 192.168.10.109.53 > 192.168.10.206.29185: UDP, length 78
09:02:48.775085 IP 192.168.10.109.53 > 192.168.10.206.35333: UDP, length 117
09:02:50.274318 IP 192.168.10.206.46983 > 192.168.10.109.53: UDP, length 33
09:02:50.274695 IP 192.168.10.206.55061 > 192.168.10.109.53: UDP, length 33
09:02:50.275531 IP 192.168.10.109.53 > 192.168.10.206.46983: UDP, length 78
09:02:50.276384 IP 192.168.10.109.53 > 192.168.10.206.55061: UDP, length 117
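
If you would rather keep the raw packets for later analysis (with Wireshark or tcpdump -r) instead of text, the -w option writes a pcap file; combined with -U (packet-buffered output), each packet is flushed to the file as it is captured, much like -l does for stdout. A sketch –

# save packets to a pcap file in near real time
tcpdump -U -i any -qn -w /tmp/dns.pcap port 53
# read the capture back later
tcpdump -nr /tmp/dns.pcap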

Grep for a text pattern in real time –

root@linubuvma:~# tcpdump -l -i any -vv |grep --color -i google
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
linubuvma.home.net.34647 > ns1.home.net.domain: [bad udp cksum 0x96c1  0x4797!] 34365+ A? google.com. (28)
linubuvma.home.net.34647 > ns1.home.net.domain: [bad udp cksum 0x96c1  0x9bf1!] 12744+ AAAA? google.com. (28)
ns1.home.net.domain > linubuvma.home.net.34647: [udp sum ok] 12744 q: AAAA? google.com. 1/0/0 google.com. AAAA 2607:f8b0:4002:c07::66 (56)
ns1.home.net.domain > linubuvma.home.net.34647: [udp sum ok] 34365 q: A? google.com. 6/0/0 google.com. A 74.125.196.139, google.com. A 74.125.196.100, google.com. A 74.125.196.101, google.com. A 74.125.196.102, google.com. A 74.125.196.113, google.com. A 74.125.196.138 (124)
173 packets captured
240 packets received by filter
0 packets dropped by kernel

A handy cheat sheet for tcpdump – https://comparite.ch/tcpdumpcs

References –
http://www.tcpdump.org/tcpdump_man.html
http://unix.stackexchange.com/questions/15989/how-to-process-pipe-tcpdumps-output-in-realtime

Red Hat Satellite or Spacewalk – real-time push to clients.

By default, a client waits a set interval (in minutes), configured in /etc/sysconfig/rhn/rhnsd, before pulling scheduled tasks from the satellite server. For instance, if a remote command is scheduled to run on a client or a patch is waiting to be applied, rhn_check may have to wait up to 60 minutes to pick up the task.
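
You can check the polling interval the client is using, and force an immediate check-in without waiting for rhnsd, with the rhn_check utility that ships with the client tools –

# show the rhnsd polling interval (in minutes)
grep INTERVAL /etc/sysconfig/rhn/rhnsd
# contact the satellite now and run any pending actions
rhn_check -vv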

For real-time command execution, patching or configuration deployment, the following steps have to be performed on the server and the clients –

1. Server : Install and enable osa-dispatcher –

root:homevm:~:# rpm -q osa-dispatcher
osa-dispatcher-5.11.43-1.el6.noarch

root:homevm:~:# service osa-dispatcher status

root:homevm:~:# chkconfig osa-dispatcher on

root:homevm:~:# chkconfig osa-dispatcher --list
osa-dispatcher  0:off   1:off   2:on    3:on    4:on    5:on    6:off

2. Client : Install and enable osad (OSA daemon).

# yum install osad -y
# chkconfig osad on
# /etc/init.d/osad restart

3. Client : Make sure the deploy and run options are enabled –

# rhn-actions-control --enable-run
# rhn-actions-control --enable-deploy

# rhn-actions-control --report
deploy is enabled
diff is enabled
upload is enabled
mtime_upload is enabled
run is enabled
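
To confirm the daemon on the client is running and actually connected to the satellite (osad talks to the satellite’s jabber service, normally on TCP port 5222), a quick check –

# /etc/init.d/osad status
# netstat -tnp | grep 5222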

Extra steps in case you encounter SSL certificate issues –
OSA is picky about SSL certificate verification; make sure the right CA cert is deployed on the client, and that the serverURL in up2date matches the CN on the server certificate.

1. Copy the RHN certificate from the satellite server to the client; make sure the cert has not expired and that the CN matches the server name.

root:homevm:~:# openssl x509 -in /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT -noout -subject
subject= /C=US/ST=CA/L=SanFrancisco/O=home.net/OU=spacewalk.home.net/CN=homevm.home.net

root:homevm:~:# openssl x509 -in /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT -noout -dates
notBefore=Aug  2 06:04:05 2014 GMT
notAfter=Jul 27 06:04:05 2036 GMT

root:homevm:~:# scp /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT root@client:/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT

[root@blackhat rpm-gpg]# grep -i serverurl /etc/sysconfig/rhn/up2date 
serverURL=http://homevm.home.net/XMLRPC

2. If you get a certificate error during package deployment, copy the RPM GPG public keys from the satellite to the clients.
On the server side –

root:homevm:/etc/pki/rpm-gpg:# ls -al RPM-GPG-KEY-*
-rw-r--r-- 1 root root 1706 Nov 30  2013 RPM-GPG-KEY-CentOS-6
-rw-r--r-- 1 root root 1730 Nov 30  2013 RPM-GPG-KEY-CentOS-Debug-6
-rw-r--r-- 1 root root 1730 Nov 30  2013 RPM-GPG-KEY-CentOS-Security-6
-rw-r--r-- 1 root root 1734 Nov 30  2013 RPM-GPG-KEY-CentOS-Testing-6
-rw-r--r-- 1 root root 1649 Nov  4  2012 RPM-GPG-KEY-EPEL-6
-rw-r--r-- 1 root root 1011 Feb  5  2011 RPM-GPG-KEY-oracle

root:homevm:/etc/pki/rpm-gpg:# scp RPM-GPG-KEY-* root@client:/etc/pki/rpm-gpg

On the client side –
# rpm --import RPM-GPG-KEY-CentOS-*
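
To verify the keys were imported on the client, rpm tracks them as pseudo-packages named gpg-pubkey –

# rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n'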

References –
https://access.redhat.com/documentation/en-US/Red_Hat_Network_Satellite/5.3/html/Installation_Guide/s1-maintenance-push-clients.html

Reduce or shrink the size of a non-root LVM mount.

In a system with limited disk size, you might run out of space in one LVM mount while having plenty of free space in another. If both logical volumes are in the same volume group (VG), you can easily take some of the free space away from one logical volume and add it to the one that is low on disk space. Both the lvreduce and lvresize commands can be used to shrink a logical volume; in this example, we will use lvresize.

Note – the steps below have to be done with care; there is a potential for losing data. If the data in the existing partition is critical, make sure you take a backup.

Shrinking an LVM by example – we will reduce the logical volume backing the /usr/local file system from 2.0G to approximately 1.5G.

1. Unmount the partition after confirming that no files on it are in use.

root:homevm:~:# df -Pvh /usr/local
/dev/mapper/vg00-lvol04  2.0G   68M  1.9G   4% /usr/local

root:homevm:~:# lsof /dev/mapper/vg00-lvol04 

root:homevm:~:# umount /usr/local/

2. Do a file system consistency check –

root:homevm:~:# e2fsck -f /dev/mapper/vg00-lvol04 
e2fsck 1.41.12 (17-May-2010)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg00-lvol04: 46/131072 files (0.0% non-contiguous), 25423/524288 blocks

3. Reduce the file system first, so that the logical volume is always at least as large as the file system expects it to be. We deliberately shrink the file system a little below the target size (1400M against a 1500M logical volume); once the logical volume has been resized, we will grow the file system back to fill it exactly.

root:homevm:~:# resize2fs /dev/mapper/vg00-lvol04 1400M
resize2fs 1.41.12 (17-May-2010)
Resizing the filesystem on /dev/mapper/vg00-lvol04 to 358400 (4k) blocks.
The filesystem on /dev/mapper/vg00-lvol04 is now 358400 blocks long.

root:homevm:~:# mount /usr/local/

root:homevm:~:# lvresize -L 1500M /dev/mapper/vg00-lvol04 
  Rounding size to boundary between physical extents: 1.47 GiB
  WARNING: Reducing active and open logical volume to 1.47 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lvol04? [y/n]: y
  Reducing logical volume lvol04 to 1.47 GiB
  Logical volume lvol04 successfully resized

root:homevm:~:# resize2fs /dev/mapper/vg00-lvol04 
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/mapper/vg00-lvol04 is mounted on /usr/local; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/mapper/vg00-lvol04 to 385024 (4k) blocks.
The filesystem on /dev/mapper/vg00-lvol04 is now 385024 blocks long.

root:homevm:~:# df -Pvh /usr/local
/dev/mapper/vg00-lvol04  1.5G   68M  1.4G   5% /usr/local
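
The space taken away from lvol04 is now back in the volume group and can be handed to whichever logical volume is running low. A minimal sketch, assuming a hypothetical lvol02 in the same vg00 that needs the extra room (the -r flag tells lvextend to resize the file system in the same step) –

# check how many free extents the volume group has
vgs vg00
# grow the hypothetical lvol02 by 500M and resize its file system online
lvextend -r -L +500M /dev/mapper/vg00-lvol02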

References –
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/Logical_Volume_Manager_Administration/

When will the SSL certificate for a site expire or in how many days will an SSL certificate expire?

If you are a system administrator, at least once in your career you will have had to manage SSL certificates and make sure they are renewed before they expire. I have seen Linux admins use Nagios to monitor SSL certificates and get notified a few days before expiry, and in some cases admins set up a cron job which polls the sites to be monitored and sends out an email if any certificate for a site is about to expire.

Googling for information on how to check SSL certificate expiration for a site might return results like this one on openssl s_client.

My favorite tool for getting certificate expiry information is the Nagios plugin check_http. The check_http script displays the exact date/time the SSL certificate for a given site expires, as well as how many days are left before expiry.

Installation –

apt-get install nagios-plugins        # Debian/Ubuntu
yum install nagios-plugins-all        # Red Hat/CentOS

On my system, the plugins were installed under the /usr/lib/nagios/plugins directory –

root@linubuvma:/usr/lib/nagios/plugins# cat /etc/issue
Ubuntu 14.04.5 LTS \n \l

root@linubuvma:/usr/lib/nagios/plugins# pwd
/usr/lib/nagios/plugins

root@linubuvma:/usr/lib/nagios/plugins# ls
check_apt      check_dbi       check_dns       check_host       check_ifoperstatus  check_ldap   check_mrtg         check_nntp      check_ntp_time  check_ping   check_rta_multi  check_spop   check_time   negate
check_breeze   check_dhcp      check_dummy     check_hpjd       check_ifstatus      check_ldaps  check_mrtgtraf     check_nntps     check_nwstat    check_pop    check_sensors    check_ssh    check_udp    urlize
check_by_ssh   check_dig       check_file_age  check_http       check_imap          check_load   check_mysql        check_nt        check_oracle    check_procs  check_simap      check_ssmtp  check_ups    utils.pm
check_clamd    check_disk      check_flexlm    check_icmp       check_ircd          check_log    check_mysql_query  check_ntp       check_overcr    check_real   check_smtp       check_swap   check_users  utils.sh
check_cluster  check_disk_smb  check_ftp       check_ide_smart  check_jabber        check_mailq  check_nagios       check_ntp_peer  check_pgsql     check_rpc    check_snmp       check_tcp    check_wave

How to get the expiry information?

The -C option of check_http is what we are looking for. The help page for check_http explains the -C option as below –

-C, --certificate=INTEGER
Minimum number of days a certificate has to be valid. Port defaults to 443
(when this option is used the URL is not checked.)

Let us test whether any of the sites below have certificates that expire within the next 30 days –

root@linubuvma:/usr/lib/nagios/plugins# ./check_http -t 60 -H yahoo.com -C 30
OK - Certificate 'www.yahoo.com' will expire on 10/30/2017 23:59.

root@linubuvma:/usr/lib/nagios/plugins# ./check_http -t 60 -H gmail.com -C 30
OK - Certificate 'mail.google.com' will expire on 03/09/2017 13:34.

root@linubuvma:/usr/lib/nagios/plugins# ./check_http -t 60 -H linuxfreelancer.com -C 30
OK - Certificate 'linuxfreelancer.com' will expire on 08/12/2017 03:01.

To make check_http show how many days are left before the SSL certificate expires, we give it a much larger number of days with -C –

root@linubuvma:/usr/lib/nagios/plugins# ./check_http -t 60 -H yahoo.com -C 1000
WARNING - Certificate 'www.yahoo.com' expires in 298 day(s) (10/30/2017 23:59).

root@linubuvma:/usr/lib/nagios/plugins# ./check_http -t 60 -H gmail.com -C 1000
WARNING - Certificate 'mail.google.com' expires in 63 day(s) (03/09/2017 13:34).

root@linubuvma:/usr/lib/nagios/plugins# ./check_http -t 60 -H linuxfreelancer.com -C 1000
WARNING - Certificate 'linuxfreelancer.com' expires in 219 day(s) (08/12/2017 03:01).

If the output doesn’t show the number of days left, or the status is ‘OK’, keep increasing the number of days. The ‘-t’ option is the connection timeout in seconds. In addition to running it interactively, check_http is very useful for scripting and automated monitoring.
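
As a starting point for automation, a small cron-friendly wrapper could look like the sketch below. The host list, the 30-day threshold and the recipient address are placeholders; the script simply relies on the standard Nagios plugin exit codes (0 for OK, non-zero for WARNING/CRITICAL) and assumes a working local mail command –

#!/bin/bash
# Warn by email if any site's certificate expires within 30 days (sketch - adjust hosts, path and recipient).
PLUGIN=/usr/lib/nagios/plugins/check_http
for host in yahoo.com gmail.com linuxfreelancer.com; do
    msg=$("$PLUGIN" -t 60 -H "$host" -C 30)
    if [ $? -ne 0 ]; then
        echo "$host: $msg" | mail -s "SSL certificate warning for $host" admin@example.com
    fi
done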

In part 1 of this series, we saw how to pull Docker images from Docker Hub and launch Docker containers, and we interacted with a running container by running some bash commands. In this tutorial, we will see how to use a Dockerfile to automate image building for quicker deployment of applications in a container.

A Dockerfile is a text file containing a set of instructions or commands used to build a Docker image.

Prerequisites

1. Complete the tutorial in part 1 before proceeding. You will need a Docker engine running and the latest official Ubuntu Docker image locally hosted.

2. Create directories

$mkdir ~/docker-flask 
$cd ~/docker-flask

3. Add a Dockerfile : ~/docker-flask/Dockerfile
The instructions below will be used to create the Docker image. The build pulls the latest official Ubuntu Docker image as the first layer or base, then resynchronizes the apt package index files from their sources.

A /flask directory will be created in the image, followed by installing Flask and running our Flask app, which we will write in the next step.

# Base layer: latest official Ubuntu image
FROM ubuntu:latest
# Refresh the apt index and install pip plus the Python headers
RUN apt-get update && apt-get install -y python-pip python-dev
# Copy the build context (app.py, requirements.txt) into /flask and work from there
COPY . /flask
WORKDIR /flask
# Install Flask as pinned in requirements.txt
RUN pip install -r requirements.txt
EXPOSE 80
# Run "python app.py" when the container starts
ENTRYPOINT ["python"]
CMD ["app.py"]

4. Write the flask app : ~/docker-flask/app.py
Let us write a practical app rather than just printing hello world. The flask app will return the visitor’s user agent information when the index page is visited.

We will also have a URL under /status/ followed by a valid HTTP status code. Given an HTTP status code by the visitor, the flask web server will respond with that same status code in its header. For instance, if the user visits http://localhost/status/502, the flask server will respond with a ‘502 BAD GATEWAY’ HTTP header.

Let us write it under ~/docker-flask/app.py

from flask import Flask
from flask import request, jsonify

app = Flask(__name__)

@app.route('/')
def user_agent():
    user_agent = request.headers.get('User-Agent')
    return 'Your browser is %s.' % user_agent

@app.route('/status/<int:httpcode>')
def get_status(httpcode):
    httpcode = int(httpcode)
    if httpcode < 100 or httpcode >= 600:
        return jsonify({'Status': 'Invalid HTTP status code'})
    elif httpcode >= 100 and httpcode < 500:
        return jsonify({'Status': 'UP'}) , httpcode
    else:
        return jsonify({'Status': 'DOWN'}) , httpcode

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=80)

5. requirements.txt : ~/docker-flask/requirements.txt

cat ~/docker-flask/requirements.txt
Flask==0.12

By now, your directory structure should look similar to this –

daniel@lindell:~/docker-flask$ pwd
/home/daniel/docker-flask

daniel@lindell:~/docker-flask$ ls
app.py  Dockerfile  requirements.txt

Time to build the Docker image –

sudo docker build -t flaskweb:latest .

This will execute the series of instructions in the Dockerfile. If successful, you will end up with a Docker image named flaskweb and tagged latest –

root@lindell:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
flaskweb            latest              6b45443b6380        22 minutes ago      440.4 MB
ubuntu              latest              104bec311bcd        2 weeks ago         129 MB

If you encounter any errors, check that you don’t have any syntax errors in the Dockerfile.

It is time to run the container –

daniel@lindell:~/docker-flask$ sudo docker run -d -p 80:80 flaskweb
d9af9a1c92bff45b56fc97d13935972b65e3554bfe22ec2f3c102fd26bd20e4c

daniel@lindell:~/docker-flask$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED              STATUS              PORTS                NAMES
d9af9a1c92bf        flaskweb            "python app.py"     About a minute ago   Up About a minute   0.0.0.0:80->80/tcp   drunk_mcnulty

In this case, both the host and the container will be listening on port 80; feel free to modify this according to your setup.
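
For example, to keep the container listening on port 80 but publish it on host port 8080 instead –

sudo docker run -d -p 8080:80 flaskweb

The app would then be reachable at http://localhost:8080/ on the host.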

Time to test it. We will use httpie to query the web server; if you don’t have httpie installed, you can use ‘curl -I’ to get the full headers –

daniel@lindell:~/blog/docker-flask$ http http://localhost/
HTTP/1.0 200 OK
Content-Length: 29
Content-Type: text/html; charset=utf-8
Date: Fri, 30 Dec 2016 14:35:53 GMT
Server: Werkzeug/0.11.13 Python/2.7.12

Your browser is HTTPie/0.9.2.

daniel@lindell:~/blog/docker-flask$ http http://localhost/status/404
HTTP/1.0 404 NOT FOUND
Content-Length: 21
Content-Type: application/json
Date: Fri, 30 Dec 2016 14:35:56 GMT
Server: Werkzeug/0.11.13 Python/2.7.12

{
    "Status": "UP"
}

daniel@lindell:~/blog/docker-flask$ http http://localhost/status/502
HTTP/1.0 502 BAD GATEWAY
Content-Length: 23
Content-Type: application/json
Date: Fri, 30 Dec 2016 14:35:58 GMT
Server: Werkzeug/0.11.13 Python/2.7.12

{
    "Status": "DOWN"
}
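
If httpie is not installed, the same check can be done with curl; as mentioned earlier, -I fetches only the headers –

curl -sI http://localhost/status/502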

Full clean up – if you want to start all over again or want to delete the container and images we have created, I have outlined the steps below. The first step is to stop the running container using the ‘docker stop’ command, passing it the first few digits of the container ID.

Once the container is stopped, use ‘docker rm’ to delete the container. At that point we can proceed with deleting the image, since it is no longer attached to any running container. Use ‘docker rmi’ to delete the image. We will keep the base Ubuntu image for future use.

daniel@lindell:/tmp$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                NAMES
d9af9a1c92bf        flaskweb            "python app.py"     12 minutes ago      Up 12 minutes       0.0.0.0:80->80/tcp   drunk_mcnulty

daniel@lindell:/tmp$ sudo docker stop d9a
d9a

daniel@lindell:/tmp$ sudo docker rm d9a
d9a

daniel@lindell:/tmp$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

daniel@lindell:/tmp$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
flaskweb            latest              6b45443b6380        39 minutes ago      440.4 MB
ubuntu              latest              104bec311bcd        2 weeks ago         129 MB

daniel@lindell:/tmp$ sudo docker rmi 6b45
Untagged: flaskweb:latest
Deleted: sha256:6b45443b63805583f41fbf60aaf5cf746b871fdcfa8fe1c6d5adfb52870e7c89
Deleted: sha256:02062a8ea251d993f54e15f9e5654e40894449430acd045476000cd9ebbdf459
Deleted: sha256:fa2439cd5bc8a53152877c1dc3b12a60ab808bcfe5078549ea5e945f462330da
Deleted: sha256:3bac38b223d80a4db6c4283fd56275fe05ceeab6a1dfd81871aa14c6cda387df
Deleted: sha256:d97357dc5d7454e3b7757f2c348323c84d1902dd806792c53d1fd0ca7813b091
Deleted: sha256:b55dd5bd3326ec4657dc389f4aae69c34a7ba222872f7b868eb8de69d7f69dab
Deleted: sha256:eab59ae84eb136339d08fbacd2905a1ee80a0c875e8e14a4d5184fac30445714
Deleted: sha256:588253a9066c49786fcd0121353e7f0f2cea05cebbc6b9cef67f0c823d23dce8
Deleted: sha256:fe9f27a1cb9165531a1f5149c16ebcd522422e4ac2610035bbbcada7fd0b7551
Deleted: sha256:18ca1bc40895f6f97cae28fa5707bde537ac27023762303f98912c11549431ae

daniel@lindell:/tmp$ sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              104bec311bcd        2 weeks ago         129 MB

References –
https://docs.docker.com/engine/reference/builder/
https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
http://flask.pocoo.org/docs/0.12/quickstart/