Author Archive

Google Cloud Platform – NEXT 2017

As of the beginning of 2017, Amazon Web Services (AWS) is the leader in cloud-based infrastructure as a service (IaaS), followed by Microsoft. The cloud business is still competitive and many enterprises have yet to migrate fully to the cloud. Cloud service providers compete continuously on quality of service, the diversity and range of services offered, price, and more.

A newer entrant to the cloud business is Google, which has recently started targeting big enterprises as well as individual developers and small businesses. The core infrastructure Google has used internally for years to serve global users of services such as Gmail, Google Maps, and Google Search is now being offered to customers. The Gartner Magic Quadrant for 2016 placed it in the visionaries quadrant.

Follow NEXT on Twitter
Google Cloud on Facebook

To get started with Google Cloud Platform (GCP), go to the GCP documentation page.

For a list of solutions and products offered by GCP – GCP products.

Linux – run a scheduled command once

When we think of running scheduled tasks in Linux, the first tool that comes to mind for most Linux users and admins is cron. Cron is very popular and useful when you want to run a task regularly – say at a given interval: hourly, weekly, or even every time the system reboots. The scheduled tasks are faithfully executed by the crond daemon based on the schedule we set; if crond missed a task because the machine was not running 24/7, then anacron takes care of it. My topic today, though, is at, which executes a scheduled task only once, at a later time.
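For contrast, a cron job describes a recurring schedule. This illustrative crontab entry (the script path is made up) runs a cleanup script at the start of every hour:

```
0 * * * * /usr/local/bin/cleanup.sh
```

at, on the other hand, takes a one-shot time specification, as the examples below show.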

1. Adding future commands interactively

Let us schedule a specific command to run 10 minutes from now; press CTRL+D once you have entered the command –

daniel@lindell:~$ at now +10 minutes
at> ps aux &> /tmp/at.log
[[PRESS CTRL+D HERE]]
job 4 at Wed Mar  1 21:24:00 2017

Now the above command ‘ps aux’ is scheduled to run 10 minutes from now, only once. We can check the pending jobs using the atq command –

daniel@lindell:~$ atq
4	Wed Mar  1 21:24:00 2017 a daniel

2. Remove scheduled jobs from the queue using atrm or at -r

daniel@lindell:~$ at now +1 minutes
at> ps aux > /tmp/atps.logs
at> <EOT>
job 8 at Wed Mar  1 21:25:00 2017
daniel@lindell:~$ atq
8	Wed Mar  1 21:25:00 2017 a daniel
daniel@lindell:~$ atrm 8
daniel@lindell:~$ atq
daniel@lindell:~$ 

3. Run jobs from a script or file.

In some cases the job you want to run is a script –

daniel@lindell:~$ at -f /tmp/myscript.sh 8:00 AM tomorrow
daniel@lindell:~$ atq
11	Thu Mar  2 08:00:00 2017 a daniel

4. Embed shell commands inline –

at now +10 minutes <<-EOF
if [ -d ~/pythonscripts ]; then
 find ~/pythonscripts/ -type f -iname '*.pyc' -delete
fi
EOF

5. View the contents of a scheduled task using ‘at -c JOBNUMBER’:

daniel@lindell:~$ at now +10 minutes <<-EOF
> if [ -d ~/pythonscripts ]; then
>  find ~/pythonscripts/ -type f -iname '*.pyc' -delete
> fi
> EOF
job 13 at Wed Mar  1 21:51:00 2017

daniel@lindell:~$ atq
11	Thu Mar  2 08:00:00 2017 a daniel
12	Wed Mar  1 21:45:00 2017 a daniel
13	Wed Mar  1 21:51:00 2017 a daniel


daniel@lindell:~$ at -c 13
 [[ TRUNCATED ENVIRONMENTAL STUFF ]]
cd /home/daniel || {
	 echo 'Execution directory inaccessible' >&2
	 exit 1
}
if [ -d ~/pythonscripts ]; then
 find ~/pythonscripts/ -type f -iname '*.pyc' -delete
fi

In this small tutorial about the at utility, we saw some of its use cases – especially where we had to execute a scheduled task only once. The time specification it uses is human friendly; for example, it supports specs such as midnight, noon, teatime, or today. Feel free to read the man page for details.

References –

https://linux.die.net/man/1/at

Ansible – non-standard SSH port

How to run playbooks against a host running ssh on a port other than port 22.

Ansible is a simple automation and configuration management tool which allows you to execute a command or script on remote hosts, either ad hoc or using playbooks. It is push based, and uses SSH to run the playbooks against remote hosts. The steps below show how to run Ansible playbooks against a host running SSH on port 2222.

One of the hosts managed by Ansible is running SSH on a non-default port. It is a Docker container listening on port 2222. Strictly speaking, sshd in the container listens on port 22, but the host redirects port 2222 on the host to port 22 in the container.

1. Pass the port as an extra variable on the command line –


 ansible-playbook tasks/app-deployment.yml --check -e ansible_ssh_port=2222

2. Specify the port in the inventory or hosts file –

In the hosts file, set the hostname to the format ‘server:port’ –

[docker-hosts]
docker1:2222
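Alternatively, the port can be stored as a host variable in the inventory, which keeps the hostname itself clean. In Ansible 2.0 and later the variable is named ansible_port; older releases use ansible_ssh_port –

```ini
[docker-hosts]
docker1 ansible_port=2222
```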

Let us run the playbook now –

root@linubuvma:/tmp/ansible# cat tasks/app-deployment.yml
- hosts: docker-hosts
  vars:
    app_version: 1.1.0
  tasks:
  - name: install git
    apt: name=git state=latest
  - name: Checkout the application from git
    git: repo=https://github.com/docker/docker-py.git dest=/srv/www/myapp version={{ app_version }}
    register: app_checkout_result


root@linubuvma:/tmp/ansible# ansible-playbook tasks/app-deployment.yml

PLAY [docker-hosts] ************************************************************

TASK: [install git] ***********************************************************
changed: [docker1]

TASK: [Checkout the application from git] *************************************
changed: [docker1]

PLAY RECAP ********************************************************************
docker1                    : ok=2    changed=2    unreachable=0    failed=0

References –

http://docs.ansible.com/
http://docs.ansible.com/ansible/intro_inventory.html

How to add a disk to a running VM

This tutorial will show you how to get a running Linux VM to detect a new VMware virtual disk without rebooting it.

1. Start by adding the SCSI virtual disk

In my case I am using VMware Workstation and following this link to add the disk. Use the steps relevant to your system; it shouldn’t be that difficult.

2. Status of the disk on the VM

Running fdisk on my Linux VM shows that it has two disks attached – /dev/sda and /dev/sdb – both of the same size, 107.4 GB. The latter holds an LVM partition, which I can resize on the fly.

[root@lincenvma ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061f7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39         549     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             549       13055   100453376   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ea04d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3265    26226081   8e  Linux LVM

Disk /dev/mapper/vg_target00-lv_target00: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x153067d3

                               Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vg_target00-lv_target00p1               2        2049     2097152   83  Linux


[root@lincenvma ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdb1 

[root@lincenvma ~]# ls /sys/class/scsi_host/
host0  host1  host2

Apparently the VM didn’t automatically detect the newly added disk, which takes us to the next step: re-scanning the SCSI bus.

3. Rescan the SCSI bus

This is where we run the trigger command to scan the SCSI bus for everything – channel number, SCSI target ID, and LUN values. Run dmesg or tail /var/log/messages in another window to see the action live –

[root@lincenvma ~]# echo "- - -" > /sys/class/scsi_host/host2/scan

[root@lincenvma ~]# dmesg
sd 2:0:2:0: [sdc] Write Protect is off
sd 2:0:2:0: [sdc] Mode Sense: 61 00 00 00
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
 sdc: unknown partition table
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
sd 2:0:2:0: [sdc] Attached SCSI disk


[root@lincenvma ~]# tail -f /var/log/messages
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] 2097152 512-byte logical blocks: (1.07 GB/1.00 GiB)
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Write Protect is off
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sdc: unknown partition table
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Attached SCSI disk

We can see that the system detected the new disk and identified it as /dev/sdc.

On RHEL/CentOS 5.4 or above, the script /usr/bin/rescan-scsi-bus.sh has the same effect.
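If you are not sure which hostN corresponds to the new disk’s controller, it is harmless to rescan all of them. Below is a minimal sketch of that loop; the sysfs directory is parameterized purely for illustration, and the function name is my own –

```shell
# Rescan every SCSI host rather than a specific hostN.
# Assumes the standard sysfs layout under /sys/class/scsi_host.
rescan_scsi_hosts() {
    local sysfs="${1:-/sys/class/scsi_host}"
    local scan
    for scan in "$sysfs"/host*/scan; do
        # "- - -" is a wildcard for channel, SCSI target ID and LUN
        [ -w "$scan" ] && echo "- - -" > "$scan"
    done
    return 0
}

# On a real system, as root:
# rescan_scsi_hosts
```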

4. Validate

At the bottom of the fdisk output, the new disk shows up as /dev/sdc with a size of 1073 MB –

[root@lincenvma ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdb1  /dev/sdc

[root@lincenvma ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061f7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39         549     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             549       13055   100453376   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ea04d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3265    26226081   8e  Linux LVM

Disk /dev/mapper/vg_target00-lv_target00: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x153067d3

                               Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vg_target00-lv_target00p1               2        2049     2097152   83  Linux


Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

From here, you can partition the disk and mount it directly, or create a PV and add it to the existing volume group to increase the size of the LVM volume.
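The LVM route can be sketched as a short sequence of commands – an illustrative sketch only, wrapped in a function of my own naming; it uses the volume group and logical volume names from this example, must run as root, and resize2fs applies to ext2/3/4 filesystems –

```shell
# Sketch: turn a new disk into a PV and use it to grow an existing LVM volume.
grow_lvm_with_disk() {
    local disk="$1" vg="$2" lv="$3"
    pvcreate "$disk" &&                      # initialize the disk as a physical volume
    vgextend "$vg" "$disk" &&                # add the PV to the volume group
    lvextend -l +100%FREE "/dev/$vg/$lv" &&  # grow the LV into the new free space
    resize2fs "/dev/$vg/$lv"                 # grow the ext2/3/4 filesystem online
}

# Using the names from this example (run as root):
# grow_lvm_with_disk /dev/sdc vg_target00 lv_target00
```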

References –

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/adding_storage-device-or-path.html

http://serverfault.com/questions/490397/what-does-in-echo-sys-class-scsi-host-host0-scan-mean

https://www.vmware.com/support/ws5/doc/ws_disk_add_virtual.html

How to get the original file from an RPM.

You might have accidentally deleted a configuration or binary file that was installed as part of a package, or perhaps you modified the original file and want to restore it but didn’t take a backup – this blog will help you resolve similar issues.

The steps below are for Red Hat/CentOS based Linux systems, where the package was installed using rpm or yum. They outline how to grab the RPM package, unpack it, and gain access to the files inside. I will demo the steps I used to recover ntp.conf –

1. Identify the package owning/containing the file –

[root@tester ~]# rpm -qf /etc/ntp.conf
ntp-4.2.6p5-1.el6.centos.x86_64

2. Download the original package –
We will download the RPM package to /tmp in order to unpack it later –

[root@tester ~]# cd /tmp/
[root@tester tmp]# yumdownloader ntp-4.2.6p5-1.el6.centos.x86_64
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.aol.com
 * epel: reflector.westga.edu
 * extras: centos-distro.cavecreek.net
 * updates: lug.mtu.edu
ntp-4.2.6p5-1.el6.centos.x86_64.rpm    | 592 kB     00:00
[root@tester tmp]# ls -lh ntp-4.2.6p5-1.el6.centos.x86_64.rpm
-rw-r--r--. 1 root root 592K Mar  9 03:19 ntp-4.2.6p5-1.el6.centos.x86_64.rpm

Note – you can follow the steps in this link to install yumdownloader, or use alternative means to download a package. The short answer: just run ‘yum install yum-utils’ to install yumdownloader.

3. Extract the RPM package –

We will use rpm2cpio to convert the RPM package into a cpio archive, then pipe it to cpio to copy the files out of the archive –

[root@tester tmp]# rpm2cpio ntp-4.2.6p5-1.el6.centos.x86_64.rpm | cpio -i --make-directories
3344 blocks
[root@tester tmp]# ls
etc  ntp-4.2.6p5-1.el6.centos.x86_64.rpm  usr  var  yum_save_tx-2014-03-09-01-00h9I83Y.yumtx

4. Access the file you are looking for –

Once we have extracted the RPM package, the directory structure is easy to navigate – for instance, if we are looking for ntp.conf, it is under etc/ntp.conf – the layout mirrors that of the OS –

[root@tester tmp]# ls -al etc/
total 28
drwxr-xr-x. 6 root root 4096 Mar  9 03:19 .
drwxrwxrwt. 6 root root 4096 Mar  9 03:19 ..
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 dhcp
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 ntp
-rw-r--r--. 1 root root 1778 Mar  9 03:19 ntp.conf
drwxr-xr-x. 3 root root 4096 Mar  9 03:19 rc.d
drwxr-xr-x. 2 root root 4096 Mar  9 03:19 sysconfig
[root@tester tmp]# ls -al etc/ntp
ntp/      ntp.conf
[root@tester tmp]# ls -al etc/ntp.conf
-rw-r--r--. 1 root root 1778 Mar  9 03:19 etc/ntp.conf
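The whole procedure can be condensed into a small helper function – a sketch under the assumption that yumdownloader and rpm2cpio are installed; the function name and the temporary working directory are my own –

```shell
# Sketch: find the package owning a file, download it, and unpack it into a
# temporary directory so the original file can be copied back.
# Assumes a RHEL/CentOS system with yumdownloader and rpm2cpio available.
restore_from_rpm() {
    local file="$1" pkg workdir
    pkg=$(rpm -qf "$file") || return 1               # 1. package owning the file
    workdir=$(mktemp -d) || return 1
    ( cd "$workdir" &&
      yumdownloader "$pkg" &&                        # 2. download the original package
      rpm2cpio "$pkg".rpm | cpio -i --make-directories ) || return 1   # 3. unpack
    echo "Original file at: $workdir/${file#/}"      # 4. copy it back manually
}

# restore_from_rpm /etc/ntp.conf
```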

At this point, you can view the files from the original RPM and copy the ones you need. Alternatively, as described in the links below, you can quickly re-install the original files with ‘yum reinstall ntp’.

References –

https://access.redhat.com/solutions/10154
https://www.g-loaded.eu/2012/03/26/restore-original-configuration-files-from-rpm-packages/

tcpdump – how to grep or save output in real time

Tcpdump is a handy tool for capturing network packets. It keeps capturing packets until it receives a SIGINT or SIGTERM signal, or until the specified number of packets has been processed. If you have tried to pipe the output of tcpdump to a file or to grep it, you will have noticed a significant delay before you see any output. The reason is that tcpdump buffers output in 4 KB chunks and doesn’t flush until 4 KB of data has been captured.

To get around the buffering, you can use the ‘-l’ option to see the packets captured in real time, so that you can ‘grep’ the output or ‘tee’ it to a file. From the man page –


-l     Make stdout line buffered.  Useful if you want to see the data while capturing it.  
     E.g. "tcpdump  -l  |  tee dat" or "tcpdump  -l   > dat  &  tail  -f  dat"
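The buffering is not unique to tcpdump – most programs block-buffer stdout when it is a pipe. GNU coreutils’ stdbuf demonstrates the same idea in a pure-shell pipeline, no tcpdump or root privileges needed (the log file here is just a temp file) –

```shell
# stdbuf -oL forces grep to line-buffer its stdout, mimicking tcpdump's -l;
# without it, grep would sit on its matches in a ~4K block buffer while
# the output is piped to tee.
log=$(mktemp)
printf 'query google.com\nquery example.org\n' | stdbuf -oL grep google | tee "$log"
```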

Send output to a file while watching the captured packets in real time –

root@linubuvma:~# tcpdump -l -i any -qn port 53 | tee -a /tmp/dnslogs
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
09:02:48.772892 IP 192.168.10.206.29185 > 192.168.10.109.53: UDP, length 33
09:02:48.773196 IP 192.168.10.206.35333 > 192.168.10.109.53: UDP, length 33
09:02:48.775062 IP 192.168.10.109.53 > 192.168.10.206.29185: UDP, length 78
09:02:48.775085 IP 192.168.10.109.53 > 192.168.10.206.35333: UDP, length 117
09:02:50.274318 IP 192.168.10.206.46983 > 192.168.10.109.53: UDP, length 33
09:02:50.274695 IP 192.168.10.206.55061 > 192.168.10.109.53: UDP, length 33
09:02:50.275531 IP 192.168.10.109.53 > 192.168.10.206.46983: UDP, length 78
09:02:50.276384 IP 192.168.10.109.53 > 192.168.10.206.55061: UDP, length 117

Grep text pattern in real time –

root@linubuvma:~# tcpdump -l -i any -vv |grep --color -i google
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
linubuvma.home.net.34647 > ns1.home.net.domain: [bad udp cksum 0x96c1 -> 0x4797!] 34365+ A? google.com. (28)
linubuvma.home.net.34647 > ns1.home.net.domain: [bad udp cksum 0x96c1 -> 0x9bf1!] 12744+ AAAA? google.com. (28)
ns1.home.net.domain > linubuvma.home.net.34647: [udp sum ok] 12744 q: AAAA? google.com. 1/0/0 google.com. AAAA 2607:f8b0:4002:c07::66 (56)
ns1.home.net.domain > linubuvma.home.net.34647: [udp sum ok] 34365 q: A? google.com. 6/0/0 google.com. A 74.125.196.139, google.com. A 74.125.196.100, google.com. A 74.125.196.101, google.com. A 74.125.196.102, google.com. A 74.125.196.113, google.com. A 74.125.196.138 (124)
173 packets captured
240 packets received by filter
0 packets dropped by kernel

A handy cheat sheet for tcpdump – https://comparite.ch/tcpdumpcs

References –
http://www.tcpdump.org/tcpdump_man.html
http://unix.stackexchange.com/questions/15989/how-to-process-pipe-tcpdumps-output-in-realtime