Archive for the ‘How tos’ Category

C programming Language – Code snippets

C Programming Language, 2nd Edition

Compiling and running the sample code using gcc:

gcc sample.c -o sample

Chapter 2 – Types, Operators and Expressions

1. Convert to lower case.

#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        printf("usage: lower string\n");
        return -1;
    }
    while (*argv[1] != '\0') {
        if (*argv[1] >= 'A' && *argv[1] <= 'Z')
            putchar(*argv[1] + 'a' - 'A');
        else
            putchar(*argv[1]);
        ++argv[1];
    }
    return 0;
}

2. Get bits

#include <stdio.h>

unsigned getbits(unsigned x, int p, int n);

int main(void)
{
    unsigned x = 16;

    printf("getbits(%u, 4, 3) = %u\n", x, getbits(x, 4, 3));
    return 0;
}

/* getbits: get n bits starting at position p */
unsigned getbits(unsigned x, int p, int n)
{
    return (x >> (p + 1 - n)) & ~(~0u << n);
}

3. Count one bits

#include <stdio.h>

int bitcount(unsigned x);

int main(void)
{
    unsigned short x = 38;

    printf("%d has %d 1 bits\n", x, bitcount(x));
    return 0;
}

/* bitcount: count 1 bits in x */
int bitcount(unsigned x)
{
    int b;

    for (b = 0; x != 0; x >>= 1)
        if (x & 1)
            b++;
    return b;
}

4. Remove character from string

#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        printf("usage: del string char\n");
        return -1;
    }
    while (*argv[1] != '\0') {
        if (*argv[1] != *argv[2])
            putchar(*argv[1]);
        ++argv[1];
    }
    return 0;
}


5. Convert x to binary

#include <stdio.h>

#define LEN 16

int main(void)
{
    int x = 112, counter = 0;
    int binary[LEN] = {0};

    while (x > 0 && counter < LEN) {
        binary[counter] = x % 2;
        x /= 2;
        counter++;
    }
    while (--counter >= 0)
        printf("%d", binary[counter]);
    printf("\n");
    return 0;
}

6. Convert char to integer

#include <stdio.h>

#define NUM 1

int main(int argc, char *argv[])
{
    int n = 0;

    if (argc != 2) {
        printf("usage: atoi arglist\n");
        return -1;
    }
    while (*argv[NUM] != '\0') {
        if (*argv[NUM] >= '0' && *argv[NUM] <= '9')
            n = 10 * n + (*argv[NUM] - '0');
        ++argv[NUM];
    }
    printf("%d\n", n);
    return 0;
}



Linux – fast file search

Linux – fast file system search with locate and updatedb



The find command is the most commonly used search utility in Linux. GNU find searches the directory tree rooted at each given starting-point by evaluating the given expression from left to right, according to the rules of precedence.

There is an alternative and faster way of searching for files and directories in Linux though, and that is the locate command. It goes hand in hand with the updatedb utility, which keeps an indexed database of the files on your system. The locate tool simply reads the database created by updatedb.
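The split between updatedb and locate is what makes the search fast: the expensive directory walk happens once, and every query only scans a flat list of names. Here is a toy sketch of that idea in Python. It is purely illustrative, not how mlocate is actually implemented (mlocate uses a compressed on-disk database and merges unchanged subtrees):

```python
import os

def build_index(root):
    """Walk the tree once, like updatedb does, and collect every path."""
    index = []
    for dirpath, dirnames, filenames in os.walk(root):
        index.append(dirpath)
        for name in filenames:
            index.append(os.path.join(dirpath, name))
    return index

def search(index, pattern):
    """Scan the prebuilt index, like locate does - no disk walk needed."""
    return [path for path in index if pattern in path]
```

Every query after the first one reuses the index, which is why locate answers in milliseconds while find has to re-walk the tree on each run.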

Installation –

sudo apt-get -y install mlocate           [Debian/Ubuntu]
sudo yum -y install mlocate               [CentOS/Redhat]

updatedb usually runs as a daily cron job that updates the default database (‘/var/lib/mlocate/mlocate.db’). To refresh the database manually, run the ‘updatedb’ command. That will take a while depending on the number of files on your system, how long ago updatedb last ran and other file-related changes.

First time – to update the default database, run any of the below commands depending on your requirements. Most likely, the first and/or third command is what you need.

updatedb                    # index the whole system into the default database
updatedb -U /some/path      # index a specific directory that you will search frequently
updatedb -v                 # verbose mode

Time to search –
The locate command is the utility to search for entries in an mlocate database.

Some examples –

locate cron         # any file or directory with cron in its name
locate -i cron      # case insensitive
locate -c cron      # only print number of found entries
locate -r 'cron$'   # regex - only files or directories with names ending in cron.
locate -r '/usr/.*ipaddress.*whl$'   # regex for eg. /usr/share/python-wheels/ipaddress-0.0.0-py2.py3-none-any.whl

locate can also print statistics on the number of files and directories in the database and the space used by updatedb's default database:

root@cloudclient:/tmp# locate -S
Database /var/lib/mlocate/mlocate.db:
	28,339 directories
	185,661 files
	11,616,040 bytes in file names
	4,481,938 bytes used to store database

Customizing updatedb
updatedb can be customized to write its database to a file other than the default, and we can also change the directory tree it indexes instead of indexing from the root. We can then tell locate to use the custom database.

In the below example, I am indexing the files under my home directory into the /tmp/home.db database, and then running locate against this custom DB. As you can see, the number of files and directories is much lower, and the search is therefore much faster, since only that specific directory tree has to be scanned.

$ updatedb -U ~ -o /tmp/home.db
$ locate -d /tmp/home.db cron
$ locate -d /tmp/home.db -S
Database /tmp/home.db:
	3,530 directories
	29,943 files
	2,635,675 bytes in file names
	762,621 bytes used to store database


Infoblox dns api

Infoblox dns management – using the REST api with Python

Infoblox provides a product to manage your DNS, DHCP and IPAM through a single management interface. In this short article, I will walk you through automating some of the day-to-day operational work of managing DNS using the Infoblox REST API. The same REST-based API can also be used to manage DHCP and IPAM.

The Infoblox WAPI is the REST interface we will interact with. In a highly available DNS setup, WAPI requests go to the HA Grid Master IP or hostname. The requests typically carry arguments and a body. A great resource that helped me get started is a github repo of Infoblox API python modules.

Clone the Infoblox Python modules repo to get started –

cd /tmp
git clone

The class initialization of the infoblox API module takes certain parameters, including ones used for authentication. Set these values according to your environment.

        """ Class initialization method
        :param iba_ipaddr: IBA IP address of management interface
        :param iba_user: IBA user name
        :param iba_password: IBA user password
        :param iba_wapi_version: IBA WAPI version (example: 1.0)
        :param iba_dns_view: IBA default view
        :param iba_network_view: IBA default network view
        :param iba_verify_ssl: IBA SSL certificate validation (example: False)
        """

Once you have the right parameters, you can write scripts which utilize the module. Here is a simple python script to get A record details, given an IP address and domain.

Make sure you work under the directory where you cloned the infoblox github repo –

Script path: /tmp/
Usage example: python /tmp/

Script to pull A record details of a DNS zone –

#!/usr/bin/env python

import infoblox
import sys
import requests
import json
import socket

# Connection settings - placeholders; set these according to your environment
ibx_server = 'gridmaster.example.com'
ibx_username = 'admin'
ibx_password = 'infoblox'
ibx_version = '1.0'
ibx_dns_view = 'default'
ibx_net_view = 'default'

def Usage():
    print("{0} {1} {2}".format(sys.argv[0], 'IP-ADDRESS', 'FQDN'))
    sys.exit(1)

if len(sys.argv) < 3:
    Usage()

myip = sys.argv[1]
myfqdn = sys.argv[2]

# Validate the IP address
try:
    socket.inet_aton(myip)
except socket.error:
    print("Not valid IP.")
    sys.exit(1)

# Create a session
ibx = infoblox.Infoblox(ibx_server, ibx_username, ibx_password, ibx_version,
                        ibx_dns_view, ibx_net_view, iba_verify_ssl=False)

# Get A record details
payload = json.dumps({'ipv4addr': myip, 'name': myfqdn})
my_url = 'https://' + ibx.iba_host + '/wapi/v' + ibx.iba_wapi_version + '/record:a'
r = requests.get(url=my_url, auth=(ibx.iba_user, ibx.iba_password),
                 verify=ibx.iba_verify_ssl, data=payload)
print(r.json())

You can also use the existing class methods defined in the infoblox module. In the below example, I am using the ‘create_cname_record’ method to create an Alias.

ibx=infoblox.Infoblox(ibx_server, ibx_username, ibx_password, ibx_version, ibx_dns_view, ibx_net_view, iba_verify_ssl=False)
ibx.create_cname_record(canonical, name)

If you can’t find a particular method in the infoblox module, it shouldn’t be difficult to write one. Follow the API reference documentation for the structure of the WAPI calls.
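For instance, a helper for an object type the module doesn't cover could be sketched as below. The host, credentials and version values are placeholders; the URL layout (https://<grid-master>/wapi/v<version>/<object>) and the record:a object type follow the pattern already used in the script above.

```python
import json

def build_wapi_url(iba_host, iba_wapi_version, objtype):
    """Build a WAPI endpoint URL, e.g. https://gm.example.com/wapi/v1.0/record:a"""
    return 'https://{0}/wapi/v{1}/{2}'.format(iba_host, iba_wapi_version, objtype)

def build_a_record_payload(ipv4addr, name):
    """JSON body selecting an A record by address and name."""
    return json.dumps({'ipv4addr': ipv4addr, 'name': name})

# The actual call would then be (requires the requests library and real credentials):
# r = requests.get(build_wapi_url(host, version, 'record:a'),
#                  auth=(user, password), verify=False,
#                  data=build_a_record_payload('10.0.0.5', 'www.example.com'))
```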

Note – in some cases, you have to make multiple API calls to perform a task. One example is updating the TTL for a DNS entry: on the first call, you get the host reference id, and on the second call you update the TTL. The below example shows a simple python script to update the TTL (in seconds) for an existing FQDN entry.

Usage example – run the script with an existing FQDN and a TTL in seconds, e.g. 600.

#!/usr/bin/env python

import infoblox
import sys
import json
import requests

# Connection settings - placeholders; set these according to your environment
ibx_server = 'gridmaster.example.com'
ibx_username = 'admin'
ibx_password = 'infoblox'
ibx_version = '1.0'
ibx_dns_view = 'default'
ibx_net_view = 'default'

def Usage():
    print("{0} {1} {2}".format(sys.argv[0], 'ExistingFQDN', 'TTL'))
    sys.exit(1)

if len(sys.argv) < 3:
    Usage()

oldname = sys.argv[1]
ttl = int(sys.argv[2])

# Create a session
ibx = infoblox.Infoblox(ibx_server, ibx_username, ibx_password, ibx_version,
                        ibx_dns_view, ibx_net_view, iba_verify_ssl=False)

# First call - look up the host object to get its reference id
my_url = 'https://' + ibx.iba_host + '/wapi/v' + ibx.iba_wapi_version + '/record:host'
r = requests.get(url=my_url, auth=(ibx.iba_user, ibx.iba_password),
                 verify=ibx.iba_verify_ssl, params={'name': oldname})
result = r.json()

# Validate oldname exists
if not result or result[0]['name'] != oldname:
    print(oldname + " does not exist.")
    sys.exit(1)
host_ref = result[0]['_ref']

# Second call - update the TTL using the reference id
payload = json.dumps({'ttl': ttl, 'use_ttl': True})
my_url = 'https://' + ibx.iba_host + '/wapi/v' + ibx.iba_wapi_version + '/' + host_ref
r = requests.put(url=my_url, auth=(ibx.iba_user, ibx.iba_password),
                 verify=ibx.iba_verify_ssl, data=payload)
if r.ok:
    print("TTL updated successfully.")
else:
    print("Error - {}".format(r.content))

References –

Products page –

REST API documentation –


Ansible : rolling upgrades or updates.

Making a change to live servers in production is something which has to be done with extreme care and planning. Several deployment types such as blue/green, canary, rolling update are in use today to minimize user impact. Ansible can be used to orchestrate a zero-downtime rolling change to a service.

A typical upgrade of an application, such as patching, might go like this –

  1. Disable monitoring alerts for the node
  2. Disable the node or pull it out of the load balancer
  3. Make changes to the server
  4. Reboot the node
  5. Wait for the node to be up and do a sanity check
  6. Put the node back into the load balancer
  7. Turn monitoring of the node back on

Rinse and repeat.
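Stripped of the tooling specifics, the loop above is simply: take one node at a time, change it, verify it, move on, and halt the rollout if verification fails. A minimal sketch in plain Python (the node names and callbacks are hypothetical):

```python
def rolling_update(nodes, apply_change, health_check):
    """Update nodes one at a time; abort the rollout on the first unhealthy node."""
    done = []
    for node in nodes:
        apply_change(node)           # disable alerts, drain, patch, reboot...
        if not health_check(node):   # sanity check before taking traffic again
            raise RuntimeError('%s failed health check; rollout halted' % node)
        done.append(node)            # node is back behind the load balancer
    return done
```

The key property is that a bad patch takes down at most one node, because the rollout stops at the first failed health check.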

Ansible would be a great choice for orchestrating the above steps. Let us start with an inventory of web servers, a load balancer and a monitoring node running nagios –




The web servers are running apache, and we will patch apache and the kernel. For the patches to take effect, the servers need to be recycled. We will perform the patching one node at a time, wait for the node to be healthy, and then move on to the next. The first portion of our playbook would be something like this –

- hosts: webservers
  serial: 1

  tasks:
    - name: Stop apache service
      service: name=httpd state=stopped

    - name: Update apache
      yum: name=httpd state=latest

    - name: Update kernel
      yum: name=kernel state=latest

    - name: Reboot server
      shell: /sbin/shutdown -r +1

    - name: Wait for webserver to come up
      local_action: wait_for host={{ inventory_hostname }} port=80 state=started delay=65 timeout=300

I haven’t included the playbook tasks for disabling/enabling monitoring or for removing/adding the node to the load balancer; the procedure differs depending on which monitoring system and load balancer technology you are using. In addition, the sanity check shown is a simple port 80 probe; in reality, a much more sophisticated validation can be done.


Getting started with Google Cloud Platform(GCP)

Google provides cloud services similar to those of other providers such as Amazon Web Services (AWS) and Microsoft Azure, under the name Google Cloud Platform (GCP). You can easily get started by signing up for free –

List of all products provided in GCP –

Google provides several ways to interact with its services-

1. GCP console (web ui)
GCP console is a web user interface which lets you interact with GCP resources. You can view, create, update and delete cloud resources from this page.

How to create a Linux VM (instance) using the console –

2. Command Line Interface (gcloud cli toolset)
Install gcloud :

The gcloud toolkit is a command line interface for interacting with GCP resources. It is very useful for automating cloud tasks, and with its command completion and help pages it is almost a necessity to familiarize yourself with this tool.

How to create an instance using gcloud cli –

3. Cloud deployment manager
GCP deployment manager allows you to create, delete and update GCP resources in parallel by declaring a set of templates written in jinja2 or python. Templates can be shared with other teams and can be re-used with little modification.

What deployment manager is and how it works –

How to deploy a GCP instance using Deployment Manager –

4. APIs
Google provides application programming interfaces (APIs) to interact with its GCP services. Google recommends using the client libraries over calling the RESTful APIs directly.

a. Client libraries

List of client libraries for different programming languages –

How to interact with Google Compute Engine(GCE) using the Python client library –

b. RESTful or raw APIs

API Reference –

Method for creating an instance –
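With the Python client library, creating an instance mostly comes down to building a request body and passing it to the instances().insert method. The helper below only constructs a minimal body; the project, zone, machine type and image values are placeholders to replace with your own, and the commented-out call requires the google-api-python-client package plus valid credentials:

```python
def instance_body(name, zone, machine_type, source_image):
    """Minimal request body for the Compute Engine instances().insert method."""
    return {
        'name': name,
        'machineType': 'zones/{0}/machineTypes/{1}'.format(zone, machine_type),
        'disks': [{
            'boot': True,
            'autoDelete': True,
            'initializeParams': {'sourceImage': source_image},
        }],
        'networkInterfaces': [{'network': 'global/networks/default'}],
    }

# The actual call (placeholder project/zone/image values):
# from googleapiclient import discovery
# compute = discovery.build('compute', 'v1')
# compute.instances().insert(
#     project='my-project', zone='us-central1-a',
#     body=instance_body('vm1', 'us-central1-a', 'n1-standard-1',
#                        'projects/debian-cloud/global/images/family/debian-9')).execute()
```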

References –
Google Cloud Platform Services overview

Ansible – How to run a portion of a playbook using tags.

If you have a large playbook, it may become useful to be able to run a specific part of it or only a single task without running the whole playbook. Both plays and tasks support a “tags:” attribute for this reason.

In this specific scenario, I have a playbook which configures all production servers from the moment the servers boot until they start taking traffic. While testing the plays in the dev environment, I was debugging an issue in the part which does DNS configuration. This is where the “tags” attribute comes in handy –

1. Tag the task –

- name: Configure resolv.conf
  template: src=resolv.conf.j2 dest=/etc/resolv.conf
  when: ansible_hostname != "ns1"
  tags:
    - dnsconfig

2. Run only the tasks tagged with a specific name –

root@linubuvma:/etc/ansible# ansible-playbook -i dc1/hosts dc1/site.yml --tags "dnsconfig" --check

PLAY [Setup data center 1 servers] *****************************************************

TASK: [common | Configure resolv.conf] ****************************************
skipping: [ns1]
changed: [docker]
ok: [ns2]
ok: [whitehat]
ok: [mail]
ok: [www]
ok: [ftp]

PLAY RECAP ********************************************************************
whitehat                   : ok=1    changed=0    unreachable=0    failed=0
docker                     : ok=1    changed=1    unreachable=0    failed=0
ns1                        : ok=0    changed=0    unreachable=0    failed=0
ns2                        : ok=1    changed=0    unreachable=0    failed=0
mail                       : ok=1    changed=0    unreachable=0    failed=0
www                        : ok=1    changed=0    unreachable=0    failed=0
ftp                        : ok=1    changed=0    unreachable=0    failed=0

ansible-playbook will run only the tasks with the specified tag and skip the rest of the tasks in the playbook. Use the ‘--list-tags’ flag to view all the tags defined in a playbook.
