Archive for the ‘How tos’ Category

The contents of most text files change during the life of the file, and it is common to find yourself trying to search and replace certain text across multiple files. In Linux, this is a fairly easy task. Let us go through some of the commands you will need to perform this task and then finally construct a one-liner to do the job.

  • grep is your best friend when it comes to finding a string in a file. In this case we are looking for the string “REPLACEME” in the current directory and across multiple files –
$ grep -r REPLACEME *
host.conf:# The "REPLACEME" line is only used by old versions of the C library.
host.conf:order hosts,REPLACEME,bind
hostname:REPLACEME
hosts.deny:ALL: REPLACEME

If we are interested only in the files which contain this particular text –

$ grep -lr REPLACEME *
host.conf
hostname
hosts.deny
  • sed is the tool of choice for in-place editing of files –
$ cat data 
This text will be replaced - REPLACEME
$ sed -i 's/REPLACEME/NEWTEXT/g' data 
$ cat data 
This text will be replaced - NEWTEXT
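
If you want a safety net while experimenting, GNU sed accepts a suffix with -i and keeps a backup of the original file under that suffix – here the unmodified file would be saved as data.bak –

$ sed -i.bak 's/REPLACEME/NEWTEXT/g' data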

From here, there are multiple ways to skin the cat – we can loop through the files and do the replacement, or we can let sed do the replacement with a wildcard.

For-loop-style update –

$ for f in $(grep -lr REPLACEME *); do echo "*** File: ${f} ***" ; sed -i 's/REPLACEME/NEWTEXT/g' $f; done
*** File: host.conf ***
*** File: hostname ***
*** File: hosts.deny ***

$ grep -lr REPLACEME *

$ grep -lr NEWTEXT *
data
host.conf
hostname
hosts.deny
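
Note that the $(…) for loop above word-splits on whitespace, so it breaks on file names containing spaces. A more robust variant, assuming GNU grep and xargs, passes NUL-delimited file names instead –

$ grep -lrZ REPLACEME . | xargs -0 sed -i 's/REPLACEME/NEWTEXT/g'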

Actually, the above for loop is redundant; sed can make changes across multiple files on its own –

$ sed -i 's/REPLACEME/NEWTEXT/g' *
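
Keep in mind that the wildcard form is not recursive and sed will complain about any directories it is handed. For a recursive replace, a find-based variant (again assuming GNU sed) can be used –

$ find . -type f -exec sed -i 's/REPLACEME/NEWTEXT/g' {} +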

How to install the Google Cloud Platform (GCP) SDK – the gcloud CLI tool


The instructions below were tested on Ubuntu Linux.

gcloud is the command-line interface (CLI) tool for interacting with GCP services. Per Google’s product overview page for gcloud – “The Cloud SDK is a set of tools for Cloud Platform. It contains gcloud, gsutil, and bq, which you can use to access Google Compute Engine, Google Cloud Storage, Google BigQuery, and other products and services from the command-line. You can run these tools interactively or in your automated scripts”.

Let us download, install and initialize this tool in an interactive manner, accepting the default settings at all prompts – the exec -l $SHELL at the end restarts the shell so that the updated PATH takes effect –

$ curl https://sdk.cloud.google.com | bash && exec -l $SHELL
$ gcloud init
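
If you prefer a fully non-interactive installation, the installer script documents a --disable-prompts flag (see the installation link in the references) that accepts the defaults without asking –

$ curl https://sdk.cloud.google.com > install.sh
$ bash install.sh --disable-prompts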
If the above installation steps go well, check the installed version –
$ gcloud version
Google Cloud SDK 224.0.0
bq 2.0.36
core 2018.11.02
gsutil 4.34
 
A simple way to validate that the CLI is working as expected is to list all the GCP regions –
$ gcloud compute regions list
NAME                     CPUS  DISKS_GB  ADDRESSES  RESERVED_ADDRESSES  STATUS  TURNDOWN_DATE
asia-east1               0/8   0/2048    0/8        0/1                 UP
asia-east2               0/8   0/2048    0/8        0/1                 UP
asia-northeast1          0/8   0/2048    0/8        0/1                 UP
asia-south1              0/8   0/2048    0/8        0/1                 UP
asia-southeast1          0/8   0/2048    0/8        0/1                 UP
australia-southeast1     0/8   0/2048    0/8        0/1                 UP
europe-north1            0/8   0/2048    0/8        0/1                 UP
europe-west1             0/8   0/2048    0/8        0/1                 UP
europe-west2             0/8   0/2048    0/8        0/1                 UP
europe-west3             0/8   0/2048    0/8        0/1                 UP
europe-west4             0/8   0/2048    0/8        0/1                 UP
northamerica-northeast1  0/8   0/2048    0/8        0/1                 UP
southamerica-east1       0/8   0/2048    0/8        0/1                 UP
us-central1              0/8   0/2048    0/8        0/1                 UP
us-east1                 2/8   31/2048   2/8        0/1                 UP
us-east4                 0/8   0/2048    0/8        0/1                 UP
us-west1                 0/8   0/2048    0/8        0/1                 UP
us-west2                 0/8   0/2048    0/8        0/1                 UP
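
To see which components are already installed and which ones are available –

$ gcloud components list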

Only the core components of the gcloud SDK are installed during the initial installation; any additional component needed to interact with GCP has to be installed separately. For instance, to interact with Google Kubernetes Engine (GKE) you have to install the kubectl component –


$ gcloud components install kubectl

Many GCP features are available only in beta; to use those, you have to install the beta component –


$ gcloud components install beta

Keep the installed components up to date with –

$ gcloud components update


Tab completion and running commands against a beta feature –


$ gcloud beta container  [tab][tab]
binauthz  clusters  get-server-config  images  node-pools  operations  subnets
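
If tab completion is not active in your shell, and assuming the default location used by the curl installer, sourcing the bundled completion script should enable it –

$ source ~/google-cloud-sdk/completion.bash.inc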

$ gcloud beta container get-server-config
Fetching server config for us-east1-c
defaultClusterVersion: 1.9.7-gke.7
defaultImageType: COS
validImageTypes:
- COS
- UBUNTU
- COS_CONTAINERD
validMasterVersions:
- 1.11.2-gke.15
- 1.10.9-gke.3
- 1.10.7-gke.9
- 1.10.6-gke.9
- 1.9.7-gke.7
validNodeVersions:
- 1.11.2-gke.15
- 1.11.2-gke.9
- 1.10.9-gke.3
- 1.10.9-gke.0
- 1.10.7-gke.9
- 1.10.7-gke.6
- 1.10.7-gke.2
- 1.10.7-gke.1
- 1.10.6-gke.9
- 1.10.6-gke.6
- 1.10.6-gke.4
- 1.10.6-gke.3
- 1.10.6-gke.2
- 1.10.6-gke.1
- 1.10.5-gke.4
- 1.10.5-gke.3
- 1.10.5-gke.2
- 1.10.5-gke.0
- 1.10.4-gke.3
- 1.10.4-gke.2
- 1.10.4-gke.0
- 1.10.2-gke.4
- 1.10.2-gke.3
- 1.10.2-gke.1
- 1.9.7-gke.7
- 1.9.7-gke.6
- 1.9.7-gke.5
- 1.9.7-gke.4
- 1.9.7-gke.3
- 1.9.7-gke.1
- 1.9.7-gke.0
- 1.9.6-gke.2
- 1.9.6-gke.1
- 1.9.3-gke.0
- 1.8.12-gke.3
- 1.8.12-gke.2
- 1.8.12-gke.1
- 1.8.12-gke.0
- 1.8.10-gke.2
- 1.8.10-gke.0
- 1.8.9-gke.1
- 1.8.8-gke.0
- 1.7.15-gke.0
- 1.7.12-gke.2
- 1.6.13-gke.1

References –

Installation – https://cloud.google.com/sdk/docs/downloads-interactive#linux

SDK Components – https://cloud.google.com/sdk/docs/components

Tips and Tricks – https://cloudplatform.googleblog.com/2014/03/tips-and-tricks-command-line-access-to.html

Ansible: How to run playbooks as shell scripts


Ansible is a powerful tool for automation; its syntax checking, verbose mode and dry-run mode make it a reliable and safe tool. It is particularly popular in IT infrastructure automation, such as application deployment or full-fledged infrastructure-plus-application deployment. As an integral part of the DevOps tool-set, it falls into the same category as Chef, Puppet, Salt and CFEngine for the critical role it plays in IT infrastructure, application deployment, configuration management and continuous delivery.

In this short blog, I am writing about a little known or less popular usage of Ansible – executing it like a shell script. In a Unix-like operating system, any text file whose content starts with #!, a.k.a. a shebang, is executed by passing the text file as an argument to the interpreter named after the shebang. For instance, a text file /tmp/myscript.sh whose content starts with the characters #!/bin/bash is run by the program loader as /bin/bash /tmp/myscript.sh. Following the same logic, we can execute any Ansible playbook by simply starting the content of the playbook file with the path to the ansible-playbook executable.
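
As a quick illustration of the shebang mechanics, with a trivial, hypothetical script –

$ cat /tmp/myscript.sh
#!/bin/bash
echo "hello"
$ chmod +x /tmp/myscript.sh
$ /tmp/myscript.sh
hello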

Thus for me to execute my playbooks just like a script, the first thing I need to know is the path to the ansible-playbook executable –

$ which ansible-playbook
/usr/local/bin/ansible-playbook

And have a playbook – in this case, I will use two playbooks as examples: one which adds a user and a second which deletes the same user.
Notice that I name the playbooks just like shell scripts and make them executable –

$ cat add-user.sh 
#!/usr/local/bin/ansible-playbook
---
- hosts: localhost
  tasks:
  - name: Add user
    user: name={{ username }} comment={{ comment }} state=present shell={{ shell }}
    become: yes
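
To make the playbook executable –

$ chmod +x add-user.sh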

When I execute this script, I will pass the parameters needed to add a user as Ansible extra variables. Now let us run the script in dry-run mode first –

$ id john
id: ‘john’: no such user

$ ./add-user.sh -e "username=john comment='John Doe' shell=/bin/bash" -v --check
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [localhost]

TASK [Add user] *************************************************************************************************************************************
changed: [localhost] => {"changed": true}

PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

Everything looks good, so let us execute it –

$ ./add-user.sh -e "username=john comment='John Doe' shell=/bin/bash" -v
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [localhost]

TASK [Add user] *************************************************************************************************************************************
changed: [localhost] => {"changed": true, "comment": "John Doe", "create_home": true, "group": 1002, "home": "/home/john", "name": "john", "shell": "/bin/bash", "state": "present", "stderr": "useradd: warning: the home directory already exists.\nNot copying any file from skel directory into it.\n", "stderr_lines": ["useradd: warning: the home directory already exists.", "Not copying any file from skel directory into it."], "system": false, "uid": 1002}

PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

$ id john
uid=1002(john) gid=1002(john) groups=1002(john)

Deleting the user is similar – we just write an equivalent playbook, and this time we pass only the username as an extra variable –

$ cat del-user.sh
#!/usr/local/bin/ansible-playbook
---
- hosts: localhost
  tasks:
  - name: Delete user
    user: name={{ username }} state=absent
    become: yes

$ ./del-user.sh -e username=john -v --check
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [localhost]

TASK [Delete user] **********************************************************************************************************************************
changed: [localhost] => {"changed": true}

PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

$ ./del-user.sh -e username=john -v
Using /etc/ansible/ansible.cfg as config file

PLAY [localhost] ************************************************************************************************************************************

TASK [Gathering Facts] ******************************************************************************************************************************
ok: [localhost]

TASK [Delete user] **********************************************************************************************************************************
changed: [localhost] => {"changed": true, "force": false, "name": "john", "remove": false, "state": "absent"}

PLAY RECAP ******************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0

$ id john
id: ‘john’: no such user

You can find more on Ansible in the documentation section of the official site.

What type of storage engine does a MySQL table use?


MySQL supports several storage engines, such as InnoDB, MyISAM, BLACKHOLE and CSV. Depending on your use case, you might configure a MySQL table to use a certain storage engine. To see the list of storage engines MySQL supports, simply run “SHOW ENGINES\G” at a mysql prompt.

To find out the particular storage engine used by a table, run the ‘show table status’ command for the named table as below. The first example is the mysql user table, which uses InnoDB –


mysql> use mysql;

mysql> show table status like 'user' \G
*************************** 1. row ***************************
Name: user
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 1
Avg_row_length: 16384
Data_length: 16384
Max_data_length: 0
Index_length: 49152
Data_free: 10485760
Auto_increment: 2
Create_time: 2013-08-26 22:52:09
Update_time: NULL
Check_time: NULL
Collation: binary
Checksum: NULL
Create_options:
Comment:
1 row in set (0.00 sec)
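
Alternatively, the same information is available by querying the information_schema database –

mysql> SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES
    -> WHERE TABLE_SCHEMA = 'mysql' AND TABLE_NAME = 'user';
+------------+--------+
| TABLE_NAME | ENGINE |
+------------+--------+
| user       | InnoDB |
+------------+--------+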

A sample table which uses the MyISAM storage engine –


mysql> show table status like 'servers' \G
*************************** 1. row ***************************
Name: servers
Engine: MyISAM
Version: 10
Row_format: Fixed
Rows: 0
Avg_row_length: 0
Data_length: 0
Max_data_length: 433752939111120895
Index_length: 1024
Data_free: 0
Auto_increment: NULL
Create_time: 2013-08-24 01:42:15
Update_time: 2013-08-24 01:42:15
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment: MySQL Foreign Servers table
1 row in set (0.00 sec)

The table for logging slow queries uses the CSV storage engine –


mysql> show table status like 'slow_log' \G
*************************** 1. row ***************************
Name: slow_log
Engine: CSV
Version: 10
Row_format: Dynamic
Rows: 2
Avg_row_length: 0
Data_length: 0
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: NULL
Create_time: NULL
Update_time: NULL
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment: Slow log
1 row in set (0.00 sec)

 

Features of some of the storage engines –

  • InnoDB: a transaction-safe (ACID compliant) storage engine for MySQL that has commit, rollback, and crash-recovery capabilities to protect user data.
  • MyISAM: These tables have a small footprint. Table-level locking limits the performance in read/write workloads, so it is often used in read-only or read-mostly workloads in Web and data warehousing configurations.
  • Memory: Stores all data in RAM, for fast access in environments that require quick lookups of non-critical data.
  • CSV: Its tables are really text files with comma-separated values. CSV tables let you import or dump data in CSV format, to exchange data with scripts and applications that read and write that same format.
  • Archive: These compact, unindexed tables are intended for storing and retrieving large amounts of seldom-referenced historical, archived, or security audit information.
  • Blackhole: The Blackhole storage engine accepts but does not store data, similar to the Unix /dev/null device. Queries always return an empty set.
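
To create a table with a specific engine, or to convert an existing table from one engine to another, use the ENGINE table option (t1 here is a hypothetical table) –

mysql> CREATE TABLE t1 (id INT) ENGINE = MyISAM;
mysql> ALTER TABLE t1 ENGINE = InnoDB;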

 

References –

https://dev.mysql.com/doc/refman/8.0/en/storage-engines.html

nf_conntrack: table full, dropping packet


I actually saw this error on a Docker host. Docker uses iptables, and all of Docker’s iptables rules are added to the DOCKER chain. In this case though, it wasn’t the Docker iptables rules that were the problem – the netfilter module had simply reached its connection-tracking limits. You might see this error in /var/log/messages or /var/log/kern.

The full error message looked like this –

May 29 09:10:37 docker kernel: [74350.150400] nf_conntrack: table full, dropping packet
May 29 09:10:37 docker kernel: [74350.155361] nf_conntrack: table full, dropping packet
May 29 09:10:37 docker kernel: [74350.160282] nf_conntrack: table full, dropping packet
May 29 09:10:37 docker kernel: [74350.181547] nf_conntrack: table full, dropping packet
May 29 09:10:37 docker kernel: [74350.184807] nf_conntrack: table full, dropping packet
May 29 09:10:37 docker kernel: [74350.184913] nf_conntrack: table full, dropping packet

Resolution – increase the maximum number of connections being tracked and/or reduce the tracking timeouts. Look for these runtime kernel parameters –

[root@kauai /]# sysctl net.ipv4.netfilter.ip_conntrack_tcp_timeout_established
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 27000
[root@kauai /]# sysctl net.netfilter.nf_conntrack_generic_timeout
net.netfilter.nf_conntrack_generic_timeout = 60
[root@kauai /]# sysctl net.ipv4.netfilter.ip_conntrack_max
net.ipv4.netfilter.ip_conntrack_max = 64268

These are the settings which resolved my issue – I simply doubled the values –

sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000
sysctl -w net.netfilter.nf_conntrack_generic_timeout=120
sysctl -w net.ipv4.netfilter.ip_conntrack_max=128536

To make this permanent, add the lines above to the /etc/sysctl.conf file.
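
In /etc/sysctl.conf the same settings look like this; reload with ‘sysctl -p’ afterwards –

net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000
net.netfilter.nf_conntrack_generic_timeout = 120
net.ipv4.netfilter.ip_conntrack_max = 128536

You can also watch how close the table is getting to its limit by comparing the current entry count against the maximum –

$ sysctl net.netfilter.nf_conntrack_count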

 

References –

https://security.stackexchange.com/questions/43205/nf-conntrack-table-full-dropping-packet

https://docs.docker.com/network/iptables/

IP subnet calculator

Linux – IP subnet calculation with ipcalc


ipcalc is a program to perform simple manipulation of IP addresses and is useful for calculating various network masks given an IP address. Some of the uses of ipcalc are –

  • Validate IP address
  • Display calculated broadcast address
  • Show hostname determined via DNS
  • Display default mask for IP
  • Display network address or prefix

Before using ipcalc, make sure you have the binary installed in your operating system; if not, install it by following the instructions below –

1. Installation instructions for various Operating Systems

a. Fedora/Red Hat/CentOS

yum install initscripts    # the ipcalc binary is provided by the initscripts package

b. Debian/Ubuntu

apt-get install ipcalc

c. MacOS

brew install ipcalc


d. Windows

http://jodies.de/ipcalc-faq/win32.html

 

2. How to use ipcalc

Note: the examples below were tested on CentOS 6.8; they might not work on other distros or operating systems. Check the ipcalc documentation for your OS.

a. Check if an IP address is valid for IPv4 or IPv6 (it defaults to IPv4)

[daniel@kauai ~]$ ipcalc -c 1.2.3.4
[daniel@kauai ~]$ ipcalc -c 1.2.3.4/32
[daniel@kauai ~]$ ipcalc -c 1.2.3.444
ipcalc: bad IPv4 address: 1.2.3.444

It will exit with a non-zero status code if the IP address is not valid, and with zero if it is valid. For scripting, use the ‘-s’ (silent) option so that it doesn’t display error messages.


[daniel@kauai ~]$ ipcalc -s -c 1.2.3.4
[daniel@kauai ~]$ echo $?
0

[daniel@kauai ~]$ ipcalc -s -c 1.2.3.444
[daniel@kauai ~]$ echo $?
1
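
This makes ipcalc handy in shell scripts – a minimal sketch –

#!/bin/bash
# Validate an IPv4 address before using it, suppressing ipcalc's error output
ip="1.2.3.444"
if ipcalc -s -c "$ip"; then
    echo "$ip is valid"
else
    echo "$ip is not valid"
fi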

b. Show broadcast address


[daniel@kauai ~]$ ipcalc -b 10.10.0.1/24
BROADCAST=10.10.0.255
[daniel@kauai ~]$ ipcalc -b 10.10.0.1/22
BROADCAST=10.10.3.255
[daniel@kauai ~]$ ipcalc -b 10.10.0.1/8
BROADCAST=10.255.255.255

c. Reverse DNS

[daniel@kauai ~]$ ipcalc -h 8.8.8.8
HOSTNAME=google-public-dns-a.google.com

$ ipcalc -h 162.247.79.246
HOSTNAME=securenet-server.net
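
Under the hood this is just a reverse (PTR) DNS lookup, so dig should return a similar answer –

$ dig +short -x 8.8.8.8
google-public-dns-a.google.com.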

d. Display default netmask for IP (class A, B, or C)


[daniel@kauai ~]$ ipcalc -m 10.10.10.1
NETMASK=255.0.0.0
[daniel@kauai ~]$ ipcalc -m 192.168.10.1
NETMASK=255.255.255.0
[daniel@kauai ~]$ ipcalc -m 172.16.0.1
NETMASK=255.255.0.0

 

e. Show network address


[daniel@kauai ~]$ ipcalc -n 10.10.244.8/19
NETWORK=10.10.224.0
[daniel@kauai ~]$ ipcalc -n 10.10.244.8/20
NETWORK=10.10.240.0
[daniel@kauai ~]$ ipcalc -n 10.10.244.8/30
NETWORK=10.10.244.8

 

Split a subnet – this feature might not be supported in all ipcalc versions; check the documentation for your OS.

This is the best feature of ipcalc in my opinion – you don’t have to do the subnet and bit calculations by hand. This feature was available in my Ubuntu 16 VM but not on Red Hat.


$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.3 LTS
Release: 16.04
Codename: xenial

$ ipcalc -v
0.41

For instance, to split a /20 subnet into two subnets of 1024 hosts each –


$ ipcalc 10.10.0.0/20 -s 1024 1024
Address: 10.10.0.0 00001010.00001010.0000 0000.00000000
Netmask: 255.255.240.0 = 20 11111111.11111111.1111 0000.00000000
Wildcard: 0.0.15.255 00000000.00000000.0000 1111.11111111
Network: 10.10.0.0/20 00001010.00001010.0000 0000.00000000
HostMin: 10.10.0.1 00001010.00001010.0000 0000.00000001
HostMax: 10.10.15.254 00001010.00001010.0000 1111.11111110
Broadcast: 10.10.15.255 00001010.00001010.0000 1111.11111111
Hosts/Net: 4094 Class A, Private Internet

1. Requested size: 1024 hosts
Netmask: 255.255.248.0 = 21 11111111.11111111.11111 000.00000000
Network: 10.10.0.0/21 00001010.00001010.00000 000.00000000
HostMin: 10.10.0.1 00001010.00001010.00000 000.00000001
HostMax: 10.10.7.254 00001010.00001010.00000 111.11111110
Broadcast: 10.10.7.255 00001010.00001010.00000 111.11111111
Hosts/Net: 2046 Class A, Private Internet

2. Requested size: 1024 hosts
Netmask: 255.255.248.0 = 21 11111111.11111111.11111 000.00000000
Network: 10.10.8.0/21 00001010.00001010.00001 000.00000000
HostMin: 10.10.8.1 00001010.00001010.00001 000.00000001
HostMax: 10.10.15.254 00001010.00001010.00001 111.11111110
Broadcast: 10.10.15.255 00001010.00001010.00001 111.11111111
Hosts/Net: 2046 Class A, Private Internet

Needed size: 4096 addresses.
Used network: 10.10.0.0/20
Unused:

 

Let us split it into 3 subnets of sizes 512, 512 and 1024 –

$ ipcalc 10.10.0.0/20 -s 512 512 1024

 

Useful links – 


https://linux.die.net/man/1/ipcalc

http://jodies.de/ipcalc