Archive for the ‘How tos’ Category

How to be your own Certificate Authority (CA) with self-signed certificates

This is a hands-on tutorial on how you can set up your own Certificate Authority (CA) for internal network use. Once the CA certs are set up, you will generate certificate signing requests (CSRs) for your clients and sign them with your CA to create SSL certs for internal use. If you import your CA certificate into your browser, you will be able to visit all internal sites over https without any browser warning, as long as the certs your internal services use are signed by your internal CA.

Demo – Own CA for the home.net internal domain

1. Prepare the certificate environment and the default parameters to use when creating CSRs –

# mkdir /etc/ssl/CA
# mkdir /etc/ssl/newcerts
# sh -c "echo '100000' > /etc/ssl/CA/serial"
# touch /etc/ssl/CA/index.txt

# cat /etc/ssl/openssl.cnf
 dir		= /etc/ssl		# Where everything is kept
 database	= $dir/CA/index.txt	# database index file.
 certificate	= $dir/certs/home_cacert.pem 	# The CA certificate
 serial		= $dir/CA/serial 		# The current serial number
 private_key	= $dir/private/home_cakey.pem  # The private key
 default_days	= 1825			# how long to certify for
 default_bits		= 2048
 countryName_default		= US
 stateOrProvinceName_default	= California
 0.organizationName_default	= Home Ltd

2. Create a self-signed root certificate and install the root certificate and key

# openssl req -new -x509 -extensions v3_ca -keyout home_cakey.pem -out home_cacert.pem -days 3650
# mv home_cakey.pem /etc/ssl/private/
# mv home_cacert.pem /etc/ssl/certs/
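
Optionally, you can sanity-check the root certificate you just created by printing its subject and validity dates –

# openssl x509 -in /etc/ssl/certs/home_cacert.pem -noout -subject -dates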

3. Generate a private key for the domain you want to issue a certificate for, and strip its passphrase –

# openssl genrsa -des3 -out server.key 2048
# openssl rsa -in server.key -out server.key.insecure
# mv server.key server.key.secure
# mv server.key.insecure server.key

4. Create the CSR and generate a CA-signed certificate

# openssl req -new -key server.key -out server.csr
# openssl ca -in server.csr -config /etc/ssl/openssl.cnf
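
The signed certificate ends up under /etc/ssl/newcerts, named after its serial number. As an optional sanity check, you can verify an issued certificate against the CA certificate, for example using the first cert from the listing below –

# openssl verify -CAfile /etc/ssl/certs/home_cacert.pem /etc/ssl/newcerts/100000.pem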

Directory structure after signing and issuing certificates –

# ls -l /etc/ssl/CA/
total 24
-rw-r--r-- 1 root root 444 Aug 29 18:20 index.txt
-rw-r--r-- 1 root root  21 Aug 29 18:20 index.txt.attr
-rw-r--r-- 1 root root  21 Aug 29 18:16 index.txt.attr.old
-rw-r--r-- 1 root root 328 Aug 29 18:18 index.txt.old
-rw-r--r-- 1 root root   7 Aug 29 18:20 serial
-rw-r--r-- 1 root root   7 Aug 29 18:19 serial.old

# ls -l /etc/ssl/newcerts/
total 32
-rw-r--r-- 1 root root 4612 Aug 29 16:24 100000.pem
-rw-r--r-- 1 root root 4613 Aug 29 16:51 100001.pem
-rw-r--r-- 1 root root 4574 Aug 29 17:50 100002.pem
-rw-r--r-- 1 root root 4619 Aug 29 18:20 100003.pem

# cat /etc/ssl/CA/index.txt
V	190828202443Z		100000	unknown	/C=US/ST=California/O=Home Ltd/OU=Home/CN=www.home.net/emailAddress=daniel@home.net
V	190828205127Z		100001	unknown	/C=US/ST=California/O=Home Ltd/OU=Home/CN=wiki.home.net/emailAddress=daniel@home.net
V	190828215006Z		100002	unknown	/C=US/ST=California/O=Home Ltd/CN=home.net/emailAddress=daniel@home.net
V	190828222038Z		100003	unknown	/C=US/ST=California/O=Home Ltd/OU=Home/CN=homevm.home.net/emailAddress=daniel@home.net

# cat /etc/ssl/CA/serial
10411A

Now that you have your CA certificate, in this example /etc/ssl/certs/home_cacert.pem, you can import it into your clients, such as a web browser, an LDAP client, etc.
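
On Debian/Ubuntu hosts, you can also add the CA certificate to the system-wide trust store, assuming the ca-certificates package is installed (update-ca-certificates only picks up files with a .crt extension) –

# cp /etc/ssl/certs/home_cacert.pem /usr/local/share/ca-certificates/home_cacert.crt
# update-ca-certificates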

References –

https://help.ubuntu.com/12.04/serverguide/certificates-and-security.html

Server refused to allocate pty

Server refused to allocate pty: the number of pseudoterminals (ptys) in use has reached the maximum allowed limit.

You are unlikely to encounter this error in most cases, as the default maximum number of pseudoterminals (ptys) in a Linux environment is large enough for typical use. The error can show up, though, if an admin has lowered the pty limit or if there is an unusually high number of connections to the system, whether over ssh or from GUI terminals. Under those circumstances, you will see the error below when trying to ssh in –

$ ssh daniel@192.168.10.103
daniel@192.168.10.103's password:
Server refused to allocate pty

GUI terminal error –

There was an error creating the child process for this terminal
getpt failed: No such file or directory

Per the man page –

“The Linux kernel imposes a limit on the number of available UNIX 98
pseudoterminals. In kernels up to and including 2.6.3, this limit is
configured at kernel compilation time (CONFIG_UNIX98_PTYS), and the
permitted number of pseudoterminals can be up to 2048, with a default
setting of 256. Since kernel 2.6.4, the limit is dynamically
adjustable via /proc/sys/kernel/pty/max, and a corresponding file,
/proc/sys/kernel/pty/nr, indicates how many pseudoterminals are
currently in use.”

To resolve this, get a count of the ptys currently allocated using either of the commands below –


[root@kauai tmp]# sysctl kernel.pty.nr
kernel.pty.nr = 10

[root@kauai tmp]# cat /proc/sys/kernel/pty/nr 
10

You can list the allocated pts names –

# ps aux |grep -o -P '\s+pts/\d+\s+' |sort -u
 pts/0    
 pts/1    
 pts/2    
 pts/3    
 pts/4    
 pts/5    
 pts/6    
 pts/8    
 pts/9    

If the currently allocated count is at or near the limit, which you can read from

/proc/sys/kernel/pty/max

, go ahead and increase the maximum as follows, say to 4096 in this example –

sysctl -w kernel.pty.max=4096
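
Note that a change made with sysctl -w does not survive a reboot; to make it persistent, also append it to /etc/sysctl.conf and reload –

echo 'kernel.pty.max = 4096' >> /etc/sysctl.conf
sysctl -p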

References –

http://man7.org/linux/man-pages/man7/pty.7.html

AIDE installation and setup

AIDE (Advanced Intrusion Detection Environment) setup

AIDE is a host-based file and directory integrity checking tool, similar to Tripwire. It creates a snapshot of file details during initialization and stores them in a database. The files and directories that AIDE monitors are defined by user-configurable rules, where the admin specifies what to keep an eye on. The snapshot is basically a message digest of the file/directory attributes returned by the stat command. Once AIDE is initialized, it can detect any future changes and alert the admin. AIDE can be configured to run on a scheduled basis, using cron jobs for instance.
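
For illustration, the monitoring rules live in /etc/aide.conf. A minimal excerpt might look like the lines below – the NORMAL and PERMS rule groups are defined in the stock Red Hat/CentOS aide.conf, and the paths are just examples: watch /etc for permission/ownership changes, checksum the binaries under /bin, and ignore the fast-changing log files –

# cat /etc/aide.conf
 /etc		PERMS
 /bin		NORMAL
 !/var/log/.*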

Installation

yum list aide
yum install aide

Initialization

Create AIDE DB – stores snapshot of file or directory stats by scanning the monitored resources.

$ /usr/sbin/aide --init 
$ mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

To minimize false positives – set PRELINKING=no in /etc/sysconfig/prelink and run

 /usr/sbin/prelink -ua 

to undo prelinking and restore the binaries to their original, non-prelinked state.

Scheduled integrity checks
Add a cron job to check file integrity, say every morning at 8 AM –

echo '0 8 * * * root /usr/sbin/aide --check' >> /etc/crontab
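
If a local mail command such as mailx is installed (an assumption, it is not part of AIDE), the report can be mailed to the admin instead of only ending up in the cron logs –

echo '0 8 * * * root /usr/sbin/aide --check | mail -s "AIDE integrity report" root' >> /etc/crontab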

Updating the DB after making changes, or after verifying changes reported during a check – the update writes a new database which then has to be moved into place, as in the init step:

$ aide -c /etc/aide.conf --update
$ mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz

References –

AIDE (Advanced Intrusion Detection Environment)

Linux – run a scheduled command once

When we think of running scheduled tasks in Linux, the first tool that comes to mind for most Linux users and admins is cron. Cron is very popular and useful when you want to run a task regularly – say at a given interval, hourly, weekly, or even every time the system reboots. The scheduled tasks are faithfully executed by the crond daemon based on the schedule we set; if crond misses a task because the machine was not running 24/7, then anacron takes care of it. My topic today, though, is at, which executes a scheduled task only once at a later time.

1. Adding future commands interactively

Let us schedule a specific command to run 10 minutes from now; press CTRL+D once you have entered the command –

daniel@lindell:~$ at now +10 minutes
at> ps aux &> /tmp/at.log
[[PRESS CTRL+D HERE]]
job 4 at Wed Mar  1 21:24:00 2017

Now the above command ‘ps aux’ is scheduled to run 10 minutes from now, only once. We can check the pending jobs using the atq command –

daniel@lindell:~$ atq
4	Wed Mar  1 21:24:00 2017 a daniel

2. Remove scheduled jobs from the queue using atrm or at -r

daniel@lindell:~$ at now +1 minutes
at> ps aux > /tmp/atps.logs
at> <EOT>
job 8 at Wed Mar  1 21:25:00 2017
daniel@lindell:~$ atq
8	Wed Mar  1 21:25:00 2017 a daniel
daniel@lindell:~$ atrm 8
daniel@lindell:~$ atq
daniel@lindell:~$ 

3. Run jobs from a script or file.

In some cases the job you want to run is a script –

daniel@lindell:~$ at -f /tmp/myscript.sh 8:00 AM tomorrow
daniel@lindell:~$ atq
11	Thu Mar  2 08:00:00 2017 a daniel

4. Embed shell commands inline –

at now +10 minutes <<-EOF
if [ -d ~/pythonscripts ]; then
 find ~/pythonscripts/ -type f -iname '*.pyc' -delete
fi
EOF

5. View the contents of a scheduled task using ‘at -c JOBNUMBER’ –

daniel@lindell:~$ at now +10 minutes <<-EOF
> if [ -d ~/pythonscripts ]; then
>  find ~/pythonscripts/ -type f -iname '*.pyc' -delete
> fi
> EOF
job 13 at Wed Mar  1 21:51:00 2017

daniel@lindell:~$ atq
11	Thu Mar  2 08:00:00 2017 a daniel
12	Wed Mar  1 21:45:00 2017 a daniel
13	Wed Mar  1 21:51:00 2017 a daniel


daniel@lindell:~$ at -c 13
 [[ TRUNCATED ENVIRONMENTAL STUFF ]]
cd /home/daniel || {
	 echo 'Execution directory inaccessible' >&2
	 exit 1
}
if [ -d ~/pythonscripts ]; then
 find ~/pythonscripts/ -type f -iname '*.pyc' -delete
fi

In this small tutorial about the at utility, we saw some of its use cases – especially where we had to execute a scheduled task only once. The time specification it uses is human-friendly; for example, it supports time specs such as midnight, noon, teatime or today. Feel free to read the man page for details.
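
For instance, those friendlier time specs can be combined with a command piped into at; the backup command here is only a placeholder –

echo 'tar czf /tmp/home-backup.tar.gz /home/daniel' | at teatime
echo 'sync' | at midnight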

References –

https://linux.die.net/man/1/at

Ansible non-standard ssh port

How to run playbooks against a host running ssh on a port other than port 22.

Ansible is a simple automation and configuration management tool which lets you execute commands or scripts on remote hosts, either ad hoc or through playbooks. It is push based and uses ssh to run the playbooks against remote hosts. The steps below show how to run ansible playbooks against a host running ssh on port 2222.

One of the hosts managed by ansible is running ssh on a non-default port. It is a Docker container: sshd inside the container listens on port 22, but the host redirects port 2222 on the host to port 22 in the container.
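
For reference, such a port mapping is typically set up when the container is started; an illustrative command (the container and image names here are placeholders) might look like –

docker run -d -p 2222:22 --name sshd-test my-sshd-image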

1. Pass ansible_ssh_port as an extra variable on the command line –


 ansible-playbook tasks/app-deployment.yml --check -e ansible_ssh_port=2222

2. Specify the port in the inventory or hosts file –

In the hosts file, use the ‘server:port’ format for the host entry –

[docker-hosts]
docker1:2222
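
Alternatively, the port can be set as a host variable in the inventory instead of appending it to the hostname; newer Ansible releases also accept ansible_port for the same purpose –

[docker-hosts]
docker1 ansible_ssh_port=2222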

Let us run the playbook now –

root@linubuvma:/tmp/ansible# cat tasks/app-deployment.yml
- hosts: docker-hosts
  vars:
    app_version: 1.1.0
  tasks:
  - name: install git
    apt: name=git state=latest
  - name: Checkout the application from git
    git: repo=https://github.com/docker/docker-py.git dest=/srv/www/myapp version={{ app_version }}
    register: app_checkout_result


root@linubuvma:/tmp/ansible# ansible-playbook tasks/app-deployment.yml

PLAY [docker-hosts] ************************************************************

TASK: [install git] ***********************************************************
changed: [docker1]

TASK: [Checkout the application from git] *************************************
changed: [docker1]

PLAY RECAP ********************************************************************
docker1                    : ok=2    changed=2    unreachable=0    failed=0

References –

http://docs.ansible.com/
http://docs.ansible.com/ansible/intro_inventory.html

How to add a disk to a running VM

This tutorial will show you how to get a running Linux VM to detect a new virtual disk added in VMware, without rebooting the VM.

1. Start by adding the SCSI virtual disk

In my case I am using VMware Workstation and following the VMware documentation (see the References at the end of this post) to add the disk. Use the steps relevant for your environment; it shouldn’t be that difficult.

2. Status of the disk on the VM

By running fdisk on my Linux VM, I can see that it has two disks attached – /dev/sda and /dev/sdb – both with the same size of 107.4 GB. The latter holds an LVM partition, which I can resize on the fly.

[root@lincenvma ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061f7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39         549     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             549       13055   100453376   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ea04d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3265    26226081   8e  Linux LVM

Disk /dev/mapper/vg_target00-lv_target00: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x153067d3

                               Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vg_target00-lv_target00p1               2        2049     2097152   83  Linux


[root@lincenvma ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdb1 

[root@lincenvma ~]# ls /sys/class/scsi_host/
host0  host1  host2

Apparently, after adding the disk, the VM didn’t automatically detect it, which takes us to the next step of rescanning the SCSI bus.

3. Rescan the SCSI bus

This is where we run the trigger command to scan the SCSI bus for everything – channel number, SCSI target ID, and LUN values. Check the /var/log/dmesg log file or run the dmesg command in another window to see the action live –

[root@lincenvma ~]# echo "- - -" > /sys/class/scsi_host/host2/scan

[root@lincenvma ~]# dmesg
sd 2:0:2:0: [sdc] Write Protect is off
sd 2:0:2:0: [sdc] Mode Sense: 61 00 00 00
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
 sdc: unknown partition table
sd 2:0:2:0: [sdc] Cache data unavailable
sd 2:0:2:0: [sdc] Assuming drive cache: write through
sd 2:0:2:0: [sdc] Attached SCSI disk


[root@lincenvma ~]# tail -f /var/log/messages
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] 2097152 512-byte logical blocks: (1.07 GB/1.00 GiB)
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Write Protect is off
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sdc: unknown partition table
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Cache data unavailable
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Assuming drive cache: write through
Mar 15 01:02:11 lincenvma kernel: sd 2:0:2:0: [sdc] Attached SCSI disk

We can see that the system detected the new disk and identified it as /dev/sdc.

In RHEL/CentOS 5.4 or above, the script /usr/bin/rescan-scsi-bus.sh will have the same effect.
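
If you are not sure which SCSI host adapter the new disk hangs off of, you can also trigger the same scan on every host entry under /sys/class/scsi_host – a small sketch:

for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done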

4. Validate

At the bottom, fdisk shows the new disk as /dev/sdc with size 1073 MB –

[root@lincenvma ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda3  /dev/sdb  /dev/sdb1  /dev/sdc

[root@lincenvma ~]# fdisk -l

Disk /dev/sda: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00061f7f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          39      307200   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              39         549     4096000   82  Linux swap / Solaris
Partition 2 does not end on cylinder boundary.
/dev/sda3             549       13055   100453376   83  Linux

Disk /dev/sdb: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00ea04d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        3265    26226081   8e  Linux LVM

Disk /dev/mapper/vg_target00-lv_target00: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x153067d3

                               Device Boot      Start         End      Blocks   Id  System
/dev/mapper/vg_target00-lv_target00p1               2        2049     2097152   83  Linux


Disk /dev/sdc: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

From here, you can partition the disk and mount it directly, or create a physical volume (PV) out of it and add it to the existing volume group to grow the LVM logical volume.
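
For example, to go the LVM route with the volume group shown in the fdisk output above – a rough sketch, assuming the logical volume carries an ext3/ext4 filesystem that resize2fs can grow –

# initialize the new disk as a physical volume
pvcreate /dev/sdc
# add it to the existing volume group and grow the logical volume into the free space
vgextend vg_target00 /dev/sdc
lvextend -l +100%FREE /dev/vg_target00/lv_target00
# finally grow the filesystem to match the new LV size
resize2fs /dev/vg_target00/lv_target00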

References –

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/adding_storage-device-or-path.html

http://serverfault.com/questions/490397/what-does-in-echo-sys-class-scsi-host-host0-scan-mean

https://www.vmware.com/support/ws5/doc/ws_disk_add_virtual.html