Archive for the ‘Scripting’ Category

Linux – how to zip a folder

How to zip or compress a folder or directory in Linux

In Linux and similar operating systems, the zip utility is used to package and compress (archive) files.

Let us get straight to action; we have a folder to compress with the zip tool –


daniel@hidmo:/tmp/tutorial$ tree .
.
└── zip-tutorial
    ├── chapter-1
    │   └── content
    ├── chapter-2
    │   └── readme
    └── zip.txt

daniel@hidmo:/tmp/tutorial$ zip -r tutorial.zip zip-tutorial/
  adding: zip-tutorial/ (stored 0%)
  adding: zip-tutorial/zip.txt (deflated 55%)
  adding: zip-tutorial/chapter-2/ (stored 0%)
  adding: zip-tutorial/chapter-2/readme (deflated 55%)
  adding: zip-tutorial/chapter-1/ (stored 0%)
  adding: zip-tutorial/chapter-1/content (deflated 57%)

Basically, we use “zip -r DESTINATION-FILE.ZIP FOLDER-TO-COMPRESS” to compress a directory. Or, in short, “zip -r DESTINATION-FILE DIRECTORY-TO-COMPRESS” – we can skip the .zip extension and zip will add it for us.


daniel@hidmo:/tmp/tutorial$ zip -r tutorial zip-tutorial/
updating: zip-tutorial/ (stored 0%)
  adding: zip-tutorial/zip.txt (deflated 55%)
  adding: zip-tutorial/chapter-2/ (stored 0%)
  adding: zip-tutorial/chapter-2/readme (deflated 55%)
  adding: zip-tutorial/chapter-1/ (stored 0%)
  adding: zip-tutorial/chapter-1/content (deflated 57%)


To view the contents of the archive without extracting it –

daniel@hidmo:/tmp/tutorial$ unzip -l tutorial.zip 
Archive:  tutorial.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2019-10-07 21:45   zip-tutorial/
     1202  2019-10-07 21:45   zip-tutorial/zip.txt
        0  2019-10-07 21:45   zip-tutorial/chapter-2/
     1202  2019-10-07 21:45   zip-tutorial/chapter-2/readme
        0  2019-10-07 21:44   zip-tutorial/chapter-1/
      722  2019-10-07 21:44   zip-tutorial/chapter-1/content
---------                     -------
     3126                     6 files
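
As a quick aside (not part of the original session), the archive can later be extracted into a directory of your choice with unzip’s -d option –

$ unzip tutorial.zip -d /tmp/restore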

References –

https://linux.die.net/man/1/zip

https://superuser.com/questions/216617/view-list-of-files-in-zip-archive-on-linux

Linux – how to avoid running an alias command in shell


In some cases, you might have multiple binaries, scripts or aliases with the same name on your system. Under certain circumstances you want to run the actual command, not an alias of it. Here are some ways to do it.

The “ls” command is usually aliased to color the output, for instance –

$ type ls
ls is aliased to `ls --color=auto'

Precede the command with “command” or escape it with a backslash (“\”) –

$ command ls /tmp/tutorial/
chapter-one  readme

$ \ls /tmp/tutorial/
chapter-one  readme
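
Other options worth knowing (assuming bash): quoting any part of the command name also suppresses alias expansion, and unalias removes the alias for the current shell session –

$ 'ls' /tmp/tutorial/      # quoted command names are not checked for aliases
$ unalias ls               # removes the alias until it is defined again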

References –

https://www.tldp.org/LDP/abs/html/aliases.html

https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html

Linux – Cannot assign requested address


While running a performance test against a local web service, I encountered the error below –

$ ab -n 600000 -c 10000 http://localhost:8080/test
...
Benchmarking localhost (be patient)

Test aborted after 10 failures

apr_socket_connect(): Cannot assign requested address (99)

Clearly the total number of requests (-n) and the concurrency level (-c) are high. But would it be possible to tweak my system so that it can handle this? Apparently yes, after doing some reading on the ephemeral port range. A TCP connection is identified by a 4-tuple of source IP/port and destination IP/port. In our case, the source and destination IP are fixed (127.0.0.1), as is the destination port (8080). How many source ports do we have to work with?

$ cat /proc/sys/net/ipv4/ip_local_port_range 
32768	60999

$ echo $((60999-32768))
28231

By increasing this port range, the system can open more concurrent connections to the same destination. Run the command below as root –

root@lindell:~# echo "16000 65535" > /proc/sys/net/ipv4/ip_local_port_range
root@lindell:~# cat /proc/sys/net/ipv4/ip_local_port_range
16000	65535
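
Note that this change does not survive a reboot. To make it persistent, the same setting can be applied through sysctl (a minimal sketch; adjust the range to your needs) –

root@lindell:~# echo 'net.ipv4.ip_local_port_range = 16000 65535' >> /etc/sysctl.conf
root@lindell:~# sysctl -p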

The performance test now runs successfully –


$ ab -n 600000 -c 10000 http://localhost:8080/test
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 60000 requests
Completed 120000 requests
Completed 180000 requests
Completed 240000 requests
Completed 300000 requests
Completed 360000 requests
Completed 420000 requests
Completed 480000 requests
Completed 540000 requests
Completed 600000 requests
Finished 600000 requests


Server Software:        
Server Hostname:        localhost
Server Port:            8080

Document Path:          /test
Document Length:        13 bytes

Concurrency Level:      10000
Time taken for tests:   122.307 seconds
Complete requests:      600000
Failed requests:        0
Total transferred:      78000000 bytes
HTML transferred:       7800000 bytes
Requests per second:    4905.69 [#/sec] (mean)
Time per request:       2038.449 [ms] (mean)
Time per request:       0.204 [ms] (mean, across all concurrent requests)
Transfer rate:          622.79 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      308  848 180.0    833    3955
Processing:   293 1175 198.5   1190    1967
Waiting:       88  882 210.3    946    1738
Total:        932 2023 208.9   2018    5146

Percentage of the requests served within a certain time (ms)
  50%   2018
  66%   2085
  75%   2115
  80%   2138
  90%   2216
  95%   2298
  98%   2411
  99%   2961
 100%   5146 (longest request)


$ netstat -talpn |grep '127.0.0.1:8080' |wc -l
34241


References –

https://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html

https://httpd.apache.org/docs/2.4/programs/ab.html


In Linux, the find command is most commonly used to search for files using different criteria such as file name, size and modified time. Did you know that you can search for files by inode number as well? Here is how to do it.

With “ls” we can find the inode number –

$ ls -li /etc/hosts
1576843 -rw-r--r-- 1 root root 311 Jan 21  2017 /etc/hosts

Using the “-inum” option of the find command, we can locate a file name and its path by inode number.

$ find /etc -type f -inum 1576843 2>/dev/null 
/etc/hosts

$ cat $(find /etc -type f -inum 1576843 2>/dev/null)
127.0.0.1	localhost
127.0.1.1	ubuntu
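
Since hard links to a file share the same inode number, the same kind of search will list every hard-linked name on a filesystem (a sketch; -xdev keeps find on one filesystem, as inode numbers are only unique per filesystem) –

$ find / -xdev -inum 1576843 2>/dev/null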

References

http://man7.org/linux/man-pages/man7/inode.7.html

http://man7.org/linux/man-pages/man1/find.1.html

The contents of most text files change during the life of the file, and it is common to find yourself trying to search and replace certain text across multiple files. In Linux, this is a fairly easy task. Let us go through some of the commands you will need to perform this task and then finally construct a one-liner to do the job.

  • grep is your best friend when it comes to finding a string in a file. In this case we are looking for the string “REPLACEME” in the current directory and across multiple files –
$ grep -r REPLACEME *
host.conf:# The "REPLACEME" line is only used by old versions of the C library.
host.conf:order hosts,REPLACEME,bind
hostname:REPLACEME
hosts.deny:ALL: REPLACEME

If we are interested only in the files which contain this particular text –

$ grep -lr REPLACEME *
host.conf
hostname
hosts.deny
  • sed is the tool of choice for in-place editing of files –
$ cat data 
This text will be replaced - REPLACEME
$ sed -i 's/REPLACEME/NEWTEXT/g' data 
$ cat data 
This text will be replaced - NEWTEXT

From here, there are multiple ways to skin the cat – we can loop through the files and do the replacement, or we can let sed do the replacement with a wildcard.

For-loop style update –

$ for f in $(grep -lr REPLACEME *); do echo "*** File: ${f} ***" ; sed -i 's/REPLACEME/NEWTEXT/g' $f; done
*** File: host.conf ***
*** File: hostname ***
*** File: hosts.deny ***

$ grep -lr REPLACEME *

$ grep -lr NEWTEXT *
data
host.conf
hostname
hosts.deny

Actually, the above for loop is redundant; sed can make changes across multiple files –

$ sed -i 's/REPLACEME/NEWTEXT/g' *
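
If the files are spread across subdirectories, grep and sed can be combined into the promised one-liner (a sketch, assuming GNU grep and sed; -Z and -0 keep file names with spaces intact) –

$ grep -rlZ REPLACEME . | xargs -0 sed -i 's/REPLACEME/NEWTEXT/g'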

How to install the Google Cloud Platform (GCP) SDK – gcloud CLI tool


The instructions below were tested on Ubuntu Linux.

gcloud is the command line interface (CLI) tool for interacting with GCP services. Per Google’s product overview page for gcloud – “The Cloud SDK is a set of tools for Cloud Platform. It contains gcloud, gsutil, and bq, which you can use to access Google Compute Engine, Google Cloud Storage, Google BigQuery, and other products and services from the command-line. You can run these tools interactively or in your automated scripts”.

Let us download, install and initialize this tool in an interactive manner, accepting all default settings for all prompts –

$ curl https://sdk.cloud.google.com | bash && exec -l $SHELL
$ gcloud init

If the above installation steps go well, check the installed version –

$ gcloud version
Google Cloud SDK 224.0.0
bq 2.0.36
core 2018.11.02
gsutil 4.34

A simple way to validate that the CLI is working as expected is to list all the GCP regions –

$ gcloud compute regions list
NAME                     CPUS  DISKS_GB  ADDRESSES  RESERVED_ADDRESSES  STATUS  TURNDOWN_DATE
asia-east1               0/8   0/2048    0/8        0/1                 UP
asia-east2               0/8   0/2048    0/8        0/1                 UP
asia-northeast1          0/8   0/2048    0/8        0/1                 UP
asia-south1              0/8   0/2048    0/8        0/1                 UP
asia-southeast1          0/8   0/2048    0/8        0/1                 UP
australia-southeast1     0/8   0/2048    0/8        0/1                 UP
europe-north1            0/8   0/2048    0/8        0/1                 UP
europe-west1             0/8   0/2048    0/8        0/1                 UP
europe-west2             0/8   0/2048    0/8        0/1                 UP
europe-west3             0/8   0/2048    0/8        0/1                 UP
europe-west4             0/8   0/2048    0/8        0/1                 UP
northamerica-northeast1  0/8   0/2048    0/8        0/1                 UP
southamerica-east1       0/8   0/2048    0/8        0/1                 UP
us-central1              0/8   0/2048    0/8        0/1                 UP
us-east1                 2/8   31/2048   2/8        0/1                 UP
us-east4                 0/8   0/2048    0/8        0/1                 UP
us-west1                 0/8   0/2048    0/8        0/1                 UP
us-west2                 0/8   0/2048    0/8        0/1                 UP

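From here, it is often convenient to set a default region and zone so later commands do not have to prompt for one (a sketch; substitute the region and zone you actually use) –

$ gcloud config set compute/region us-east1
$ gcloud config set compute/zone us-east1-c
$ gcloud config list
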
Only the core components of the gcloud SDK are installed during the initial installation. To interact with additional GCP services, you have to install the corresponding component. For instance, to interact with Google Kubernetes Engine (GKE) you have to install kubectl –


gcloud components install kubectl

Many GCP features are available in Beta only; for those, you have to install the beta component –


gcloud components install beta

Stay up to date with –

gcloud components update 


Tab completion and running commands from the beta component –


$ gcloud beta container  [tab][tab]
binauthz  clusters  get-server-config  images  node-pools  operations  subnets

$ gcloud beta container get-server-config
Fetching server config for us-east1-c
defaultClusterVersion: 1.9.7-gke.7
defaultImageType: COS
validImageTypes:
- COS
- UBUNTU
- COS_CONTAINERD
validMasterVersions:
- 1.11.2-gke.15
- 1.10.9-gke.3
- 1.10.7-gke.9
- 1.10.6-gke.9
- 1.9.7-gke.7
validNodeVersions:
- 1.11.2-gke.15
- 1.11.2-gke.9
- 1.10.9-gke.3
- 1.10.9-gke.0
- 1.10.7-gke.9
- 1.10.7-gke.6
- 1.10.7-gke.2
- 1.10.7-gke.1
- 1.10.6-gke.9
- 1.10.6-gke.6
- 1.10.6-gke.4
- 1.10.6-gke.3
- 1.10.6-gke.2
- 1.10.6-gke.1
- 1.10.5-gke.4
- 1.10.5-gke.3
- 1.10.5-gke.2
- 1.10.5-gke.0
- 1.10.4-gke.3
- 1.10.4-gke.2
- 1.10.4-gke.0
- 1.10.2-gke.4
- 1.10.2-gke.3
- 1.10.2-gke.1
- 1.9.7-gke.7
- 1.9.7-gke.6
- 1.9.7-gke.5
- 1.9.7-gke.4
- 1.9.7-gke.3
- 1.9.7-gke.1
- 1.9.7-gke.0
- 1.9.6-gke.2
- 1.9.6-gke.1
- 1.9.3-gke.0
- 1.8.12-gke.3
- 1.8.12-gke.2
- 1.8.12-gke.1
- 1.8.12-gke.0
- 1.8.10-gke.2
- 1.8.10-gke.0
- 1.8.9-gke.1
- 1.8.8-gke.0
- 1.7.15-gke.0
- 1.7.12-gke.2
- 1.6.13-gke.1
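
If tab completion is not active in your shell, sourcing the completion script that ships with the SDK usually enables it (a sketch; the path depends on where the installer placed the SDK, typically ~/google-cloud-sdk) –

$ source ~/google-cloud-sdk/completion.bash.inc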

References –

Installation – https://cloud.google.com/sdk/docs/downloads-interactive#linux

SDK Components – https://cloud.google.com/sdk/docs/components

Tips and Tricks – https://cloudplatform.googleblog.com/2014/03/tips-and-tricks-command-line-access-to.html