curl – use variables to show response times and other parameters
curl is a tool for transferring data to and from a server. Although it supports many protocols, it is most commonly used with HTTP/S. It is sort of a browser for CLI folks and a go-to tool when writing scripts that interact with servers.
In addition to transferring data, how do we show request and response parameters with curl? The answer is write-out variables; the complete list of variables can be found in the curl man page (see the references below).
Example – use “time_total” to show the total time, in seconds, that the full operation lasted.
$ curl -w %{time_total} https://www.gcplinux.com
1.149143
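Multiple variables can be combined directly on the command line as well. A minimal sketch, reusing the example URL above (-s hides the progress meter and -o /dev/null discards the response body):
$ curl -s -o /dev/null -w "%{http_code} %{time_total}\n" https://www.gcplinux.com
This prints the HTTP status code followed by the total time.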
For better formatting, it is best to add the variables to a file and have curl reference that file. Here I have added several HTTP request and response variables I am interested in, such as num_connects, size_download, size_header, time_namelookup, time_pretransfer etc.
daniel@hidmo:/tmp$ cat ccurl.txt
url_effective: %{url_effective}\n
content_type: %{content_type}\n
http_code: %{http_code}\n
http_version: %{http_version}\n
num_connects: %{num_connects}\n
num_redirects: %{num_redirects}\n
remote_ip: %{remote_ip}\n
size_download: %{size_download}\n
size_header: %{size_header}\n
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
daniel@hidmo:/tmp$ curl -H 'Cache-Control: no-cache' -L -w "@ccurl.txt" -o /dev/null -s https://www.gcplinux.com
url_effective: https://gcplinux.com/
content_type: text/html; charset=UTF-8
http_code: 200
http_version: 1.1
num_connects: 2
num_redirects: 1
remote_ip: 162.247.79.246
size_download: 71273
size_header: 537
time_namelookup: 0.008585
time_connect: 0.082511
time_appconnect: 0.264110
time_pretransfer: 0.264293
time_redirect: 1.287257
time_starttransfer: 3.077526
----------
time_total: 3.177939
As for the time-related variables, listed below are the ones you will most likely use. Note that each timer is cumulative, measured from the start of the operation rather than from the end of the previous phase –
- time_appconnect The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed. (Added in 7.19.0)
- time_connect The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed.
- time_namelookup The time, in seconds, it took from the start until the name resolving was completed.
- time_pretransfer The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved.
- time_redirect The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started. time_redirect shows the complete execution time for multiple redirections. (Added in 7.12.3)
- time_starttransfer The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes time_pretransfer and also the time the server needed to calculate the result.
- time_total The total time, in seconds, that the full operation lasted.
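Since each timer is measured from the start of the operation, per-phase durations are the differences between consecutive timers. A quick sketch that computes the gaps with awk (the URL is the example used above; output will vary per run):
$ curl -s -o /dev/null -w "%{time_namelookup} %{time_connect} %{time_appconnect} %{time_starttransfer} %{time_total}\n" https://www.gcplinux.com | awk '{printf "dns=%.3f tcp=%.3f tls=%.3f ttfb=%.3f total=%.3f\n", $1, $2-$1, $3-$2, $4-$3, $5}'
Here, tcp is the TCP handshake time after DNS resolution, tls is the TLS handshake after the TCP connect, and ttfb is the wait for the first byte after the handshakes complete.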
References –
https://curl.haxx.se/docs/manpage.html
https://stackoverflow.com/questions/18215389/how-do-i-measure-request-and-response-times-at-once-using-curl
How to zip or compress a folder or directory in Linux
In Linux and similar operating systems, the zip utility is used to package and compress (archive) files.
Let us get straight to the action; we have a folder to compress with the zip tool –
daniel@hidmo:/tmp/tutorial$ tree .
.
└── zip-tutorial
    ├── chapter-1
    │   └── content
    ├── chapter-2
    │   └── readme
    └── zip.txt
daniel@hidmo:/tmp/tutorial$ zip -r tutorial.zip zip-tutorial/
adding: zip-tutorial/ (stored 0%)
adding: zip-tutorial/zip.txt (deflated 55%)
adding: zip-tutorial/chapter-2/ (stored 0%)
adding: zip-tutorial/chapter-2/readme (deflated 55%)
adding: zip-tutorial/chapter-1/ (stored 0%)
adding: zip-tutorial/chapter-1/content (deflated 57%)
Basically we use “zip -r DESTINATION-FILE.ZIP FOLDER-TO-COMPRESS” to compress a directory. Or, in short, “zip -r DESTINATION-FILE DIRECTORY-TO-COMPRESS” – we can skip the .zip extension.
daniel@hidmo:/tmp/tutorial$ zip -r tutorial zip-tutorial/
updating: zip-tutorial/ (stored 0%)
adding: zip-tutorial/zip.txt (deflated 55%)
adding: zip-tutorial/chapter-2/ (stored 0%)
adding: zip-tutorial/chapter-2/readme (deflated 55%)
adding: zip-tutorial/chapter-1/ (stored 0%)
adding: zip-tutorial/chapter-1/content (deflated 57%)
Note the “updating” prefix in the second run – tutorial.zip already existed from the first run, so zip updated it in place. To view the contents of the archive without extracting it –
daniel@hidmo:/tmp/tutorial$ unzip -l tutorial.zip
Archive:  tutorial.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2019-10-07 21:45   zip-tutorial/
     1202  2019-10-07 21:45   zip-tutorial/zip.txt
        0  2019-10-07 21:45   zip-tutorial/chapter-2/
     1202  2019-10-07 21:45   zip-tutorial/chapter-2/readme
        0  2019-10-07 21:44   zip-tutorial/chapter-1/
      722  2019-10-07 21:44   zip-tutorial/chapter-1/content
---------                     -------
     3126                     6 files
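For completeness, extracting the archive is just as simple. A quick sketch – the -d option selects a destination directory (the /tmp/restore path is illustrative; without -d, unzip extracts into the current directory):
daniel@hidmo:/tmp/tutorial$ unzip tutorial.zip -d /tmp/restore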
References –
https://linux.die.net/man/1/zip
https://superuser.com/questions/216617/view-list-of-files-in-zip-archive-on-linux
Linux – how to avoid running an alias command in shell
In some cases, you might have multiple binaries, scripts or aliases with the same name on your system. Under certain circumstances you want to run only the actual command, not an alias of it. Here are some ways to do it.
The “ls” command is usually aliased to color the output, for instance –
$ type ls
ls is aliased to `ls --color=auto'
Precede the command with “command” or a backslash (“\”) to bypass the alias –
$ command ls /tmp/tutorial/
chapter-one readme
$ \ls /tmp/tutorial/
chapter-one readme
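Quoting any part of the command name also prevents alias expansion, as does calling the binary by its full path (the /bin/ls path may differ across distributions):
$ 'ls' /tmp/tutorial/
chapter-one readme
$ /bin/ls /tmp/tutorial/
chapter-one readme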
References –
https://www.tldp.org/LDP/abs/html/aliases.html
https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html
Linux – Cannot assign requested address
While running a performance test against a local web service, I encountered the error below –
$ ab -n 600000 -c 10000 http://localhost:8080/test
...
Benchmarking localhost (be patient)
Test aborted after 10 failures
apr_socket_connect(): Cannot assign requested address (99)
Clearly the total number of requests (-n) and concurrent connections (-c) is high. But would it be possible to tweak my system so that it can handle this? Apparently yes. Some reading on ephemeral port ranges pointed the way. A TCP connection is identified by a 4-tuple of source IP/port and destination IP/port. In our case, the source and destination IP are fixed (127.0.0.1), as is the destination port (8080), so the only thing that varies is the source port. How many source ports do we have?
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 60999
$ echo $((60999-32768))
28231
By increasing this port range, the system can open more concurrent outbound connections. Run the command below as root –
root@lindell:~# echo "16000 65535" > /proc/sys/net/ipv4/ip_local_port_range
root@lindell:~# cat /proc/sys/net/ipv4/ip_local_port_range
16000 65535
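Note that writing to /proc does not survive a reboot. To make the change persistent, the usual approach is sysctl – a sketch, appending to a stock /etc/sysctl.conf:
root@lindell:~# sysctl -w net.ipv4.ip_local_port_range="16000 65535"
net.ipv4.ip_local_port_range = 16000 65535
root@lindell:~# echo "net.ipv4.ip_local_port_range = 16000 65535" >> /etc/sysctl.conf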
The performance test now runs successfully –
$ ab -n 600000 -c 10000 http://localhost:8080/test
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 60000 requests
Completed 120000 requests
Completed 180000 requests
Completed 240000 requests
Completed 300000 requests
Completed 360000 requests
Completed 420000 requests
Completed 480000 requests
Completed 540000 requests
Completed 600000 requests
Finished 600000 requests
Server Software:
Server Hostname: localhost
Server Port: 8080
Document Path: /test
Document Length: 13 bytes
Concurrency Level: 10000
Time taken for tests: 122.307 seconds
Complete requests: 600000
Failed requests: 0
Total transferred: 78000000 bytes
HTML transferred: 7800000 bytes
Requests per second: 4905.69 [#/sec] (mean)
Time per request: 2038.449 [ms] (mean)
Time per request: 0.204 [ms] (mean, across all concurrent requests)
Transfer rate: 622.79 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      308  848 180.0    833    3955
Processing:   293 1175 198.5   1190    1967
Waiting:       88  882 210.3    946    1738
Total:        932 2023 208.9   2018    5146
Percentage of the requests served within a certain time (ms)
50% 2018
66% 2085
75% 2115
80% 2138
90% 2216
95% 2298
98% 2411
99% 2961
100% 5146 (longest request)
$ netstat -talpn |grep '127.0.0.1:8080' |wc -l
34241
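Most of those sockets linger in TIME_WAIT once the test finishes. On modern systems, ss is the usual replacement for netstat and can filter by connection state directly – a quick sketch:
$ ss -tan state time-wait | wc -l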
References –
https://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
https://httpd.apache.org/docs/2.4/programs/ab.html
Linux – how to find a file using its inode number
In Linux, the find command is most commonly used to search for files using different criteria such as file name, size and modified time. Did you know that you can search for files by inode number as well? Here is how to do it.
With “ls” we can find the inode number –
$ ls -li /etc/hosts
1576843 -rw-r--r-- 1 root root 311 Jan 21 2017 /etc/hosts
Using the “-inum” option of the find command, we can locate a file and its path by inode number.
$ find /etc -type f -inum 1576843 2>/dev/null
/etc/hosts
$ cat $(find /etc -type f -inum 1576843 2>/dev/null)
127.0.0.1 localhost
127.0.1.1 ubuntu
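One practical use of searching by inode is removing a file whose name is hard to type or quote (control characters, a leading dash, etc.). A sketch – the directory and inode number below are hypothetical:
$ find /tmp/tutorial -inum 1576850 -delete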
References –
http://man7.org/linux/man-pages/man7/inode.7.html
http://man7.org/linux/man-pages/man1/find.1.html
Linux – how to search and replace text across multiple files
The contents of most text files change during the life of the file, and it is common to find yourself searching for and replacing certain text across multiple files. In Linux, this is a fairly easy task. Let us go through the commands you will need to perform this task and then construct a one-liner to do the job.
- grep is your best friend when it comes to finding a string in a file. In this case we are looking for the string “REPLACEME” across multiple files in the current directory –
$ grep -r REPLACEME *
host.conf:# The "REPLACEME" line is only used by old versions of the C library.
host.conf:order hosts,REPLACEME,bind
hostname:REPLACEME
hosts.deny:ALL: REPLACEME
If we are interested only in the names of the files that contain this particular text –
$ grep -lr REPLACEME *
host.conf
hostname
hosts.deny
- sed is the tool of choice for in-place editing of files –
$ cat data
This text will be replaced - REPLACEME
$ sed -i 's/REPLACEME/NEWTEXT/g' data
$ cat data
This text will be replaced - NEWTEXT
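Tip: to preview a substitution without modifying the file, drop the -i flag – sed then writes the result to stdout and leaves the file untouched (somefile here is a placeholder for any file you want to test against):
$ sed 's/REPLACEME/NEWTEXT/g' somefile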
From here, there are multiple ways to skin the cat – we can loop through the matching files and do the replacement, or we can let sed handle multiple files directly with a wildcard.
For-loop style update –
$ for f in $(grep -lr REPLACEME *); do echo "*** File: ${f} ***" ; sed -i 's/REPLACEME/NEWTEXT/g' $f; done
*** File: host.conf ***
*** File: hostname ***
*** File: hosts.deny ***
$ grep -lr REPLACEME *
$ grep -lr NEWTEXT *
data
host.conf
hostname
hosts.deny
Actually, the above for loop is redundant – sed can make changes across multiple files directly –
sed -i 's/REPLACEME/NEWTEXT/g' *
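One caveat: the wildcard hands every entry in the current directory to sed, not just the files that contain the string. A more targeted variant feeds grep's matches to sed through xargs – a sketch using null-terminated names so paths with spaces survive:
$ grep -rlZ REPLACEME . | xargs -0 sed -i 's/REPLACEME/NEWTEXT/g'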