Using curl to get help on Linux commands, programming languages and more. The most comprehensive cheat sheet.
If you are looking for a Linux and programming cheat sheet, please check
https://github.com/chubin/cheat.sh
It provides nicely colored help pages, with plenty of examples, right in the CLI. Here are some sample runs I did.
Curl cheat sheet
daniel@hidmo:/tmp$ curl cheat.sh/curl
# Download a single file
curl http://path.to.the/file
# Download a file and specify a new filename
curl http://example.com/file.zip -o new_file.zip
# Download multiple files
curl -O URLOfFirstFile -O URLOfSecondFile
# Download all sequentially numbered files (1-24)
curl http://example.com/pic[1-24].jpg
# Download a file and follow redirects
curl -L http://example.com/file
# Download a file and pass HTTP Authentication
curl -u username:password URL
# Download a file with a Proxy
curl -x proxyserver.server.com:PORT http://addressiwantto.access
# Download a file from FTP
curl -u username:password -O ftp://example.com/pub/file.zip
# Get an FTP directory listing
curl ftp://username:password@example.com
# Resume a previously failed download
curl -C - -o partial_file.zip http://example.com/file.zip
# Fetch only the HTTP headers from a response
curl -I http://example.com
# Fetch your external IP and network info as JSON
curl http://ifconfig.me/all/json
# Limit the rate of a download
curl --limit-rate 1000B -O http://path.to.the/file
# POST to a form
curl -F "name=user" -F "password=test" http://example.com
# POST JSON Data
curl -H "Content-Type: application/json" -X POST -d '{"user":"bob","pass":"123"}' http://example.com
# POST data from standard input / share data on sprunge.us
curl -F 'sprunge=<-' sprunge.us
Python lists cheat sheet
daniel@hidmo:/tmp$ curl cheat.sh/python/list
# python - Why does += behave unexpectedly on lists?
#
# The general answer is that += tries to call the __iadd__ special
# method, and if that isn't available it tries to use __add__ instead.
# So the issue is with the difference between these special methods.
#
# The __iadd__ special method is for an in-place addition, that is it
# mutates the object that it acts on. The __add__ special method returns
# a new object and is also used for the standard + operator.
#
# So when the += operator is used on an object which has an __iadd__
# defined the object is modified in place. Otherwise it will instead try
# to use the plain __add__ and return a new object.
#
# That is why for mutable types like lists += changes the object's
# value, whereas for immutable types like tuples, strings and integers a
# new object is returned instead (a += b becomes equivalent to a = a +
# b).
#
# For types that support both __iadd__ and __add__ you therefore have to
# be careful which one you use. a += b will call __iadd__ and mutate a,
# whereas a = a + b will create a new object and assign it to a. They
# are not the same operation!
>>> a1 = a2 = [1, 2]
>>> b1 = b2 = [1, 2]
>>> a1 += [3] # Uses __iadd__, modifies a1 in-place
>>> b1 = b1 + [3] # Uses __add__, creates new list, assigns it to b1
>>> a2
[1, 2, 3] # a1 and a2 are still the same list
>>> b2
[1, 2] # whereas only b1 was changed
# For immutable types (where you don't have an __iadd__) a += b and a =
# a + b are equivalent. This is what lets you use += on immutable types,
# which might seem a strange design decision until you consider that
# otherwise you couldn't use += on immutable types like numbers!
#
# [Scott Griffiths] [so/q/2347265] [cc by-sa 3.0]
Golang concurrency cheat sheet
daniel@hidmo:/tmp$ curl cheat.sh/go/concurrency
/*
* go - When should I use concurrency in golang?
*
* Not an expert in Go (yet) but I'd say:
*
* Whenever it is easiest to do so.
*
* The beauty of the concurrency model in Go is that it is not
* fundamentally a multi-core architecture with checks and balances where
* things usually break - it is a multi-threaded paradigm that not only
* fits well into a multi-core architecture, it also fits well into a
* distributed system architecture.
*
* You do not have to make special arrangements for multiple goroutines
* to work together harmoniously - they just do!
*
* Here's an example of a naturally concurrent algorithm - I want to
* merge multiple channels into one. Once all of the input channels are
* exhausted I want to close the output channel.
*
* It is just simpler to use concurrency - in fact it doesn't even look
* like concurrency - it looks almost procedural.
*/
/*
Multiplex a number of channels into one.
*/
func Mux(channels []chan big.Int) chan big.Int {
	// Count down as each channel closes. When hits zero - close ch.
	var wg sync.WaitGroup
	wg.Add(len(channels))
	// The channel to output to.
	ch := make(chan big.Int, len(channels))
	// Make one go per channel.
	for _, c := range channels {
		go func(c <-chan big.Int) {
			// Pump it.
			for x := range c {
				ch <- x
			}
			// It closed.
			wg.Done()
		}(c)
	}
	// Close the channel when the pumping is finished.
	go func() {
		// Wait for everyone to be done.
		wg.Wait()
		// Close.
		close(ch)
	}()
	return ch
}
/*
* The only concession I have to make to concurrency here is to use a
* sync.WaitGroup as a counter for concurrent counting.
*
* Note that this is not purely my own work - I had a great deal of help
* with this here (https://stackoverflow.com/q/19192377/823393).
*
* [OldCurmudgeon] [so/q/19747950] [cc by-sa 3.0]
*/
Please check
https://github.com/chubin/cheat.sh for more information on installation and its comprehensive feature set.
curl – use variables to show response times and other parameters
curl is a tool for transferring data to and from a server. Although it supports various protocols, it is most commonly used with HTTP/S. It is sort of a browser for CLI folks, and a go-to tool when writing scripts that interact with servers.
In addition to transferring data, how do we show request and response parameters with curl? The answer is variables, passed via the -w (--write-out) option; the complete list of variables can be found in the curl man page.
Example – use “time_total” to show the total time, in seconds, that the full operation lasted.
$ curl -w %{time_total} https://www.gcplinux.com
1.149143
It is best to put the variables in a file and have curl reference the file, for better formatting. Here I have added several HTTP request and response variables I am interested in, such as num_connects, size_download, size_header, time_namelookup, time_pretransfer, etc.
daniel@hidmo:/tmp$ cat ccurl.txt
url_effective: %{url_effective}\n
content_type: %{content_type}\n
http_code: %{http_code}\n
http_version: %{http_version}\n
num_connects: %{num_connects}\n
num_redirects: %{num_redirects}\n
remote_ip: %{remote_ip}\n
size_download: %{size_download}\n
size_header: %{size_header}\n
time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
daniel@hidmo:/tmp$ curl -H 'Cache-Control: no-cache' -L -w "@ccurl.txt" -o /dev/null -s https://www.gcplinux.com
url_effective: https://gcplinux.com/
content_type: text/html; charset=UTF-8
http_code: 200
http_version: 1.1
num_connects: 2
num_redirects: 1
remote_ip: 162.247.79.246
size_download: 71273
size_header: 537
time_namelookup: 0.008585
time_connect: 0.082511
time_appconnect: 0.264110
time_pretransfer: 0.264293
time_redirect: 1.287257
time_starttransfer: 3.077526
----------
time_total: 3.177939
As for the time-related variables, listed below are the ones you will most likely use –
- time_appconnect The time, in seconds, it took from the start until the SSL/SSH/etc connect/handshake to the remote host was completed. (Added in 7.19.0)
- time_connect The time, in seconds, it took from the start until the TCP connect to the remote host (or proxy) was completed.
- time_namelookup The time, in seconds, it took from the start until the name resolving was completed.
- time_pretransfer The time, in seconds, it took from the start until the file transfer was just about to begin. This includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved.
- time_redirect The time, in seconds, it took for all redirection steps including name lookup, connect, pretransfer and transfer before the final transaction was started. time_redirect shows the complete execution time for multiple redirections. (Added in 7.12.3)
- time_starttransfer The time, in seconds, it took from the start until the first byte was just about to be transferred. This includes time_pretransfer and also the time the server needed to calculate the result.
- time_total The total time, in seconds, that the full operation lasted.
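Since each of these timers is measured from the start of the operation, the duration of an individual phase can be derived by subtracting adjacent timers. A small sketch, using the values from the sample run above:

```shell
# Each -w timer is cumulative from the start, so phase durations are differences.
# The values below are taken from the sample run shown earlier.
time_namelookup=0.008585
time_connect=0.082511
time_appconnect=0.264110
# TCP handshake duration = time_connect - time_namelookup
awk -v a="$time_connect" -v b="$time_namelookup" 'BEGIN { printf "tcp handshake: %.6f\n", a - b }'
# -> tcp handshake: 0.073926
# TLS handshake duration = time_appconnect - time_connect
awk -v a="$time_appconnect" -v b="$time_connect" 'BEGIN { printf "tls handshake: %.6f\n", a - b }'
# -> tls handshake: 0.181599
```

The same subtraction works for any pair of adjacent timers, e.g. time_starttransfer minus time_pretransfer approximates the server's processing time.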
References –
https://curl.haxx.se/docs/manpage.html
https://stackoverflow.com/questions/18215389/how-do-i-measure-request-and-response-times-at-once-using-curl
How to zip or compress a folder or directory in Linux
In Linux and similar operating systems, the zip utility is used to package and compress (archive) files.
Let us get straight to action; we have a folder to compress with the zip tool –
daniel@hidmo:/tmp/tutorial$ tree .
.
└── zip-tutorial
    ├── chapter-1
    │   └── content
    ├── chapter-2
    │   └── readme
    └── zip.txt
daniel@hidmo:/tmp/tutorial$ zip -r tutorial.zip zip-tutorial/
adding: zip-tutorial/ (stored 0%)
adding: zip-tutorial/zip.txt (deflated 55%)
adding: zip-tutorial/chapter-2/ (stored 0%)
adding: zip-tutorial/chapter-2/readme (deflated 55%)
adding: zip-tutorial/chapter-1/ (stored 0%)
adding: zip-tutorial/chapter-1/content (deflated 57%)
Basically, we use “zip -r DESTINATION-FILE.ZIP FOLDER-TO-COMPRESS” to compress a directory. Or, in short, “zip -r DESTINATION-FILE DIRECTORY-TO-COMPRESS” – we can skip the .zip extension.
daniel@hidmo:/tmp/tutorial$ zip -r tutorial zip-tutorial/
updating: zip-tutorial/ (stored 0%)
adding: zip-tutorial/zip.txt (deflated 55%)
adding: zip-tutorial/chapter-2/ (stored 0%)
adding: zip-tutorial/chapter-2/readme (deflated 55%)
adding: zip-tutorial/chapter-1/ (stored 0%)
adding: zip-tutorial/chapter-1/content (deflated 57%)
To view the contents of the compressed folder without uncompressing it –
daniel@hidmo:/tmp/tutorial$ unzip -l tutorial.zip
Archive: tutorial.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  2019-10-07 21:45   zip-tutorial/
     1202  2019-10-07 21:45   zip-tutorial/zip.txt
        0  2019-10-07 21:45   zip-tutorial/chapter-2/
     1202  2019-10-07 21:45   zip-tutorial/chapter-2/readme
        0  2019-10-07 21:44   zip-tutorial/chapter-1/
      722  2019-10-07 21:44   zip-tutorial/chapter-1/content
---------                     -------
     3126                     6 files
References –
https://linux.die.net/man/1/zip
https://superuser.com/questions/216617/view-list-of-files-in-zip-archive-on-linux
Linux – how to avoid running an alias command in shell
In some cases, you might have multiple binaries, scripts or aliases with the same name on your system. Under certain circumstances you want to run the actual command, not an alias of it. Here are some ways to do it.
The “ls” command is usually aliased to color the output, for instance –
$ type ls
ls is aliased to `ls --color=auto'
Precede the command with “command” or “\”
$ command ls /tmp/tutorial/
chapter-one readme
$ \ls /tmp/tutorial/
chapter-one readme
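Quoting any part of the command name also prevents alias expansion, since bash only expands unquoted command words. A small sketch shadowing the true builtin (the alias is made up for the demo; note that non-interactive shells need expand_aliases enabled first):

```shell
shopt -s expand_aliases          # aliases are off by default in scripts
alias true='echo [aliased]'      # shadow the true builtin, for the demo
true                             # alias expands            -> prints [aliased]
command true                     # bypasses the alias       -> runs the builtin, no output
\true                            # backslash skips expansion -> no output
'true'                           # quoting skips expansion   -> no output
unalias true                     # clean up
```

The expand_aliases line is why an alias that works at your interactive prompt may silently do nothing inside a script.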
References –
https://www.tldp.org/LDP/abs/html/aliases.html
https://www.gnu.org/software/bash/manual/html_node/Bash-Builtins.html
Linux – Cannot assign requested address
While running a performance test against a local web service, I encountered the error below –
$ ab -n 600000 -c 10000 http://localhost:8080/test
...
Benchmarking localhost (be patient)
Test aborted after 10 failures
apr_socket_connect(): Cannot assign requested address (99)
Clearly, the total number of requests (-n) and concurrent connections (-c) is high. But would it be possible to tweak my system so that it can handle this? Apparently, yes. Some reading pointed me to the ephemeral port range. A TCP connection is identified by a 4-tuple of source IP/port and destination IP/port. In our case, the source and destination IP are fixed (127.0.0.1), as is the destination port (8080). How many source ports do we have?
$ cat /proc/sys/net/ipv4/ip_local_port_range
32768 60999
$ echo $((60999-32768))
28231
By increasing this port range, the system can accept more concurrent connections. Run the command below as root –
root@lindell:~# echo "16000 65535" > /proc/sys/net/ipv4/ip_local_port_range
root@lindell:~# cat /proc/sys/net/ipv4/ip_local_port_range
16000 65535
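Note that writing to /proc does not survive a reboot. A sketch of the equivalent sysctl approach, with the persistence step shown as comments since it needs root:

```shell
# Apply at runtime (equivalent to the echo above); run as root:
#   sysctl -w net.ipv4.ip_local_port_range="16000 65535"
# Persist across reboots by adding it to /etc/sysctl.conf (or /etc/sysctl.d/):
#   echo 'net.ipv4.ip_local_port_range = 16000 65535' >> /etc/sysctl.conf
#   sysctl -p
# The widened range provides this many source ports:
echo $((65535 - 16000))                # -> 49535
```

That is roughly 21,000 more source ports than the default range computed earlier.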
The performance test now runs successfully –
$ ab -n 600000 -c 10000 http://localhost:8080/test
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 60000 requests
Completed 120000 requests
Completed 180000 requests
Completed 240000 requests
Completed 300000 requests
Completed 360000 requests
Completed 420000 requests
Completed 480000 requests
Completed 540000 requests
Completed 600000 requests
Finished 600000 requests
Server Software:
Server Hostname: localhost
Server Port: 8080
Document Path: /test
Document Length: 13 bytes
Concurrency Level: 10000
Time taken for tests: 122.307 seconds
Complete requests: 600000
Failed requests: 0
Total transferred: 78000000 bytes
HTML transferred: 7800000 bytes
Requests per second: 4905.69 [#/sec] (mean)
Time per request: 2038.449 [ms] (mean)
Time per request: 0.204 [ms] (mean, across all concurrent requests)
Transfer rate: 622.79 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      308   848 180.0    833    3955
Processing:   293  1175 198.5   1190    1967
Waiting:       88   882 210.3    946    1738
Total:        932  2023 208.9   2018    5146
Percentage of the requests served within a certain time (ms)
50% 2018
66% 2085
75% 2115
80% 2138
90% 2216
95% 2298
98% 2411
99% 2961
100% 5146 (longest request)
$ netstat -talpn |grep '127.0.0.1:8080' |wc -l
34241
References –
https://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html
https://httpd.apache.org/docs/2.4/programs/ab.html
In Linux, the find command is most commonly used to search for files using different criteria such as file name, size and modification time. Did you know that you can search for files by inode number as well? Here is how to do it.
With “ls” we can find the inode number –
$ ls -li /etc/hosts
1576843 -rw-r--r-- 1 root root 311 Jan 21 2017 /etc/hosts
Using the “-inum” option of the find command, we can locate a file name and its path by its inode number.
$ find /etc -type f -inum 1576843 2>/dev/null
/etc/hosts
$ cat $(find /etc -type f -inum 1576843 2>/dev/null)
127.0.0.1 localhost
127.0.1.1 ubuntu
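One common use for -inum is deleting a file whose name is awkward to type or quote, such as one containing a newline. A self-contained sketch, using a scratch directory and a made-up file name:

```shell
cd "$(mktemp -d)"                          # work in a scratch directory
touch -- $'bad\nname'                      # a file name containing a newline
inum=$(ls -i | awk '{ print $1; exit }')   # grab its inode number
find . -type f -inum "$inum" -delete       # remove it by inode, not by name
ls -A                                      # nothing left
```

Since the inode identifies the file directly, no quoting gymnastics are needed for the name itself.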
References –
http://man7.org/linux/man-pages/man7/inode.7.html
http://man7.org/linux/man-pages/man1/find.1.html