Archive for the ‘ Miscellaneous ’ Category

There are several tools for compressing and decompressing files in Linux; you can find a summary of them in this link. Zip is one of the utilities used for archiving, compressing, and decompressing files.

Installation

  • Ubuntu
sudo apt-get update
sudo apt-get install zip unzip
  • RedHat or CentOS
sudo yum install unzip

Compress files

Compress files in a directory named tutorial –

$ zip -r tutorial.zip tutorial/
   adding: tutorial/ (stored 0%)
   adding: tutorial/host.conf (deflated 13%)
   adding: tutorial/hostname (stored 0%)
   adding: tutorial/hosts.deny (deflated 44%)
   adding: tutorial/hosts (deflated 35%)
   adding: tutorial/hosts.allow (deflated 42%)
   adding: tutorial/auth_sa.py (deflated 52%)
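You can also exclude files matching a pattern from the archive with the -x option. A small sketch using throwaway files (the names are illustrative); note that the pattern is quoted so the shell does not expand it before zip sees it:

```shell
# Sketch: archive a directory but exclude Python files with -x
# (throwaway illustrative files; quote the pattern so the shell leaves it alone)
mkdir -p tutorial
echo 'order hosts,bind' > tutorial/host.conf
echo 'print("hello")'   > tutorial/auth_sa.py
zip -r tutorial.zip tutorial/ -x '*.py'
unzip -l tutorial.zip
```

By default zip's wildcards match across directory separators, so '*.py' excludes tutorial/auth_sa.py as well.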

View the contents of a zip file without uncompressing it –

$ zip -sf tutorial
 Archive contains:
   tutorial/
   tutorial/host.conf
   tutorial/hostname
   tutorial/hosts.deny
   tutorial/hosts
   tutorial/hosts.allow
   tutorial/auth_sa.py
 Total 7 entries (2487 bytes)

Unzip or decompress

To decompress a zipped file, use the unzip command –

 $ unzip tutorial.zip
Archive:  tutorial.zip
   creating: tutorial/
  inflating: tutorial/host.conf
 extracting: tutorial/hostname
  inflating: tutorial/hosts.deny
  inflating: tutorial/hosts
  inflating: tutorial/hosts.allow
  inflating: tutorial/auth_sa.py

Search and compress

You can also combine the find and zip commands to search for certain types of files and compress them in a single command –

 $ find . -type f -name '*.conf' -print | zip confi-files -@
  adding: host.conf (deflated 13%)
  adding: colord.conf (deflated 50%)
  adding: ntp.conf (deflated 56%)

$ zip -sf confi-files
Archive contains:
  host.conf
  colord.conf
  ntp.conf
Total 3 entries (1858 bytes)
References –
https://linux.die.net/man/1/zip

Contents of most text files change during the life of the file, and it is common to find yourself trying to search and replace certain text across multiple files. In Linux, this is a fairly easy task. Let us go through some of the commands you will need to perform this task and then finally construct a one-liner to do the job.

  • grep is your best friend when it comes to finding a string in a file. In this case, we are looking for the string “REPLACEME” in the current directory and across multiple files –
$ grep -r REPLACEME *
host.conf:# The "REPLACEME" line is only used by old versions of the C library.
host.conf:order hosts,REPLACEME,bind
hostname:REPLACEME
hosts.deny:ALL: REPLACEME

If we are interested only in the files that contain this particular text –

$ grep -lr REPLACEME *
host.conf
hostname
hosts.deny
  • sed is the tool of choice for in-place editing of files –
$ cat data 
This text will be replaced - REPLACEME
$ sed -i 's/REPLACEME/NEWTEXT/g' data 
$ cat data 
This text will be replaced - NEWTEXT
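Since -i rewrites the file, it is worth knowing that GNU sed accepts a backup suffix (for example -i.bak) that preserves the original alongside the edited copy. A small sketch:

```shell
# Sketch: -i.bak edits in place but keeps the original as data.bak (GNU sed)
printf 'This text will be replaced - REPLACEME\n' > data
sed -i.bak 's/REPLACEME/NEWTEXT/g' data
cat data       # edited copy
cat data.bak   # untouched original
```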

From here, there are multiple ways to skin the cat – we can loop through the files and do the replacement in each, or we can let sed do the replacement with a wildcard.

For loop style update -

$ for f in $(grep -lr REPLACEME *); do echo "*** File: ${f} ***" ; sed -i 's/REPLACEME/NEWTEXT/g' $f; done
*** File: host.conf ***
*** File: hostname ***
*** File: hosts.deny ***

$ grep -lr REPLACEME *

$ grep -lr NEWTEXT *
data
host.conf
hostname
hosts.deny

The for loop above is actually redundant; sed can make changes across multiple files in one invocation –

 sed -i 's/REPLACEME/NEWTEXT/g' *
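One caveat with the wildcard form: sed complains if * matches a directory, and it rewrites every file whether or not it contains the string. A sketch of a more targeted pipeline – grep -lZ with xargs -0 recurses into subdirectories, only touches matching files, and is safe for file names with spaces:

```shell
# Sketch: only edit files that actually contain the string, recursing into
# subdirectories; -Z/-0 keep the pipeline safe for odd file names
mkdir -p demo/sub
printf 'change REPLACEME here\n' > demo/file1
printf 'nothing to change\n'     > demo/sub/file2
grep -rlZ REPLACEME demo | xargs -r -0 sed -i 's/REPLACEME/NEWTEXT/g'
grep -r NEWTEXT demo
```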

Which storage engine does a MySQL table use?


MySQL supports several storage engines, such as InnoDB, MyISAM, BLACKHOLE, and CSV. Depending on your use case, you might configure a MySQL table to use a particular storage engine. To see the list of storage engines MySQL supports, simply run “SHOW ENGINES\G” at a mysql prompt.

To find out the storage engine used by a particular table, run the ‘show table status’ command for the named table, as below. The first example is the mysql user table, which uses InnoDB –


mysql> use mysql;

mysql> show table status like 'user' \G
*************************** 1. row ***************************
Name: user
Engine: InnoDB
Version: 10
Row_format: Compact
Rows: 1
Avg_row_length: 16384
Data_length: 16384
Max_data_length: 0
Index_length: 49152
Data_free: 10485760
Auto_increment: 2
Create_time: 2013-08-26 22:52:09
Update_time: NULL
Check_time: NULL
Collation: binary
Checksum: NULL
Create_options:
Comment:
1 row in set (0.00 sec)

A sample table which uses MyISAM storage engine –


mysql> show table status like 'servers' \G
*************************** 1. row ***************************
Name: servers
Engine: MyISAM
Version: 10
Row_format: Fixed
Rows: 0
Avg_row_length: 0
Data_length: 0
Max_data_length: 433752939111120895
Index_length: 1024
Data_free: 0
Auto_increment: NULL
Create_time: 2013-08-24 01:42:15
Update_time: 2013-08-24 01:42:15
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment: MySQL Foreign Servers table
1 row in set (0.00 sec)

A table for logging slow queries uses the CSV storage engine –


mysql> show table status like 'slow_log' \G
*************************** 1. row ***************************
Name: slow_log
Engine: CSV
Version: 10
Row_format: Dynamic
Rows: 2
Avg_row_length: 0
Data_length: 0
Max_data_length: 0
Index_length: 0
Data_free: 0
Auto_increment: NULL
Create_time: NULL
Update_time: NULL
Check_time: NULL
Collation: utf8_general_ci
Checksum: NULL
Create_options:
Comment: Slow log
1 row in set (0.00 sec)
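Alternatively, the engine for many tables at once can be read from information_schema. A sketch, assuming a running MySQL server and valid login credentials (adjust -u/-p for your setup):

```shell
# Sketch: list every table in the mysql schema with its storage engine
# (assumes a reachable server and credentials; adjust -u/-p as needed)
mysql -u root -p -e \
  "SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'mysql';"
```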

 

Features of some of the storage engines –

  • InnoDB: a transaction-safe (ACID-compliant) storage engine for MySQL with commit, rollback, and crash-recovery capabilities to protect user data. 
  • MyISAM: These tables have a small footprint. Table-level locking limits the performance in read/write workloads, so it is often used in read-only or read-mostly workloads in Web and data warehousing configurations.
  • Memory: Stores all data in RAM, for fast access in environments that require quick lookups of non-critical data.
  • CSV: Its tables are really text files with comma-separated values. CSV tables let you import or dump data in CSV format, to exchange data with scripts and applications that read and write that same format.
  • Archive: These compact, unindexed tables are intended for storing and retrieving large amounts of seldom-referenced historical, archived, or security audit information.
  • Blackhole: The Blackhole storage engine accepts but does not store data, similar to the Unix /dev/null device. Queries always return an empty set.

 

References –

https://dev.mysql.com/doc/refman/8.0/en/storage-engines.html

GCP NEXT 2018

Google Cloud Platform 2018 conference


Google holds an annual cloud conference; in 2018 it will be held at the Moscone Center in San Francisco from July 24th to 26th.

You can view the conference details, calendar and registration information here.

Google generally refers to the conference as NEXT, and this year’s description reads – “Next ’18 is a three day global exhibition of inspiration, innovation, and education where we learn from one another how the cloud can transform how we work and power everyone’s successes.” The event has several hands-on sessions; the main session themes are –

  • Application development
  • Collaboration and Productivity
  • Data Analytics
  • Infrastructure and Operations
  • IoT
  • Machine Learning and AI
  • Mobility and Devices
  • Security

I am posting the event calendar from the site here –

Monday, July 23

7 AM–6 PM   : Event Check In

9 AM–6 PM   : Bootcamps

5 PM–7 PM   : Women Techmakers Social

6 PM–8 PM  : Celebrate Diversity Reception

 

Tuesday, July 24

7 AM–6 PM   : Late Event Check In Available

9 AM–10:30 AM   : Keynote

10:30 AM–5:30 PM   : Expo, Google & Partner Showcase, Hands-on Labs, and Equality Lounge

11 AM–5:30 PM   : Office Hours & Meetups

11 AM–5:55 PM   : Spotlight & Breakout Sessions

11 AM–6 PM   : Certification Testing

 

Wednesday, July 25

7 AM–6 PM   : Late Event Check In Available

9 AM–10:30 AM   : Keynote

10:30 AM–5:30 PM   : Expo, Google & Partner Showcase, Hands-on Labs, and Equality Lounge

11 AM–5:30 PM   : Office Hours & Meetups

11 AM–5:55 PM   : Spotlight & Breakout Sessions

11 AM–6 PM   : Certification Testing

7 PM–10 PM   : Evening Event

 

Thursday, July 26

8 AM–4 PM   : Registration & Badge Pickup

9 AM–10:30 AM   : Keynote

9 AM–2:30 PM   : Expo, Google & Partner Showcase, and Hands-on Labs

9 AM–2:35 PM   : Spotlight & Breakout Sessions

9:30 AM–1:45 PM   : Office Hours & Meetups

10:30 AM–12:30 PM   : Equality Lounge

11 AM–6 PM   : Certification Testing

Friday, July 27

8 AM–5 PM   : Bootcamps

 

Link – https://cloud.withgoogle.com/next18/sf

Google joins AWS and Azure as leader in Gartner’s 2018 IaaS Magic Quadrant


After intensive investment in cloud computing, particularly geared towards enterprises, Google has finally joined Amazon (Amazon Web Services) and Microsoft (Azure) as a leader in Infrastructure as a Service (IaaS) in Gartner’s Magic Quadrant for 2018. GCP – Google Cloud Platform – is very intuitive to use and particularly popular among data scientists.

 

https://www.cloudcomputing-news.net/news/2018/may/29/gartners-2018-iaas-magic-quadrant-google-joins-leaders-zone-only-six-vendors-make-cut/

“Google has clambered into the leaders’ section of Gartner’s latest infrastructure as a service (IaaS) Magic Quadrant, while the wheat has been separated from the chaff.

The annual report concluded that the cloud IaaS market is now a three-horse race in the top right box, with the leaders’ zone not being an Amazon Web Services (AWS) and Microsoft-only area for the first time since 2013.  …

https://cloudplatform.googleblog.com/2018/05/Google-named-a-Leader-in-2018-Gartner-Infrastructure-as-a-Service-Magic-Quadrant.html

“We’re pleased to announce that Gartner recently named Google as a Leader in the 2018 Gartner Infrastructure as a Service Magic Quadrant.
With an increasing number of enterprises turning to the cloud to build and scale their businesses, research from organizations like Gartner can help you evaluate and compare cloud providers.

…”

 

Visit https://linuxfreelancer.com/getting-started-google-cloud-platform/ for links to get started with GCP.

Configure IP Aliases in Red Hat / CentOS


IP aliasing is a term for assigning multiple IP addresses to a single network interface. It is quite useful in shared web hosting, for instance, particularly if the domains have SSL certificates. You can set up each domain to resolve to a different IP address, even if they all share the same network interface.

You have to be root to perform these tasks.

1. Disable Network Manager


# service NetworkManager stop

# chkconfig NetworkManager off

2. Add IP alias from cli


# ip addr add 192.168.0.11/24 dev eth0 label eth0:1

# ip addr show eth0
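Should you need to back the change out, the matching delete form is (a sketch; requires root and assumes the eth0 interface name used above):

```shell
# Sketch: remove the alias added above (root required; interface name assumed)
ip addr del 192.168.0.11/24 dev eth0
ip addr show eth0
```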

3. Persistently add the alias

Create the file /etc/sysconfig/network-scripts/ifcfg-eth0:1

# cat /etc/sysconfig/network-scripts/ifcfg-eth0:1
DEVICE=eth0:1
IPADDR=192.168.0.11
PREFIX=24
ONPARENT=yes

4. Restart network service

# service network restart
# ip addr show eth0