Wednesday 23 February 2022

How to install ngrok on Linux

ngrok is a tool which provides a public URL for a localhost server that sits behind a router/NAT. It is useful for testing, e.g. exposing your local web server to the internet.

To install it on Ubuntu, go to the ngrok download page and download the archive. Extract its content (the ngrok executable) into /usr/local/bin:

$ sudo tar xvzf ~/Downloads/ngrok-stable-linux-amd64.tgz -C /usr/local/bin

Let's check that it's indeed in the desired location:

$ ls -la /usr/local/bin | grep ngrok
-rwxr-xr-x  1 root root 30053267 May  4  2021 ngrok

 Let's run it:
$ ngrok

   ngrok - tunnel local ports to public URLs and inspect traffic

    ngrok exposes local networked services behinds NATs and firewalls to the
    public internet over a secure tunnel. Share local websites, build/test
    webhook consumers and self-host personal services.
    Detailed help for each command is available with 'ngrok help <command>'.
    Open http://localhost:4040 for ngrok's web interface to inspect traffic.

    ngrok http 80                    # secure public URL for port 80 web server
    ngrok http -subdomain=baz 8080   # port 8080 available at
    ngrok http            # tunnel to host:port instead of localhost
    ngrok http https://localhost     # expose a local https server
    ngrok tcp 22                     # tunnel arbitrary TCP traffic to port 22
    ngrok tls 443  # TLS traffic for to port 443
    ngrok start foo bar baz          # start tunnels from the configuration file


  inconshreveable - <>

   authtoken    save authtoken to configuration file
   credits    prints author and licensing information
   http        start an HTTP tunnel
   start    start tunnels by name from the configuration file
   tcp        start a TCP tunnel
   tls        start a TLS tunnel
   update    update ngrok to the latest version
   version    print the version string
   help        Shows a list of commands or help for one command

We could have extracted ngrok into any directory, e.g. the default ~/Downloads/ngrok-stable-linux-amd64, but then we'd only be able to run it from that directory as:

~/Downloads/ngrok-stable-linux-amd64$ ./ngrok

To start using ngrok, we first need to authenticate our local ngrok agent. For this, we log in to our ngrok account and copy the authtoken:


Let's authenticate the ngrok agent now (with the copy-pasted authtoken):
$ ngrok authtoken 2pCTN...fwvjiUA2
Authtoken saved to configuration file: /home/bojan/.ngrok2/ngrok.yml

Authtoken can be reset any time in our ngrok account. We can check the current value by looking at this configuration file:

$ cat /home/bojan/.ngrok2/ngrok.yml
authtoken: 2pCTN...fwvjiUA2

Let's run it:

$ ngrok http 5000

To stop it, use CTRL-C.
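Once the tunnel is up, the agent's local inspection API on port 4040 (the same one behind the web interface mentioned in the help output) can be queried for the public URL. A minimal sketch, assuming ngrok is installed and authenticated; the port and the parsing helper are illustrative:

```shell
#!/usr/bin/env bash

# Pull the first "public_url" field out of the agent's /api/tunnels JSON.
extract_public_url() {
  grep -o '"public_url":"[^"]*"' | head -n 1 | cut -d '"' -f 4
}

if command -v ngrok >/dev/null; then
  ngrok http 5000 > /dev/null &   # tunnel localhost:5000 in the background
  sleep 2                         # give the agent time to establish the tunnel
  curl -s http://localhost:4040/api/tunnels | extract_public_url
fi
```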

Thursday 10 February 2022

Introduction to AWS Lambda

AWS Lambda falls under the Compute category of AWS services (alongside EC2, EBS, Elastic Load Balancing).
We only need to provide the code that needs to run; the servers are provided automatically, so we don't need to provision or manage them.

The AWS Lambda platform provides automatic scaling based on the workload, in response to each trigger it receives.

We are charged only for the time our application is actually running, metered at 1 ms granularity.
Lambda can run almost any type of application or backend service. It supports many programming languages, such as C++, C#, Java, JavaScript, Python, Go etc.
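As a back-of-the-envelope illustration of per-millisecond billing, duration cost is roughly invocations × seconds × memory in GB × rate. The $0.0000166667 per GB-second rate below is an assumed example figure, not an official price:

```shell
# Estimate Lambda duration cost; the rate is an assumption for illustration.
lambda_cost() {  # usage: lambda_cost <invocations> <ms_per_invocation> <memory_mb>
  awk -v n="$1" -v ms="$2" -v mb="$3" \
    'BEGIN { printf "%.4f\n", n * (ms / 1000) * (mb / 1024) * 0.0000166667 }'
}

lambda_cost 1000000 120 512   # 1M invocations, 120 ms each, 512 MB -> 1.0000
```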

It runs code (functions) in response to events received from other applications or AWS services. These events are effectively requests to AWS Lambda. Requests are handled by containers which run the code written to serve them. As the number of requests grows, so does the number of containers spawned and assigned to this lambda; as it decreases, fewer containers are used.

Use Case Example: Processing images uploaded to S3
  • image is uploaded to S3 bucket
  • this triggers AWS Lambda 
  • lambda function processes the image and formats it into a thumbnail adjusted for the device it will be shown on (mobile, tablet, PC)

Use Case Example: Extracting trending social media hashtags

  • social media data e.g. hashtags is added to Amazon Kinesis (streaming data processing platform)
  • this triggers AWS Lambda 
  • data is stored in DBs for further processing

Use Case Example: A near real-time data backup system

  • the goal is to save a copy of a document in a temporary storage system as soon as it's uploaded to the server
  • create two S3 buckets: one where data is uploaded and another one for storing its copy
  • to allow these buckets to talk to each other we need to set up Identity and Access Management (IAM) roles and policies
  • the code which copies data between buckets will be in Lambda function
    • this Lambda function is triggered each time a document is uploaded to first S3 bucket
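The trigger wiring in the steps above can be sketched with the AWS CLI. All names and ARNs below (copy-to-backup, upload-bucket, the account ID) are hypothetical, and the sketch assumes the CLI is installed and configured:

```shell
# Hypothetical bucket/function names and ARNs throughout.
notification_cfg='{
  "LambdaFunctionConfigurations": [{
    "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:copy-to-backup",
    "Events": ["s3:ObjectCreated:*"]
  }]
}'

if command -v aws >/dev/null; then
  # First allow S3 to invoke the Lambda function...
  aws lambda add-permission \
    --function-name copy-to-backup \
    --statement-id s3-invoke \
    --action lambda:InvokeFunction \
    --principal s3.amazonaws.com \
    --source-arn arn:aws:s3:::upload-bucket

  # ...then tell the upload bucket to fire it on every object creation.
  aws s3api put-bucket-notification-configuration \
    --bucket upload-bucket \
    --notification-configuration "$notification_cfg"
fi
```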


/tmp is the only writable directory in AWS Lambda.

How does it differ from EC2?

How to decide when to use EC2 and when Lambda?


Tuesday 8 February 2022

Configuration management for MySQL client applications

If you run any MySQL client application (mysql, mysqldump, ...) and pass the password via the --password command line argument, the application will show a warning:
$ docker run -i mysql /usr/bin/mysql --host= --port=3306 --user=root --password=root
mysql: [Warning] Using a password on the command line interface can be insecure.

It's not good practice to pass the password on the command line, as it is saved in the ~/.bash_history file and is visible to other users via the process list.
The preferred way is to store MySQL DB configuration(s) (including credentials) in a file and then make MySQL clients read it (via --defaults-file or --defaults-extra-file command line argument).

This file can be created with the mysql_config_editor tool or manually. If created by the tool, it will be named ~/.mylogin.cnf and its contents are obfuscated.
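A sketch of the mysql_config_editor flow (host and user values are illustrative; the MYSQL_TEST_LOGIN_FILE environment variable redirects the login file so the demo doesn't touch the real ~/.mylogin.cnf):

```shell
# Store credentials under a named "login path" instead of passing them on the
# command line. Add --password to the set command to be prompted for (and
# store) the password as well.
export MYSQL_TEST_LOGIN_FILE=/tmp/demo_mylogin.cnf
rm -f "$MYSQL_TEST_LOGIN_FILE"

if command -v mysql_config_editor >/dev/null; then
  mysql_config_editor set --login-path=local --host=127.0.0.1 --user=root
  mysql_config_editor print --all   # entries are listed with passwords masked
  # Clients then pick the credentials up with: mysql --login-path=local
fi
```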

Alternatively, it is possible to create and populate ~/.my.cnf file (or /path/to/arbitrary_name.cnf) manually and set desired read/write permissions on it e.g. to make it readable to me only: 
$ chmod 0600 ~/.my.cnf

This is the setup that worked for me:

We can create a configuration file for each database. E.g.:

$ cat ~/mysql/configs/my_db.cnf

We can then mount this file into the Docker container and specify it as --defaults-extra-file for the MySQL client (I didn't set any special read permissions, but the Docker user must be able to read it):

$ docker run \
-i \
-v ~/mysql/configs/my_db.cnf:/etc/mysql/my_db.cnf \
mysql \
/usr/bin/mysqldump \
--defaults-extra-file=/etc/mysql/my_db.cnf  \
--host= \
--port=3306 \
my_schema my_table_01 my_table_02 > dump_$(date +%Y%m%d_%H%M%S).sql
Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.




How to export tables from MySQL Database

To export tables from a DB we can use the mysqldump tool, which is available on hosts with MySQL installed. This means we can also run it in the MySQL Docker container:

$ docker run mysql /usr/bin/mysqldump
Usage: mysqldump [OPTIONS] database [tables]
OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR mysqldump [OPTIONS] --all-databases [OPTIONS]
For more options, use mysqldump --help
To check mysqldump version:
$ docker run mysql /usr/bin/mysqldump --version
mysqldump  Ver 8.0.27 for Linux on x86_64 (MySQL Community Server - GPL)


Help Output

$ docker run mysql /usr/bin/mysqldump --help
mysqldump  Ver 8.0.27 for Linux on x86_64 (MySQL Community Server - GPL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Dumping structure and contents of MySQL databases and tables.
Usage: mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf
The following groups are read: mysqldump client
The following options may be given as the first argument:
--print-defaults        Print the program argument list and exit.
--no-defaults           Don't read default options from any option file,
                        except for login file.
--defaults-file=#       Only read default options from the given file #.
--defaults-extra-file=# Read this file after the global files are read.
                        Also read groups with concat(group, suffix)
--login-path=#          Read this path from the login file.
  -A, --all-databases Dump all the databases. This will be same as --databases
                      with all databases selected.
  -Y, --all-tablespaces
                      Dump all the tablespaces.
  -y, --no-tablespaces
                      Do not dump any tablespace information.
  --add-drop-database Add a DROP DATABASE before each create.
  --add-drop-table    Add a DROP TABLE before each create.
                      (Defaults to on; use --skip-add-drop-table to disable.)
  --add-drop-trigger  Add a DROP TRIGGER before each create.
  --add-locks         Add locks around INSERT statements.
                      (Defaults to on; use --skip-add-locks to disable.)
  --allow-keywords    Allow creation of column names that are keywords.
  --apply-slave-statements
                      Adds 'STOP SLAVE' prior to 'CHANGE MASTER' and 'START
                      SLAVE' to bottom of dump.
                      This option is deprecated and will be removed in a future
                      version. Use apply-replica-statements instead.
  --bind-address=name IP address to bind to.
  --character-sets-dir=name
                      Directory for character set files.
  --column-statistics Add an ANALYZE TABLE statement to regenerate any existing
                      column statistics.
                      (Defaults to on; use --skip-column-statistics to disable.)
  -i, --comments      Write additional information.
                      (Defaults to on; use --skip-comments to disable.)
  --compatible=name   Change the dump to be compatible with a given mode. By
                      default tables are dumped in a format optimized for
                      MySQL. The only legal mode is ANSI.Note: Requires MySQL
                      server version 4.1.0 or higher. This option is ignored
                      with earlier server versions.
  --compact           Give less verbose output (useful for debugging). Disables
                      structure comments and header/footer constructs.  Enables
                      options --skip-add-drop-table --skip-add-locks
                      --skip-comments --skip-disable-keys --skip-set-charset.
  -c, --complete-insert
                      Use complete insert statements.
  -C, --compress      Use compression in server/client protocol.
  -a, --create-options
                      Include all MySQL specific create options.
                      (Defaults to on; use --skip-create-options to disable.)
  -B, --databases     Dump several databases. Note the difference in usage; in
                      this case no tables are given. All name arguments are
                      regarded as database names. 'USE db_name;' will be
                      included in the output.
  -#, --debug[=#]     This is a non-debug version. Catch this and exit.
  --debug-check       This is a non-debug version. Catch this and exit.
  --debug-info        This is a non-debug version. Catch this and exit.
  --default-character-set=name
                      Set the default character set.
  --delete-source-logs
                      Rotate logs before the backup, equivalent to FLUSH LOGS,
                      and purge all old binary logs after the backup,
                      equivalent to PURGE LOGS. This automatically enables
                      --source-data.
  --delete-master-logs
                      This option is deprecated and will be removed in a future
                      version. Use delete-source-logs instead.
  -K, --disable-keys  '/*!40000 ALTER TABLE tb_name DISABLE KEYS */; and
                      '/*!40000 ALTER TABLE tb_name ENABLE KEYS */; will be put
                      in the output.
                      (Defaults to on; use --skip-disable-keys to disable.)
  --dump-replica[=#]  This causes the binary log position and filename of the
                      source to be appended to the dumped data output. Setting
                      the value to 1, will print it as a CHANGE MASTER command
                      in the dumped data output; if equal to 2, that command
                      will be prefixed with a comment symbol. This option will
                      turn --lock-all-tables on, unless --single-transaction is
                      specified too (in which case a global read lock is only
                      taken a short time at the beginning of the dump - don't
                      forget to read about --single-transaction below). In all
                      cases any action on logs will happen at the exact moment
                      of the dump.Option automatically turns --lock-tables off.
  --dump-slave[=#]    This option is deprecated and will be removed in a future
                      version. Use dump-replica instead.
  -E, --events        Dump events.
  -e, --extended-insert
                      Use multiple-row INSERT syntax that include several
                      VALUES lists.
                      (Defaults to on; use --skip-extended-insert to disable.)
  --fields-terminated-by=name
                      Fields in the output file are terminated by the given
                      string.
  --fields-enclosed-by=name
                      Fields in the output file are enclosed by the given
                      character.
  --fields-optionally-enclosed-by=name
                      Fields in the output file are optionally enclosed by the
                      given character.
  --fields-escaped-by=name
                      Fields in the output file are escaped by the given
                      character.
  -F, --flush-logs    Flush logs file in server before starting dump. Note that
                      if you dump many databases at once (using the option
                      --databases= or --all-databases), the logs will be
                      flushed for each database dumped. The exception is when
                      using --lock-all-tables or --source-data: in this case
                      the logs will be flushed only once, corresponding to the
                      moment all tables are locked. So if you want your dump
                      and the log flush to happen at the same exact moment you
                      should use --lock-all-tables or --source-data with
                      --flush-logs.
  --flush-privileges  Emit a FLUSH PRIVILEGES statement after dumping the mysql
                      database.  This option should be used any time the dump
                      contains the mysql database and any other database that
                      depends on the data in the mysql database for proper
                      restoration.
  -f, --force         Continue even if we get an SQL error.
  -?, --help          Display this help message and exit.
  --hex-blob          Dump binary strings (BINARY, VARBINARY, BLOB) in
                      hexadecimal format.
  -h, --host=name     Connect to host.
  --ignore-error=name A comma-separated list of error numbers to be ignored if
                      encountered during dump.
  --ignore-table=name Do not dump the specified table. To specify more than one
                      table to ignore, use the directive multiple times, once
                      for each table.  Each table must be specified with both
                      database and table names, e.g.,
                      --ignore-table=database.table.
  --include-source-host-port
                      Adds 'MASTER_HOST=<host>, MASTER_PORT=<port>' to 'CHANGE
                      MASTER TO..' in dump produced with --dump-replica.
  --include-master-host-port
                      This option is deprecated and will be removed in a future
                      version. Use include-source-host-port instead.
  --insert-ignore     Insert rows with INSERT IGNORE.
  --lines-terminated-by=name
                      Lines in the output file are terminated by the given
                      string.
  -x, --lock-all-tables
                      Locks all tables across all databases. This is achieved
                      by taking a global read lock for the duration of the
                      whole dump. Automatically turns --single-transaction and
                      --lock-tables off.
  -l, --lock-tables   Lock all tables for read.
                      (Defaults to on; use --skip-lock-tables to disable.)
  --log-error=name    Append warnings and errors to given file.
  --source-data[=#]   This causes the binary log position and filename to be
                      appended to the output. If equal to 1, will print it as a
                      CHANGE MASTER command; if equal to 2, that command will
                      be prefixed with a comment symbol. This option will turn
                      --lock-all-tables on, unless --single-transaction is
                      specified too (in which case a global read lock is only
                      taken a short time at the beginning of the dump; don't
                      forget to read about --single-transaction below). In all
                      cases, any action on logs will happen at the exact moment
                      of the dump. Option automatically turns --lock-tables
                      off.
  --master-data[=#]   This option is deprecated and will be removed in a future
                      version. Use source-data instead.
  --max-allowed-packet=#
                      The maximum packet length to send to or receive from
                      server.
  --net-buffer-length=#
                      The buffer size for TCP/IP and socket communication.
  --no-autocommit     Wrap tables with autocommit/commit statements.
  -n, --no-create-db  Suppress the CREATE DATABASE ... IF EXISTS statement that
                      normally is output for each dumped database if
                      --all-databases or --databases is given.
  -t, --no-create-info
                      Don't write table creation info.
  -d, --no-data       No row information.
  -N, --no-set-names  Same as --skip-set-charset.
  --opt               Same as --add-drop-table, --add-locks, --create-options,
                      --quick, --extended-insert, --lock-tables, --set-charset,
                      and --disable-keys. Enabled by default, disable with
                      --skip-opt.
  --order-by-primary  Sorts each table's rows by primary key, or first unique
                      key, if such a key exists.  Useful when dumping a MyISAM
                      table to be loaded into an InnoDB table, but will make
                      the dump itself take considerably longer.
  -p, --password[=name]
                      Password to use when connecting to server. If password is
                      not given it's asked from the tty.

  -,, --password1[=name]
                      Password for first factor authentication plugin.
  -,, --password2[=name]
                      Password for second factor authentication plugin.
  -,, --password3[=name]
                      Password for third factor authentication plugin.
  -P, --port=#        Port number to use for connection.
  --protocol=name     The protocol to use for connection (tcp, socket, pipe,
                      memory).
  -q, --quick         Don't buffer query, dump directly to stdout.
                      (Defaults to on; use --skip-quick to disable.)
  -Q, --quote-names   Quote table and column names with backticks (`).
                      (Defaults to on; use --skip-quote-names to disable.)
  --replace           Use REPLACE INTO instead of INSERT INTO.
  -r, --result-file=name
                      Direct output to a given file. This option should be used
                      in systems (e.g., DOS, Windows) that use carriage-return
                      linefeed pairs (\r\n) to separate text lines. This option
                      ensures that only a single newline is used.
  -R, --routines      Dump stored routines (functions and procedures).
  --set-charset       Add 'SET NAMES default_character_set' to the output.
                      (Defaults to on; use --skip-set-charset to disable.)
  --set-gtid-purged[=name]
                      Add 'SET @@GLOBAL.GTID_PURGED' to the output. Possible
                      values for this option are ON, COMMENTED, OFF and AUTO.
                      If ON is used and GTIDs are not enabled on the server, an
                      error is generated. If COMMENTED is used, 'SET
                      @@GLOBAL.GTID_PURGED' is added as a comment. If OFF is
                      used, this option does nothing. If AUTO is used and GTIDs
                      are enabled on the server, 'SET @@GLOBAL.GTID_PURGED' is
                      added to the output. If GTIDs are disabled, AUTO does
                      nothing. If no value is supplied then the default (AUTO)
                      value will be considered.
  --single-transaction
                      Creates a consistent snapshot by dumping all tables in a
                      single transaction. Works ONLY for tables stored in
                      storage engines which support multiversioning (currently
                      only InnoDB does); the dump is NOT guaranteed to be
                      consistent for other storage engines. While a
                      --single-transaction dump is in process, to ensure a
                      valid dump file (correct table contents and binary log
                      position), no other connection should use the following
                      statements: ALTER TABLE, DROP TABLE, RENAME TABLE,
                      TRUNCATE TABLE, as consistent snapshot is not isolated
                      from them. Option automatically turns off --lock-tables.
  --dump-date         Put a dump date to the end of the output.
                      (Defaults to on; use --skip-dump-date to disable.)
  --skip-opt          Disable --opt. Disables --add-drop-table, --add-locks,
                      --create-options, --quick, --extended-insert,
                      --lock-tables, --set-charset, and --disable-keys.
  -S, --socket=name   The socket file to use for connection.
  --server-public-key-path=name
                      File path to the server public RSA key in PEM format.
  --get-server-public-key
                      Get server public key
  --ssl-mode=name     SSL connection mode.
  --ssl-ca=name       CA file in PEM format.
  --ssl-capath=name   CA directory.
  --ssl-cert=name     X509 cert in PEM format.
  --ssl-cipher=name   SSL cipher to use.
  --ssl-key=name      X509 key in PEM format.
  --ssl-crl=name      Certificate revocation list.
  --ssl-crlpath=name  Certificate revocation list path.
  --tls-version=name  TLS version to use, permitted values are: TLSv1, TLSv1.1,
                      TLSv1.2, TLSv1.3
  --ssl-fips-mode=name
                      SSL FIPS mode (applies only for OpenSSL); permitted
                      values are: OFF, ON, STRICT
  --tls-ciphersuites=name
                      TLS v1.3 cipher to use.
  -T, --tab=name      Create tab-separated textfile for each table to given
                      path. (Create .sql and .txt files.) NOTE: This only works
                      if mysqldump is run on the same machine as the mysqld
                      server.
  --tables            Overrides option --databases (-B).
  --triggers          Dump triggers for each dumped table.
                      (Defaults to on; use --skip-triggers to disable.)
  --tz-utc            SET TIME_ZONE='+00:00' at top of dump to allow dumping of
                      TIMESTAMP data when a server has data in different time
                      zones or data is being moved between servers with
                      different time zones.
                      (Defaults to on; use --skip-tz-utc to disable.)
  -u, --user=name     User for login if not current user.
  -v, --verbose       Print info about the various stages.
  -V, --version       Output version information and exit.
  -w, --where=name    Dump only selected records. Quotes are mandatory.
  -X, --xml           Dump a database as well formed XML.
  --plugin-dir=name   Directory for client-side plugins.
  --default-auth=name Default authentication client-side plugin to use.
  --enable-cleartext-plugin
                      Enable/disable the clear text authentication plugin.
  -M, --network-timeout
                      Allows huge tables to be dumped by setting
                      max_allowed_packet to maximum value and
                      net_read_timeout/net_write_timeout to large value.
                      (Defaults to on; use --skip-network-timeout to disable.)
  --show-create-table-skip-secondary-engine
                      Controls whether SECONDARY_ENGINE CREATE TABLE clause
                      should be dumped or not. No effect on older servers that
                      do not support the server side option.
  --compression-algorithms=name
                      Use compression algorithm in server/client protocol.
                      Valid values are any combination of
                      'zstd','zlib','uncompressed'.
  --zstd-compression-level=#
                      Use this compression level in the client/server protocol,
                      in case --compression-algorithms=zstd. Valid range is
                      between 1 and 22, inclusive. Default is 3.

Variables (--variable-name=value)
and boolean options {FALSE|TRUE}        Value (after reading options)
--------------------------------------- ----------------------------------
all-databases                           FALSE
all-tablespaces                         FALSE
no-tablespaces                          FALSE
add-drop-database                       FALSE
add-drop-table                          TRUE
add-drop-trigger                        FALSE
add-locks                               TRUE
allow-keywords                          FALSE
apply-replica-statements                FALSE
apply-slave-statements                  FALSE
bind-address                            (No default value)
character-sets-dir                      (No default value)
column-statistics                       TRUE
comments                                TRUE
compatible                              (No default value)
compact                                 FALSE
complete-insert                         FALSE
compress                                FALSE
create-options                          TRUE
databases                               FALSE
default-character-set                   utf8mb4
delete-source-logs                      FALSE
delete-master-logs                      FALSE
disable-keys                            TRUE
dump-replica                            0
dump-slave                              0
events                                  FALSE
extended-insert                         TRUE
fields-terminated-by                    (No default value)
fields-enclosed-by                      (No default value)
fields-optionally-enclosed-by           (No default value)
fields-escaped-by                       (No default value)
flush-logs                              FALSE
flush-privileges                        FALSE
force                                   FALSE
hex-blob                                FALSE
host                                    (No default value)
ignore-error                            (No default value)
include-source-host-port                FALSE
include-master-host-port                FALSE
insert-ignore                           FALSE
lines-terminated-by                     (No default value)
lock-all-tables                         FALSE
lock-tables                             TRUE
log-error                               (No default value)
source-data                             0
master-data                             0
max-allowed-packet                      25165824
net-buffer-length                       1046528
no-autocommit                           FALSE
no-create-db                            FALSE
no-create-info                          FALSE
no-data                                 FALSE
order-by-primary                        FALSE
port                                    0
quick                                   TRUE
quote-names                             TRUE
replace                                 FALSE
routines                                FALSE
set-charset                             TRUE
single-transaction                      FALSE
dump-date                               TRUE
socket                                  (No default value)
server-public-key-path                  (No default value)
get-server-public-key                   FALSE
ssl-ca                                  (No default value)
ssl-capath                              (No default value)
ssl-cert                                (No default value)
ssl-cipher                              (No default value)
ssl-key                                 (No default value)
ssl-crl                                 (No default value)
ssl-crlpath                             (No default value)
tls-version                             (No default value)
tls-ciphersuites                        (No default value)
tab                                     (No default value)
triggers                                TRUE
tz-utc                                  TRUE
user                                    (No default value)
verbose                                 FALSE
where                                   (No default value)
plugin-dir                              (No default value)
default-auth                            (No default value)
enable-cleartext-plugin                 FALSE
network-timeout                         TRUE
show-create-table-skip-secondary-engine FALSE
compression-algorithms                  (No default value)
zstd-compression-level                  3

This bash script snippet contains an example of how to run mysqldump in a Docker container, make it connect to a DB, and export tables my_table_01 and my_table_02 from schema my_db:
timestamp=$(date +%Y%m%d_%H%M%S)
dump_file_name="dump_${timestamp}.sql"

docker run \
-i \
mysql \
/usr/bin/mysqldump \
--host=$db_endpoint_address \
--port=$db_endpoint_port \
--user=$db_user \
--password=$db_pass \
my_db my_table_01 my_table_02 > $dump_file_name

The command above redirects the entire mysqldump output into a file, including warning and error messages. Here we pass the password to the MySQL client via the command line, which is not secure, so mysqldump outputs a warning which ends up at the beginning of the dump file:

$ head -20 dump_20220208_103008.sql
mysqldump: [Warning] Using a password on the command line interface can be insecure.
-- MySQL dump 10.13  Distrib 8.0.27, for Linux (x86_64)
-- Host:    Database: my_db
-- ------------------------------------------------------
-- Server version       8.0.17

/*!50503 SET NAMES utf8mb4 */;

If we try to import this file into another instance of MySQL, DB server will reset the connection:
$ cat dump_20220208_103008.sql | docker run -i mysql /usr/bin/mysql --host= --port=3306 --user=root --password=root
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'mysqldump: [Warning] Using a password on the command line interface can be insec' at line 1
read unix @->/run/docker.sock: read: connection reset by peer
To prevent this, we can manually remove the warning line from the dump SQL file or, better, use the recommended way of passing credentials (via a configuration file, as described above).

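For the removal route, a quick sed sketch, demonstrated on a miniature stand-in dump file (the path and contents are illustrative):

```shell
# Build a miniature stand-in dump file whose first line is the stray warning.
printf '%s\n' \
  'mysqldump: [Warning] Using a password on the command line interface can be insecure.' \
  '-- MySQL dump 10.13  Distrib 8.0.27, for Linux (x86_64)' \
  'CREATE TABLE t (id INT);' > /tmp/demo_dump.sql

# Delete the warning line in place (GNU sed), leaving valid SQL on top.
sed -i '/^mysqldump: \[Warning\]/d' /tmp/demo_dump.sql

head -n 1 /tmp/demo_dump.sql
```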
Dump files contain SQL commands for creating views. This can be easily verified by inspecting the content of the dump file:

$ cat Dump20220208-local-db.sql | grep VIEW
/*!50001 DROP VIEW IF EXISTS `my_view_01`*/;
/*!50001 CREATE VIEW `my_view_01` AS SELECT
/*!50001 DROP VIEW IF EXISTS `my_view_02`*/;
/*!50001 CREATE VIEW `my_view_02` AS SELEC

Locally Installed mysqldump

To check if mysqldump is installed locally:
$ mysqldump
Usage: mysqldump [OPTIONS] database [tables]
OR     mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR     mysqldump [OPTIONS] --all-databases [OPTIONS]
For more options, use mysqldump --help
To find its path:
$ which mysqldump

Instead of passing credentials via the command line (which can be insecure, as they end up in the command line history) we can create a (temporary) file in the home directory:

$ touch ~/.my.cnf
$ ls -la ~/.my.cnf
-rw-rw-r-- 1 bojan bojan 0 Sep 27 11:23 /home/bojan/.my.cnf

...and leave the credentials there in the following form:
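The file follows MySQL's standard option-file (INI) format; the [client] group is read by both mysql and mysqldump (the values below are placeholders):

```
[client]
user=my_db_user
password=my_db_pass
```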

To export all databases:

$ mysqldump \
--port=3306 \
--all-databases \
> my-db-dump.sql

Warning: A partial dump from a server that has GTIDs will by default include the GTIDs of all transactions, even those that changed suppressed parts of the database. If you don't want to restore GTIDs, pass --set-gtid-purged=OFF. To make a complete dump, pass --all-databases --triggers --routines --events.
The .cnf file can have an arbitrary name and be saved at an arbitrary location, in which case we need to specify its path via a command-line argument (typically --defaults-extra-file=/path/to/file, which must be the first option on the command line):


Importing dump file

To import databases (schemas) into an empty database:
$ mysql \
--user=my_local_db_user \
--password=my_local_db_pass \
--host= \
--port=3306 \
< my-db-dump.sql
mysql: [Warning] Using a password on the command line interface can be insecure.

If the import fails with this error:
ERROR 1465 (HY000) at line 1125: Triggers can not be created on system tables

...then delete the dump file and run the export command again with an additional argument that excludes triggers (mysqldump's --skip-triggers, for example, omits all triggers from the dump).



Monday 7 February 2022

read (Unix shell command) Manual

read (Wikipedia)


$ read --help
read: read [-ers] [-a array] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name ...]
    Read a line from the standard input and split it into fields.
    Reads a single line from the standard input, or from file descriptor FD
    if the -u option is supplied.  The line is split into fields as with word
    splitting, and the first word is assigned to the first NAME, the second
    word to the second NAME, and so on, with any leftover words assigned to
    the last NAME.  Only the characters found in $IFS are recognized as word
    delimiters.
    If no NAMEs are supplied, the line read is stored in the REPLY variable.
      -a array  assign the words read to sequential indices of the array
                variable ARRAY, starting at zero
      -d delim  continue until the first character of DELIM is read, rather
                than newline
      -e        use Readline to obtain the line
      -i text   use TEXT as the initial text for Readline
      -n nchars return after reading NCHARS characters rather than waiting
                for a newline, but honor a delimiter if fewer than
                NCHARS characters are read before the delimiter
      -N nchars return only after reading exactly NCHARS characters, unless
                EOF is encountered or read times out, ignoring any
                delimiter
      -p prompt output the string PROMPT without a trailing newline before
                attempting to read
      -r        do not allow backslashes to escape any characters
      -s        do not echo input coming from a terminal
      -t timeout        time out and return failure if a complete line of
                input is not read within TIMEOUT seconds.  The value of the
                TMOUT variable is the default timeout.  TIMEOUT may be a
                fractional number.  If TIMEOUT is 0, read returns
                immediately, without trying to read any data, returning
                success only if input is available on the specified
                file descriptor.  The exit status is greater than 128
                if the timeout is exceeded
      -u fd     read from file descriptor FD instead of the standard input
    Exit Status:
    The return code is zero, unless end-of-file is encountered, read times out
    (in which case it's greater than 128), a variable assignment error occurs,
    or an invalid file descriptor is supplied as the argument to -u.


If some command returns a set of strings of interest in a tab-separated line, we can use read to assign them as elements of an array variable:

$ read -a vars <<< "$(...command...)"
$ echo "${vars[@]}"

If some command returns 2 strings in a single line and we want to assign them to two variables:
$ read var1 var2 <<< "$(...command...)"
$ echo 'var1: ' $var1
$ echo 'var2: ' $var2
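Both patterns can be tried without a real command; a runnable sketch where printf stands in for the hypothetical command:

```shell
# Tab-separated words assigned to sequential indices of an array
read -r -a vars <<< "$(printf 'alpha\tbeta\tgamma')"
echo "${vars[1]}"   # prints: beta

# Two whitespace-separated words assigned to two named variables
read -r var1 var2 <<< "one two"
echo "var1: $var1"  # prints: var1: one
echo "var2: $var2"  # prints: var2: two
```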

jq (JSON Query) Tool Manual

jq (JSON Query) is a command-line JSON processing tool.



Help Page

$ jq --help
jq - commandline JSON processor [version 1.6]

Usage:    jq [options] <jq filter> [file...]
    jq [options] --args <jq filter> [strings...]
    jq [options] --jsonargs <jq filter> [JSON_TEXTS...]

jq is a tool for processing JSON inputs, applying the given filter to
its JSON text inputs and producing the filter's results as JSON on
standard output.

The simplest filter is ., which copies jq's input to its output
unmodified (except for formatting, but note that IEEE754 is used
for number representation internally, with all that that implies).

For more advanced filters see the jq(1) manpage ("man jq")


    Example:

    $ echo '{"foo": 0}' | jq .
    {
        "foo": 0
    }

Some of the options include:
  -c               compact instead of pretty-printed output;
  -n               use `null` as the single input value;
  -e               set the exit status code based on the output;
  -s               read (slurp) all inputs into an array; apply filter to it;
  -r               output raw strings, not JSON texts;
  -R               read raw strings, not JSON texts;
  -C               colorize JSON;
  -M               monochrome (don't colorize JSON);
  -S               sort keys of objects on output;
  --tab            use tabs for indentation;
  --arg a v        set variable $a to value <v>;
  --argjson a v    set variable $a to JSON value <v>;
  --slurpfile a f  set variable $a to an array of JSON texts read from <f>;
  --rawfile a f    set variable $a to a string consisting of the contents of <f>;
  --args           remaining arguments are string arguments, not files;
  --jsonargs       remaining arguments are JSON arguments, not files;
  --               terminates argument processing;

Named arguments are also available as $ARGS.named[], while
positional arguments are available as $ARGS.positional[].

See the manpage for more options.

Echoing Input (Verifying JSON string)

Let's use the following JSON (example taken from an external source):

$ myjson=''\
'{"web-app": {'\
' "servlet": ['\
'   {'\
'     "servlet-name": "cofaxCDS",'\
'     "servlet-class": "org.cofax.cds.CDSServlet",'\
'     "init-param": {'\
'       "configGlossary:installationAt": "Philadelphia, PA",'\
'       "configGlossary:adminEmail": "",'\
'       "dataStoreConnUsageLimit": 100,'\
'       "dataStoreLogLevel": "debug",'\
'       "maxUrlLength": 500}},'\
'   {'\
'     "servlet-name": "cofaxEmail",'\
'     "servlet-class": "org.cofax.cds.EmailServlet",'\
'     "init-param": {'\
'     "mailHost": "mail1",'\
'     "mailHostOverride": "mail2"}},'\
'   {'\
'     "servlet-name": "cofaxAdmin",'\
'     "servlet-class": "org.cofax.cds.AdminServlet"},'\
'   {'\
'     "servlet-name": "fileServlet",'\
'     "servlet-class": "org.cofax.cds.FileServlet"},'\
'   {'\
'     "servlet-name": "cofaxTools",'\
'     "servlet-class": "org.cofax.cms.CofaxToolsServlet",'\
'     "init-param": {'\
'       "templatePath": "toolstemplates/",'\
'       "log": 1,'\
'       "adminGroupID": 4,'\
'       "betaServer": true}}],'\
' "servlet-mapping": {'\
'   "cofaxCDS": "/",'\
'   "cofaxEmail": "/cofaxutil/aemail/*",'\
'   "cofaxAdmin": "/admin/*",'\
'   "fileServlet": "/static/*",'\
'   "cofaxTools": "/tools/*"},'\
' "taglib": {'\
'   "taglib-uri": "cofax.tld",'\
'   "taglib-location": "/WEB-INF/tlds/cofax.tld"}}}'

To verify that JSON is well-formatted we can use jq . which echoes JSON input to output (as pretty-printed JSON):

$ echo $myjson | jq .
{
  "web-app": {
    "servlet": [
      {
        "servlet-name": "cofaxCDS",
        "servlet-class": "org.cofax.cds.CDSServlet",
        "init-param": {
          "configGlossary:installationAt": "Philadelphia, PA",
          "configGlossary:adminEmail": "",
          "dataStoreConnUsageLimit": 100,
          "dataStoreLogLevel": "debug",
          "maxUrlLength": 500
        }
      },
      {
        "servlet-name": "cofaxEmail",
        "servlet-class": "org.cofax.cds.EmailServlet",
        "init-param": {
          "mailHost": "mail1",
          "mailHostOverride": "mail2"
        }
      },
      {
        "servlet-name": "cofaxAdmin",
        "servlet-class": "org.cofax.cds.AdminServlet"
      },
      {
        "servlet-name": "fileServlet",
        "servlet-class": "org.cofax.cds.FileServlet"
      },
      {
        "servlet-name": "cofaxTools",
        "servlet-class": "org.cofax.cms.CofaxToolsServlet",
        "init-param": {
          "templatePath": "toolstemplates/",
          "log": 1,
          "adminGroupID": 4,
          "betaServer": true
        }
      }
    ],
    "servlet-mapping": {
      "cofaxCDS": "/",
      "cofaxEmail": "/cofaxutil/aemail/*",
      "cofaxAdmin": "/admin/*",
      "fileServlet": "/static/*",
      "cofaxTools": "/tools/*"
    },
    "taglib": {
      "taglib-uri": "cofax.tld",
      "taglib-location": "/WEB-INF/tlds/cofax.tld"
    }
  }
}

If JSON is not formatted well, jq . will output parsing error:

$ myjson=''\
> '['\
> '    ['\
> '        "first"'\
> '    ],'\
> '    ['\
> '        "second"'\
> '    ],'\
> ']'
$ echo $myjson | jq .
parse error: Expected another array element at line 1, column 29 

The error here is a comma after the last array element. After removing it the error goes away.
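jq's exit status reflects the parse result, so the same validity check works in scripts; a small sketch:

```shell
# jq exits 0 on valid JSON and non-zero on a parse error
echo '{"valid": true}' | jq . > /dev/null 2>&1 && echo "valid JSON"

echo '[1, 2,]' | jq . > /dev/null 2>&1 || echo "not valid JSON (trailing comma)"
```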

Getting Values


To get the value of the key which has a dash (-) in name, we need to wrap the key name in double quotes:

$ echo $myjson | jq '."web-app"'
{
  "servlet": [
    {
      "servlet-name": "cofaxCDS",
...

To get values at the path of keys, key names should be separated with dots:

$ echo $myjson | jq '."web-app".taglib'
{
  "taglib-uri": "cofax.tld",
  "taglib-location": "/WEB-INF/tlds/cofax.tld"
}

To get elements of an array, use [] behind the array key name:

$ echo $myjson | jq '."web-app".servlet[]'
{
  "servlet-name": "cofaxCDS",
  "servlet-class": "org.cofax.cds.CDSServlet",
  "init-param": {
    "configGlossary:installationAt": "Philadelphia, PA",
    "configGlossary:adminEmail": "",
    "dataStoreConnUsageLimit": 100,
    "dataStoreLogLevel": "debug",
    "maxUrlLength": 500
  }
}
{
  "servlet-name": "cofaxEmail",
  "servlet-class": "org.cofax.cds.EmailServlet",
  "init-param": {
    "mailHost": "mail1",
    "mailHostOverride": "mail2"
  }
}
{
  "servlet-name": "cofaxAdmin",
  "servlet-class": "org.cofax.cds.AdminServlet"
}
{
  "servlet-name": "fileServlet",
  "servlet-class": "org.cofax.cds.FileServlet"
}
{
  "servlet-name": "cofaxTools",
  "servlet-class": "org.cofax.cms.CofaxToolsServlet",
  "init-param": {
    "templatePath": "toolstemplates/",
    "log": 1,
    "adminGroupID": 4,
    "betaServer": true
  }
}

Let's create a JSON document with an array at its root:
$ myjson=''\
'['\
'    ['\
'        "first"'\
'    ],'\
'    ['\
'        "second"'\
'    ]'\
']'

$ echo $myjson | jq .
[
  [
    "first"
  ],
  [
    "second"
  ]
]

To get the elements of the root array:

$ echo $myjson | jq .[]
[
  "first"
]
[
  "second"
]

To get the elements at index 0 we can pipe jq filters: .[] yields all the elements of the root array (and these elements are arrays themselves), and then .[0] yields the first element of each of these sub-arrays:

$ echo $myjson | jq ".[] | .[0]"
"first"
"second"
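The filters above compose with jq's -r option (raw string output), which is handy when feeding results to other shell commands; a sketch using a trimmed-down version of the earlier web-app JSON:

```shell
# Extract every servlet-name as a raw (unquoted) string, one per line
myjson='{"web-app": {"servlet": [{"servlet-name": "cofaxCDS"}, {"servlet-name": "cofaxEmail"}]}}'
echo "$myjson" | jq -r '."web-app".servlet[]."servlet-name"'
# Output:
# cofaxCDS
# cofaxEmail
```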