
Friday 4 November 2022

Linux User Management


 

 

To run a command with superuser (root) privileges, prefix it with sudo:

$ sudo <command>


The sudo policy (which users and groups may use sudo, and how) is defined in the sudoers file: /etc/sudoers on most Linux distributions (a sudo built from source may use /usr/local/etc/sudoers). Edit it with visudo.
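To list which commands the current user is allowed to run via sudo:

$ sudo -l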

To start a login shell as root (simulating a root login) use:

user@computer:~$ sudo -i
root@computer:~# whoami
root
root@computer:~# exit

logout
user@computer:~$
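To run a single command as another user instead of opening a root shell (here the postgres system user, assuming such an account exists):

$ sudo -u postgres whoami
postgres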

To find information about a user, use id:

$ id --help
Usage: id [OPTION]... [USER]
Print user and group information for the specified USER,
or (when USER omitted) for the current user.

  -a             ignore, for compatibility with other versions
  -Z, --context  print only the security context of the process
  -g, --group    print only the effective group ID
  -G, --groups   print all group IDs
  -n, --name     print a name instead of a number, for -ugG
  -r, --real     print the real ID instead of the effective ID, with -ugG
  -u, --user     print only the effective user ID
  -z, --zero     delimit entries with NUL characters, not whitespace;
                   not permitted in default format
      --help     display this help and exit
      --version  output version information and exit

Without any OPTION, print some useful set of identified information.

GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Full documentation at: <http://www.gnu.org/software/coreutils/id>
or available locally via: info '(coreutils) id invocation'

$ id

uid=1000(test_user) gid=1000(test_user) groups=1000(test_user),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),116(lpadmin),126(sambashare),999(docker)

$ id -u
1000

$ id -g
1000

$ id -un
test_user

$ id -gn
test_user

$ id -G
1000 4 24 27 30 46 116 126 999

$ id -Gn
test_user adm cdrom sudo dip plugdev lpadmin sambashare docker


$ echo user:group \(id\) = $(id -u):$(id -g)
user:group (id) = 1000:1000

$ echo user:group \(name\) = $(id -un):$(id -gn)
user:group (name) = test_user:test_user
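id also accepts a user name as an argument, so we can query another account; root always has UID 0, so on a typical system this prints:

$ id root
uid=0(root) gid=0(root) groups=0(root)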


To get the current user:

$ whoami
test_user

To list all users:
 
$ cat /etc/passwd
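To print only the user names (the first colon-separated field of /etc/passwd):

$ cut -d: -f1 /etc/passwd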
 
To list the groups the current user belongs to:

$ groups
test_user adm cdrom sudo dip plugdev lpadmin sambashare docker
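All groups defined on the system are listed in /etc/group; to print just their names:

$ cut -d: -f1 /etc/group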

To find out which groups a given user belongs to:

$ groups test_user
test_user : test_user adm cdrom sudo dip plugdev lpadmin sambashare docker

.profile file 


There is one global profile file (executed when anyone logs in):

/etc/profile

There are three user-specific bash startup files (read when the specific user logs in or starts an interactive shell):

~/.profile
~/.bash_profile
~/.bashrc

If ~/.profile doesn't exist, just create it.

This is the comment at the beginning of ~/.profile:

# ~/.profile: executed by the command interpreter for login shells.
# This file is not read by bash(1), if ~/.bash_profile or ~/.bash_login exists.
# see /usr/share/doc/bash/examples/startup-files for examples.
# the files are located in the bash-doc package.
# the default umask is set in /etc/profile; for setting the umask
# for ssh logins, install and configure the libpam-umask package.
#umask 022
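A typical use of ~/.profile is extending PATH. This is the standard snippet found in Ubuntu's default ~/.profile (a sketch; adjust the directory as needed):

# set PATH so it includes user's private bin directory if it exists
if [ -d "$HOME/bin" ] ; then
    PATH="$HOME/bin:$PATH"
fi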


How to log out the current user from the terminal?


$ gnome-session-quit

or

$ gnome-session-quit --no-prompt

to suppress the logout confirmation dialog.

---

Thursday 3 November 2022

AWS EC2 Instance Root Device


 

We can launch a new EC2 instance with e.g.:

$ aws ec2 run-instances \
--image-id ami-0179dbab9bb7ab8f7 \
--count 1 \
--instance-type t3.medium \
--key-name my-key-pair \
--security-group-ids sg-0eb70243855088ab1


Note here that we didn't need to specify storage explicitly: the default settings defined by the AMI's storage configuration are applied. This is the same as leaving the default values in the Storage section when launching a new EC2 instance in the AWS Console:



 
 
So, what is the storage (a virtual hard disk) our EC2 instance boots from and where the OS and applications are stored? It is called the root device.


From O’Reilly's AWS Certified Developer - Associate Guide by Vipul Tankariya, Bhavin Parmar:
Root device types
 
While choosing an AMI, it is essential to understand the root device type associated with the AMI. A bootable block device of the EC2 instance is called a root device. As EC2 instances are created from an AMI, it is very important to observe the root device type at the AMI. An AMI can have either of two root device types:
  • Amazon EBS-backed AMI (uses permanent block storage to store data)
  • Instance store-backed AMI (which uses ephemeral block storage to store data) 
While creating an EC2 instance using a web console, we can see whether an AMI is EBS- or instance-backed.
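The root device type can also be checked from the CLI; a sketch, using the image ID from the run-instances example above (RootDeviceType is either ebs or instance-store):

$ aws ec2 describe-images \
--image-ids ami-0179dbab9bb7ab8f7 \
--query 'Images[0].RootDeviceType' \
--output text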

 
AWS EC2 Root Device Volume
 
When an EC2 instance is launched from an AMI, the root device volume contains the image used to boot the instance—mainly the operating system, all the configured services, and applications. This volume can be backed by either EBS or Instance Store (both are explained in the following sections) and can be configured when the AMI is created.
 
The Vector AMI is backed by EBS. This means that when you launch an EC2 instance from the Vector AMI, you will have at least one EBS volume attached to the instance, which will be the root volume. It will contain the operating system, a configured and ready-to-use Vector installation, sample data, and the sample database.

The root device is the virtual device that houses the partition where your filesystem is stored -- ephemeral devices have it running on the same physical host as the server, and EBS devices have it mounted using iSCSI.

 

Amazon EC2 instance root device volume - Amazon Elastic Compute Cloud

Amazon EBS volumes - Amazon Elastic Compute Cloud 

AWS EBS Ultimate Guide & 5 Bonus Features to Try 

AWS — Difference between EBS and Instance Store | by Ashish Patel | Awesome Cloud | Medium 

What Is EBS?. EBS is a popular cloud-based storage… | by Eddie Segal | Medium 

Create an Amazon EBS volume - Amazon Elastic Compute Cloud 

Tutorial: Get started with Amazon EC2 Linux instances - Amazon Elastic Compute Cloud 

 

It is important to know what happens to EBS volumes when we stop and when we terminate an EC2 instance.

From What is the difference between terminating and stopping an EC2 instance?

Amazon supports the ability to terminate or stop a running instance. The ability to stop a running instance is only supported by instances that were launched with an EBS-based AMI. There are distinct differences between stopping and terminating an instance.

 

Terminate Instance 

When you terminate an EC2 instance, the instance will be shutdown and the virtual machine that was provisioned for you will be permanently taken away and you will no longer be charged for instance usage. Any data that was stored locally on the instance will be lost. Any attached EBS volumes will be detached and deleted.

 

Stop Instance 

When you stop an EC2 instance, the instance will be shutdown and the virtual machine that was provisioned for you will be permanently taken away and you will no longer be charged for instance usage. The key difference between stopping and terminating an instance is that the attached bootable EBS volume will not be deleted. The data on your EBS volume will remain after stopping while all information on the local (ephemeral) hard drive will be lost as usual. The volume will continue to persist in its availability zone. Standard charges for EBS volumes will apply. Therefore, you should only stop an instance if you plan to start it again within a reasonable timeframe. Otherwise, you might want to terminate an instance instead of stopping it for cost saving purposes.

The ability to stop an instance is only supported on instances that were launched using an EBS-based AMI where the root device data is stored on an attached EBS volume as an EBS boot partition instead of being stored on the local instance itself. As a result, one of the key advantages of starting a stopped instance is that it should theoretically have a faster boot time. When you start a stopped instance the EBS volume is simply attached to the newly provisioned instance. Although, the AWS-id of the new virtual machine will be the same, it will have new IP Addresses, DNS Names, etc. You shouldn't think of starting a stopped instance as simply restarting the same virtual machine that you just stopped as it will most likely be a completely different virtual machine that will be provisioned to you.


If we try to terminate an instance via the AWS Console we'll see a warning which states:
 
On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated. Storage on any local drives will be lost.
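Whether the root volume survives termination is controlled by the DeleteOnTermination flag in the instance's block device mapping. A sketch of checking it (the instance ID is hypothetical); it prints true or false:

$ aws ec2 describe-instances \
--instance-ids i-0123456789abcdef0 \
--query 'Reservations[0].Instances[0].BlockDeviceMappings[0].Ebs.DeleteOnTermination'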

---

Wednesday 2 November 2022

Exploring Docker commands: docker container



Let's learn which commands can be used when the object is container in the following Docker CLI syntax:

docker <object> <command> <options>


docker container

$ docker container 

Usage:  docker container COMMAND

Manage containers

Commands:
  attach      Attach local standard input, output, and error streams to a running container
  commit      Create a new image from a container's changes
  cp          Copy files/folders between a container and the local filesystem
  create      Create a new container
  diff        Inspect changes to files or directories on a container's filesystem
  exec        Run a command in a running container
  export      Export a container's filesystem as a tar archive
  inspect     Display detailed information on one or more containers
  kill        Kill one or more running containers
  logs        Fetch the logs of a container
  ls          List containers
  pause       Pause all processes within one or more containers
  port        List port mappings or a specific mapping for the container
  prune       Remove all stopped containers
  rename      Rename a container
  restart     Restart one or more containers
  rm          Remove one or more containers
  run         Run a command in a new container
  start       Start one or more stopped containers
  stats       Display a live stream of container(s) resource usage statistics
  stop        Stop one or more running containers
  top         Display the running processes of a container
  unpause     Unpause all processes within one or more containers
  update      Update configuration of one or more containers
  wait        Block until one or more containers stop, then print their exit codes


Run 'docker container COMMAND --help' for more information on a command.

---

If we try to start a container that needs to bind a port on localhost which another running container has already bound, we might get the following error:

$ docker-compose up --build

Starting my_app_1 ... 

ERROR: for my-app_db_1  Cannot start service db: driver failed programming external connectivity on endpoint my-app_db_1 (a456b10867b734493c831aa99f227147110f61233652c4984415c12ecdf9a9b3): Bind for 0.0.0.0:5432 failed: port is Starting my-app_my-service_1 ... done

ERROR: for db  Cannot start service db: driver failed programming external connectivity on endpoint my-app_db_1 (a456b10867b734493c831aa99f227147110f61233652c4984415c12ecdf9a9b3): Bind for 0.0.0.0:5432 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.

To resolve this, we first need to check which containers are running and then stop the one holding the port:

$ docker container ls 
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
499c43a9c28b        postgres:latest     "docker-entrypoint.s…"   3 days ago          Up 3 days           0.0.0.0:5432->5432/tcp   my-app-2_db_1

$ docker container stop my-app-2_db_1
my-app-2_db_1

Let's verify that no containers are running:

$ docker container ls
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
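Alternatively, on reasonably recent Docker versions, the container holding a given port can be found directly with the publish filter:

$ docker container ls --filter "publish=5432"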

---

docker container inspect


A useful command for debugging issues. It shows information about various container properties, e.g. networking.

% docker container inspect --help

Usage:  docker container inspect [OPTIONS] CONTAINER [CONTAINER...]

Display detailed information on one or more containers

Options:
  -f, --format string   Format the output using the given Go template
  -s, --size            Display total file sizes
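For example, to print just a single field such as the container's state (container name taken from the earlier example):

$ docker container inspect --format '{{.State.Status}}' my-app-2_db_1
running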

docker container kill


Stops the container (non-gracefully). Killing the container does not give the applications running inside it a chance to shut down gracefully. This command sends the SIGKILL signal to the main process running inside the container. This signal terminates the process immediately and the process cannot ignore it. Since that process is the container's PID 1, all other processes in the container are terminated with it.

% docker container kill --help

Usage:  docker container kill [OPTIONS] CONTAINER [CONTAINER...]

Kill one or more running containers

Options:
  -s, --signal string   Signal to send to the container (default "KILL")


docker container ls 


$ docker container ls --help

Usage:  docker container ls [OPTIONS]

List containers

Aliases:
  ls, ps, list

Options:
  -a, --all             Show all containers (default shows just running)
  -f, --filter filter   Filter output based on conditions provided
      --format string   Pretty-print containers using a Go template
  -n, --last int        Show n last created containers (includes all states) (default -1)
  -l, --latest          Show the latest created container (includes all states)
      --no-trunc        Don't truncate output
  -q, --quiet           Only display numeric IDs
  -s, --size            Display total file sizes



To list all containers:

$ docker container ls -a

To list only numeric IDs of all containers:

$ docker container ls -a -q

or

$ docker container ls -aq
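The --filter and --format options are handy for scripting; for example, to list only exited containers, or to print just names and statuses:

$ docker container ls -a --filter "status=exited"
$ docker container ls -a --format '{{.Names}}\t{{.Status}}'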


docker container pause


This command transitions the specified container(s) into the Paused state. This feature is used to lower a container's utilization of the host's resources (CPU, and therefore battery) while the applications running in the container are not currently needed.

$ docker container pause --help

Usage:  docker container pause CONTAINER [CONTAINER...]

Pause all processes within one or more containers


CONTAINER is a full container ID or just its first few characters, as long as they uniquely identify it.

Example:

$ docker container pause 5a7

To resume the processes and transition the container back to the Up state:

$ docker container unpause 5a7


docker container rm


$ docker container rm --help

Usage:  docker container rm [OPTIONS] CONTAINER [CONTAINER...]

Remove one or more containers

Options:
  -f, --force     Force the removal of a running container (uses SIGKILL)
  -l, --link      Remove the specified link
  -v, --volumes   Remove the volumes associated with the container


To remove all containers:

$ docker container rm $(docker container ls -aq)
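If only the stopped containers should be removed, prune is a shorter alternative (it asks for confirmation):

$ docker container prune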


To display detailed information on one or more containers:

$ docker container inspect <container id or name>

If a container is running (its status is Up), it needs to be stopped before we can remove it (or removed forcefully with -f).


docker container run

Exploring Docker commands: docker [container] run | My Public Notepad


docker container stop 


Stops the container (gracefully). Stopping the container in this way gives the applications running inside it a chance to terminate gracefully. This can be verified by observing docker logs for the given container while it is being stopped. The Unix signal sent to the main process running inside the container is SIGTERM. The process can handle this signal or even ignore it. SIGTERM does not kill child processes, so the main process can choose to stop them gracefully.

% docker container stop --help

Usage:  docker container stop [OPTIONS] CONTAINER [CONTAINER...]

Stop one or more running containers

Options:
  -t, --time int   Seconds to wait for stop before killing it (default 10)

When the terminal is attached to a container's output we can press CTRL+C to shut down the container.
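The -t (--time) option controls how long Docker waits for a graceful shutdown before falling back to SIGKILL; for example, to allow up to 30 seconds (container name hypothetical):

$ docker container stop -t 30 my_container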


Exploring Docker commands: docker images

 


docker images


To see which images are present locally, use docker images:

docker images --help

Usage: docker images [OPTIONS] [REPOSITORY[:TAG]]

List images

Options:
  -a, --all             Show all images (default hides intermediate images)
      --digests         Show digests
  -f, --filter filter   Filter output based on conditions provided
      --format string   Pretty-print images using a Go template
      --no-trunc        Don't truncate output
  -q, --quiet           Only show numeric IDs


Example:

docker images
REPOSITORY           TAG                  IMAGE ID            CREATED             SIZE
my_app               latest               477db2641216        12 minutes ago      193MB
<none>               <none>               7f82c84cc05a        12 minutes ago      193MB
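Untagged images like the <none>:<none> entry above are dangling images; they can be listed with a filter and cleaned up with prune:

$ docker images -f "dangling=true"
$ docker image prune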

Exploring Docker commands: docker logs


 


docker logs

$ docker logs --help

Usage: docker logs [OPTIONS] CONTAINER

Fetch the logs of a container

Options:
      --details        Show extra details provided to logs
  -f, --follow         Follow log output
      --since string   Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)
      --tail string    Number of lines to show from the end of the logs (default "all")
  -t, --timestamps     Show timestamps
      --until string   Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m for 42 minutes)

We can specify either the container name or its ID (or just the first several characters of the ID, as long as they are enough to distinguish it from other IDs).

Example:

$ docker logs my_app >& my_app.log

To bind the current terminal and show "live" log output from the currently running container, we need to use the -f (--follow) option. Press CTRL+C to stop following the logs.
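For example, to follow only the last 100 lines of the log (container name from the example above):

$ docker logs -f --tail 100 my_app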

Exploring Docker commands: docker [container] run

 


$ docker run 

is the shorter version of:

$ docker container run

docker run


docker run  =  docker pull + docker create  +  docker start

docker run performs the following:
  • pulls an image from the remote repository if image is not found locally. A container is an instance of an image and docker pull only downloads the image, it doesn't create a container. 
  • (each time) creates a (new) container from the locally available image
  • starts the container

If you run

$ docker run myImage

...N times, docker ps -a will show N containers based on that image.

Why “Docker run” creates new container every time?

Containers do not modify images. Any changes made to a container you've started via docker run won't affect another container run from the same image.
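A quick way to see this (a sketch, assuming the public hello-world image); after running it twice, docker ps -a lists two separate containers created from the same image:

$ docker run hello-world
$ docker run hello-world
$ docker ps -a --filter "ancestor=hello-world" --format '{{.ID}}  {{.Image}}  {{.Status}}'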

The container process that runs is isolated in that it has its own:
  • file system
  • networking
  • isolated process tree separate from the host

$ docker run --help
($ docker container run --help)

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]


Run a command in a new container


Options:

      --add-host list                  Add a custom host-to-IP mapping (host:ip)
  -a, --attach list                    Attach to STDIN, STDOUT or STDERR
      --blkio-weight uint16            Block IO (relative weight), between 10 and 1000, or 0 to disable (default 0)
      --blkio-weight-device list       Block IO weight (relative device weight) (default [])
      --cap-add list                   Add Linux capabilities
      --cap-drop list                  Drop Linux capabilities
      --cgroup-parent string           Optional parent cgroup for the container
      --cidfile string                 Write the container ID to the file
      --cpu-count int                  CPU count (Windows only)
      --cpu-percent int                CPU percent (Windows only)
      --cpu-period int                 Limit CPU CFS (Completely Fair Scheduler) period
      --cpu-quota int                  Limit CPU CFS (Completely Fair Scheduler) quota
      --cpu-rt-period int              Limit CPU real-time period in microseconds
      --cpu-rt-runtime int             Limit CPU real-time runtime in microseconds
  -c, --cpu-shares int                 CPU shares (relative weight)
      --cpus decimal                   Number of CPUs
      --cpuset-cpus string             CPUs in which to allow execution (0-3, 0,1)
      --cpuset-mems string             MEMs in which to allow execution (0-3, 0,1)
  
      -d, --detach                     Run container in background and print container ID

      --detach-keys string             Override the key sequence for detaching a container

      --device list                    Add a host device to the container
      --device-cgroup-rule list        Add a rule to the cgroup allowed devices list
      --device-read-bps list           Limit read rate (bytes per second) from a device (default [])
      --device-read-iops list          Limit read rate (IO per second) from a device (default [])
      --device-write-bps list          Limit write rate (bytes per second) to a device (default [])
      --device-write-iops list         Limit write rate (IO per second) to a device (default [])
      --disable-content-trust          Skip image verification (default true)
      --dns list                       Set custom DNS servers
      --dns-option list                Set DNS options
      --dns-search list                Set custom DNS search domains
      --entrypoint string              Overwrite the default ENTRYPOINT of the image

      -e, --env list                   Set environment variables


      --env-file list                  Read in a file of environment variables

      --expose list                    Expose a port or a range of ports
      --group-add list                 Add additional groups to join
      --health-cmd string              Command to run to check health
      --health-interval duration       Time between running the check (ms|s|m|h) (default 0s)
      --health-retries int             Consecutive failures needed to report unhealthy
      --health-start-period duration   Start period for the container to initialize before starting health-retries countdown (ms|s|m|h) (default 0s)
      --health-timeout duration        Maximum time to allow one check to run (ms|s|m|h) (default 0s)
      --help                           Print usage
      -h, --hostname string            Container host name
      --init                           Run an init inside the container that forwards signals and reaps processes

  -i, --interactive                Keep STDIN open even if not attached

      --io-maxbandwidth bytes          Maximum IO bandwidth limit for the system drive (Windows only)

      --io-maxiops uint                Maximum IOps limit for the system drive (Windows only)
      --ip string                      IPv4 address (e.g., 172.30.100.104)
      --ip6 string                     IPv6 address (e.g., 2001:db8::33)
      --ipc string                     IPC mode to use
      --isolation string               Container isolation technology
      --kernel-memory bytes            Kernel memory limit
  -l, --label list                     Set meta data on a container
      --label-file list                Read in a line delimited file of labels
      --link list                      Add link to another container
      --link-local-ip list             Container IPv4/IPv6 link-local addresses
      --log-driver string              Logging driver for the container
      --log-opt list                   Log driver options
      --mac-address string             Container MAC address (e.g., 92:d0:c6:0a:29:33)
  -m, --memory bytes                   Memory limit
      --memory-reservation bytes       Memory soft limit
      --memory-swap bytes              Swap limit equal to memory plus swap: '-1' to enable unlimited swap
      --memory-swappiness int          Tune container memory swappiness (0 to 100) (default -1)
      
      --mount mount                    Attach a filesystem mount to the container

      --name string                    Assign a name to the container

      
      --network string                 Connect a container to a network (default "default")

      --network-alias list             Add network-scoped alias for the container

      --no-healthcheck                 Disable any container-specified HEALTHCHECK
      --oom-kill-disable               Disable OOM Killer
      --oom-score-adj int              Tune host's OOM preferences (-1000 to 1000)
      --pid string                     PID namespace to use
      --pids-limit int                 Tune container pids limit (set -1 for unlimited)
      --platform string                Set platform if server is multi-platform capable
      --privileged                     Give extended privileges to this container

      -p, --publish list               Publish a container's port(s) to the host


  -P, --publish-all                    Publish all exposed ports to random ports

      --read-only                      Mount the container's root filesystem as read only
      --restart string                 Restart policy to apply when a container exits (default "no")

      --rm                             Automatically remove the container when it exits; this means it will not be listed in "docker ps -a" output after it terminates.

      --runtime string                 Runtime to use for this container

      --security-opt list              Security Options
      --shm-size bytes                 Size of /dev/shm
      --sig-proxy                      Proxy received signals to the process (default true)
      --stop-signal string             Signal to stop a container (default "SIGTERM")
      --stop-timeout int               Timeout (in seconds) to stop a container
      --storage-opt list               Storage driver options for the container
      --sysctl map                     Sysctl options (default map[])
      --tmpfs list                     Mount a tmpfs directory

  -t, --tty                        Allocate a pseudo-TTY
                                   (If you wonder what a TTY is, read this)

      --ulimit ulimit                  Ulimit options (default [])


  -u, --user string                Username or UID (format: <name|uid>[:<group|gid>])

      --userns string                  User namespace to use

      --uts string                     UTS namespace to use

  -v, --volume list                Bind mount a volume

      --volume-driver string           Optional volume driver for the container

      --volumes-from list              Mount volumes from the specified container(s)
  -w, --workdir string                 Working directory inside the container


For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the container process. -i -t is often written -it. [source]

Confused about Docker -t option to Allocate a pseudo-TTY

It is possible to pipe (continuously) data from a command running in the host's terminal into an app running in a Docker container, e.g.:

$ ping 8.8.8.8 | docker run -i --rm -p 8000:8000 --name http_server http_server


Example: 

This example shows how easy it is to fetch, spin up and use a specific version of Golang (v1.11) with Docker:

docker run --rm -it golang:1.11
Unable to find image 'golang:1.11' locally
1.11: Pulling from library/golang
e79bb959ec00: Already exists 
... 
963c818ebafc: Already exists 
66831bab8cde: Pull complete 
0ac2e04178ce: Pull complete 
Digest: sha256:c096aaab963e25f78d226e841ad258ab3cba8c6a4c345c24166a5755686734f1
Status: Downloaded newer image for golang:1.11
root@bee14b1ffe61:/go# go version
go version go1.11.6 linux/amd64
root@bee14b1ffe61:/go# exit
exit

Example:

docker run -p 443:443 \
        -v "/private/var/lib/pgadmin:/var/lib/pgadmin" \
        -v "/path/to/certificate.cert:/certs/server.cert" \
        -v "/path/to/certificate.key:/certs/server.key" \
        -v "/tmp/servers.json:/servers.json" \
        -e "PGADMIN_DEFAULT_EMAIL=user@domain.com" \
        -e "PGADMIN_DEFAULT_PASSWORD=SuperSecret" \
        -e "PGADMIN_ENABLE_TLS=True" \
        -d dpage/pgadmin4


Pulling the image from Docker Hub


docker run checks if the image exists locally and, if not, performs docker pull to download it from Docker Hub. [1] Once the image is present, docker run will not attempt to download a newer version of it next time [docker run doesn't pull down latest image if the image exists locally · Issue #13331 · moby/moby]. We need to pull it manually:

$ docker pull my_image && docker run my_image

Container identification


Container can be identified by:

  • UUID
    • assigned by the daemon
    • can be:
      • long
      • short
  • name
    • set by the developer via --name argument OR
    • if --name is not used, Docker automatically assigns an arbitrary but human-readable name

Container Modes (-d) 


Docker can run processes (containers) in one of these two modes:
  • foreground
    • default mode
    • console/terminal is attached to process' stdin, stdout, stderr; we can see all the terminal output, log messages etc... created by the applications running inside the container
    • only one container can run in the current terminal 
  • background (detached)
    • -d has to be passed to docker run
    • in a single terminal (tab) we can run as many containers as we wish
    • container exits when
      • the root process used to run the container exits OR
      • when daemon exits (if this happens first)


Exposing Ports


By default, all Docker containers run in the default bridge network. We can't access a container from outside unless its port(s) are published to the host.

If some process in the container is listening on some port and we want to allow connections from outside the container, we can publish these ports via the -p option, which publishes a single port or a range of ports to the host. The port number the service listens on inside the container does not need to be the same as the port exposed on the host (where external clients connect). These mappings are specified as in these examples:

-p <host_port(s)>:<container_port(s)>/protocol

-p 1234:1234 // single port, all protocols
-p 1234:1234/tcp // allow only TCP protocol
-p 1234-1240:1234-1240 // range
-p 1234-1240:1234-1240/tcp // range, TCP protocol only
-p 127.0.0.1:80:8080/tcp // container's TCP port 8080 is mapped onto the host's port 80, bound to 127.0.0.1 only

By default, the host port is bound to the address 0.0.0.0 (all host interfaces).
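For example, publishing a web server's container port 80 on the host's port 8080 (a sketch, assuming the public nginx image):

$ docker run -d --rm -p 8080:80 --name web nginx
$ curl http://localhost:8080
$ docker container stop web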

Networking

Use case:
  • DB is running in one container and DB client is running in another container
  • DB client is using host name (not IP address) of the first container
Setup:
  • Both containers have to be on the same network. This can be achieved in two ways:
    • DB container does not specify any network but DB client container is using --network=container:<DB_container_name>
    • a custom network is created and both containers are using --network=<network_name>
  • DB container has to have a network alias set so the DB client can use it as the DB's host name
Creating custom network:

$ docker network create \
--subnet=172.18.0.0/24 \
--gateway 172.18.0.1 \
custom_network

Starting DB container:

$ docker run \
--name postgres \
--network custom_network \
--publish 5432:5432 \
--hostname db \ <-- this is irrelevant for DNS resolution
--network-alias db \ <-- this is relevant for DNS resolution
--ip 172.18.0.2 \
...
postgres

Starting DB client container:

$ docker run \
-e PGHOST=db \
--name db_client \
--network custom_network \
...
dockerhub.com/mydbclient

From the docs: --hostname option specifies the hostname a container uses for itself. Defaults to the container’s ID if not specified.

To check a container's network settings:

$ docker inspect <container_name_or_id>

or

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $INSTANCE_ID

The container's IPv4 address can be found in NetworkSettings.Networks.<network_name>.IPAddress. For backward compatibility, and only for the default bridge network, the IPv4 address is also listed in NetworkSettings.IPAddress.
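We can also inspect the network itself to see which containers are attached to it and which IP addresses they were given (using the custom_network created above):

$ docker network inspect custom_network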


Can't resolve set hostname from another docker container in same network

If a (Node.js) app for some reason can't resolve a hostname, its networking stack might report:
Error: getaddrinfo EAI_AGAIN

Error: getaddrinfo EAI_AGAIN
My Docker container does not have IP address. Why?
docker inspect <container-id> returns blank IP address
Troubleshooting Container Networking


Useful for testing: blocking a particular domain for a Docker container

Use --add-host. Example:

$ docker run \
-e APP_ENV=dev \
-e DB_HOST=1.2.3.4 \
-e DB_PORT=5432 \
-e DB_NAME=my_db_dev \
-e DB_USER=postgres \
-e DB_PASSWORD=postgres \
-e OUTPUT_DIR=./data-vol \
--rm -it \
--mount type=bind,src="$(pwd)/data-vol",target=/data-vol \
--network=db_default \
--add-host dev.example.com:0.0.0.0 \
--name my-app-container \
my-app-image --param0=arg0

Adding something like

RUN echo "0.0.0.0 dev.example.com" >> /etc/hosts && cat /etc/hosts

to the Dockerfile won't work, because /etc/hosts is managed (re-created) by Docker when the container starts.


How to make Docker use a proxy server?

One of the options is to use environment variables:

docker run \
...
-e HTTP_PROXY=10.21.32.40:8080 \
-e HTTPS_PROXY=10.21.32.40:8080 \
...

Configure Docker to use a proxy server | Docker Documentation

Running Docker Containers as a Specified User


Processes In Containers Should Not Run As Root
Running a Docker container as a non-root user
How to use Docker without sudo on Ubuntu

Add the current user to the docker group (whose members can talk to the Docker daemon, which runs as root):

$ sudo gpasswd -a $USER docker
# or sudo usermod -aG docker $USER
$ newgrp docker

Running Docker Containers as Current Host User
What's the default user for docker exec?
Permission denied on accessing host directory in docker
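A common pattern is to run the container with the host user's UID and GID so that files created on a bind mount are owned by the host user (a sketch, assuming the public ubuntu image):

$ docker run --rm -u "$(id -u):$(id -g)" \
--mount type=bind,src="$(pwd)",target=/work \
-w /work ubuntu touch created-by-container.txt
$ ls -l created-by-container.txt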



Environment variables


Use the -e option followed by NAME=VALUE, as in this example:

-e POSTGRES_PASSWORD=mysecretpassword
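We can verify that such variables are visible inside the container (a sketch, assuming the public alpine image):

$ docker run --rm -e VAR1=foo -e VAR2=bar alpine env | grep VAR
VAR1=foo
VAR2=bar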


Using volumes

Use volumes

From Volume mapping making directories out of files:
When the -v <host-path>:<container-path> option is used for a bind-mount, docker will look for <host-path> on the host where the daemon runs; if that path does not exist, it will create a directory at that location (as it doesn't know whether you wanted to mount a file or a directory; a directory is the default). (Windows-specific: If you don't have a shared drive set up, docker won't be able to access that path from your host, thus won't find the path inside the VM (thus creates a directory)).
To prevent docker from creating the path if it cannot find it (and produce an error instead), use the --mount option instead of -v. The --mount flag uses a different syntax, which can be found on this page; https://docs.docker.com/engine/admin/volumes/bind-mounts/#start-a-container-with-a-bind-mount


Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.

New users should try --mount syntax which is simpler than --volume syntax.

--mount 'type=volume,src=<VOLUME-NAME>,dst=<CONTAINER-PATH>,volume-driver=local,volume-opt=type=nfs...'


type - type of the mount, which can be bind, volume, or tmpfs
src - For named volumes, this is the name of the volume. For bind mounts, it is the path to the file or directory on the host (an absolute path). May be specified as source or src.
dst - the path where the file or directory is mounted in the container. May be specified as destination, dst, or target. It has to be an absolute path, otherwise an error similar to this one is issued:

$ docker run --rm -it --mount type=volume,src=./data-vol-src,target=./data-vol-target --name go-demo go-demo
docker: Error response from daemon: invalid mount config for type "volume": invalid mount path: './data-vol-target' mount path must be absolute.
See 'docker run --help'.


Example:

$ docker run  --mount type=volume,source=myvol,target=/app ...


Docker Tip #33: Should You Use the Volume or Mount Flag?

BK: I could not make docker run --mount work when using both type=volume and source=`pwd`/myvol (or $(pwd)/myvol, ${pwd}/myvol, "$(pwd)/myvol", "$(pwd)"/myvol... etc.).

Example:

$ docker run --rm -it --mount type=volume,src="$(pwd)/data-vol",target=/go/src/github.com/BojanKomazec/go-demo/data-vol --name go-demo go-demo
docker: Error response from daemon: create /home/bojan/dev/go/src/github.com/BojanKomazec/go-demo/data-vol: "/home/bojan/dev/go/src/github.com/BojanKomazec/go-demo/data-vol" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.

This is very likely a consequence of Docker's way of handling volumes: when -v (or --mount type=volume) is specified, Docker will create a new volume under the Docker host's /var/lib/docker/volumes/new_volume_name/_data and mount it into the container. We can't use an arbitrary absolute path (one that includes $(pwd)) as the volume source, because a named volume has to live in the directory predefined by Docker.

Nevertheless, if we switch to bind-mounting an arbitrary host directory into a container directory, then $(pwd) works well.

Example:

$ docker run --rm -it --mount type=bind,src="$(pwd)/data-vol",target=/go/src/github.com/BojanKomazec/go-demo/data-vol --name go-demo go-demo

Our application can pick up the target directory via an environment variable: we need to add e.g. OUTPUT_DIR=./data-vol to the .env file or pass it via the docker run -e argument.

If some file is created and saved to disk by an application running inside a Docker container (into a volume), that file will actually be saved on the host's disk, in the volume's directory, e.g.:

/var/lib/docker/volumes/f84dd5236429fc84c31104335394bd08f5292316c93fb04552e68349877ba0ca/_data/storage/test_example.com/demo_table_from_db.csv
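The host directory backing a named volume can be found with docker volume inspect (volume name hypothetical):

$ docker volume inspect --format '{{.Mountpoint}}' myvol
/var/lib/docker/volumes/myvol/_data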

To access (e.g. view) this file, attach to the container's terminal:

$ docker exec -it container_name /bin/bash

or

$ docker exec -it container_name /bin/sh

...if bash is not available in the image.

Example:

$ docker exec -it pgadmin /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
$ docker exec -it pgadmin /bin/sh
/pgadmin4 # ls -la
...

Troubleshooting #1


If you use docker run and then see that the container is not running (e.g. the output of docker container ls -a shows its status as Exited (1)), check the container's log:

$ docker inspect <container_name>

In the output JSON look for the LogPath attribute:

"LogPath":"/var/lib/docker/containers/62fdb19e88ded813969915891f6a2dec6a83cb3a28c1054fbdd1e001f033ccf6/62fdb19e88ded813969915891f6a2dec6a83cb3a28c1054fbdd1e001f033ccf6-json.log"

Then see the content of the log file:

$ sudo cat /var/lib/docker/containers/62fdb19e88ded813969915891f6a2dec6a83cb3a28c1054fbdd1e001f033ccf6/62fdb19e88ded813969915891f6a2dec6a83cb3a28c1054fbdd1e001f033ccf6-json.log

{"log":"You need to specify PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD environment variables\n","stream":"stdout","time":"2019-06-05T10:14:22.078604896Z"}

Troubleshooting #2


If the Dockerfile specifies an executable to be run via CMD but some of that executable's dependencies are missing (e.g. the base image is Alpine, so some libs are not installed), running the container might report an error:

$ docker run  --rm -it --mount type=bind,src="$(pwd)/data-vol",target=/data-vol  --name myapp docker.example.com/myapps/myapp:build-28
standard_init_linux.go:211: exec user process caused "no such file or directory"

To verify that it is indeed the binary specified in CMD that is causing problems, we can override CMD by specifying /bin/sh as the executable, so we can browse the container's file system:

$ docker run  --rm -it --mount type=bind,src="$(pwd)/data-vol",target=/data-vol  --name myapp docker.example.com/myapps/myapp:build-28 /bin/sh
/ # ls -la
total 8404
drwxr-xr-x    1 root     root          4096 Jul 25 13:27 .
drwxr-xr-x    1 root     root          4096 Jul 25 13:27 ..
-rwxr-xr-x    1 root     root             0 Jul 25 13:27 .dockerenv
drwxr-xr-x    2 root     root          4096 Jul 11 17:29 bin

-rwxr-xr-x    1 root     root       8532786 Jul 24 17:22 myapp
/ # ./myapp 
/bin/sh: ./myapp: not found
/ # myapp

/bin/sh: myapp: not found
...

Troubleshooting #3

If the app started with CMD doesn't have the required access privileges, docker run will output an error similar to this:

error TS5033: Could not write file '/home/node/myapp/build/index.js': EACCES: permission denied, open '/home/node/myapp/build/index.js'.

To find out the permissions on /home/node/myapp/, you can temporarily change the CMD in the Dockerfile:

CMD [ "ls", "-la", "/home/node/myapp" ]

Now rebuild the image and run the container - the output will show the owners of that directory and the files in it:

$ docker build -t bkomazec/myapp .
$ docker run --user node --rm -it --name running-myapp bkomazec/myapp