CLI Cheat Sheet

General

# Get help with Docker. Can also use --help on all subcommands
docker --help
docker COMMAND --help

# Show the Docker version information
docker version

# Start the docker daemon
dockerd

# Display system-wide information
docker info

# Get real time events from the server
docker events [OPTIONS]

# Remove unused data (stopped containers, dangling images, unused networks, build cache)
docker system prune

Images

Docker images are a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.

# Build an Image from a Dockerfile
docker build -t <image_name> .

# Build an Image from a Dockerfile without the cache
docker build -t <image_name> . --no-cache

# List local images
docker images

# Delete an Image
docker rmi <image_name>

# Remove dangling images (add -a to remove all unused images)
docker image prune

# Tag an image to a name (local or registry)
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

# Show the history of an image
docker history [OPTIONS] IMAGE

# Import the contents from a tarball to create a filesystem image
docker import [OPTIONS] file|URL|- [REPOSITORY[:TAG]]

# Load an image from a tar archive or STDIN
docker load -i [tar-file]

# Save an image to a tar archive file
docker save [image] > [tar-file]

# Create a container without running
docker create [image]

Containers

A container is a runtime instance of a docker image. Containers isolate software from its environment.

# Create and run a container from an image, with a custom name:
docker run --name <container_name> <image_name>

# Run a container and publish its port(s) to the host
docker run -p <host_port>:<container_port> <image_name>

# Run a container in the background
docker run -d <image_name>

# Run a container in interactive mode
docker run -it <image_name>

# Run a container and remove it after it stops
docker run --rm <image_name>

# Start or stop an existing container:
docker start|stop|restart <container_name> (or <container-id>)

# Kill one or more running containers
docker kill [OPTIONS] CONTAINER [CONTAINER...]

# Attach local standard input, output, and error streams to a running container
docker attach [OPTIONS] CONTAINER

# Remove a stopped container:
docker rm <container_name>

# Open a shell inside a running container:
docker exec -it <container_name> sh

# Fetch and follow the logs of a container:
docker logs -f <container_name>

# To inspect a running container:
docker inspect <container_name> (or <container_id>)

# To list currently running containers:
docker ps

# List all docker containers (running and stopped):
docker ps --all

# View resource usage stats
docker container stats

# Create a new image from a container's changes
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

# Rename a container
docker rename CONTAINER NEW_NAME

# Update configuration of one or more containers
docker update [OPTIONS] CONTAINER [CONTAINER...]

# List port mappings or a specific mapping for the container
docker port CONTAINER [PRIVATE_PORT[/PROTO]]

# Display the running processes of a container
docker top CONTAINER [ps OPTIONS]

# Display a live stream of container(s) resource usage statistics
docker stats

# Inspect changes to files or directories on a container's filesystem
docker diff CONTAINER

# Copy files/folders between a container and the local filesystem
docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH 
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH

Docker Hub

# Log in to Docker Hub
docker login -u <username>

# Publish an image to Docker Hub
docker push <username>/<image_name>

# Search Hub for an image
docker search <image_name>

# Pull an image from Docker Hub
docker pull <image_name>

# Log out of a Docker registry
docker logout

Networks

# List network driver plugins
docker info

# Create bridge network
docker network create -d bridge mynet

# Create overlay network
docker network create -d overlay mynet

# List available networks
docker network ls

# Remove one or more networks
docker network rm [network]

# Show information on one or more networks
docker network inspect [network]

# Connect a container to a network
docker network connect [network] [container]

# Disconnect a container from a network
docker network disconnect [network] [container]

# Remove all the unused docker networks
docker network prune

Volumes

# Create a docker volume
docker volume create [name]

# Create a new container and attach the docker volume to it
docker run -itd --volume [name]:/world nginx

# Show information about volume
docker volume inspect [name]

# Remove a docker volume
docker volume rm [name]

Plugin Management

# Enable a Docker plugin
docker plugin enable [plugin]

# Disable a Docker plugin
docker plugin disable [plugin]

# View details about a plugin
docker plugin inspect [plugin]

# Remove a plugin
docker plugin rm [plugin]

Security

# Scan a docker image
docker scan [image]

# Display the scan result as JSON output
docker scan --json [image]

Docker Compose

# Create and start containers
docker-compose up

# Stop and remove containers, networks
docker-compose down

# Build or rebuild services
docker-compose build

Dockerfile reference

  • FROM [--platform=<platform>] <image>[:<tag>] [AS <name>]:

    This sets the base image for subsequent instructions.

  • WORKDIR /path/to/workdir:

    This sets the working directory for any following RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.

  • USER <user>[:<group>]:

    This sets the user name or UID used when running the image and for any following RUN, CMD, and ENTRYPOINT instructions.

  • ENV <key>=<value>:

    Set environment variables using this instruction. This value will be in the environment for all subsequent instructions in the build stage and can be replaced inline in many as well.

  • EXPOSE <port> [<port>/<protocol>...]:

    You can inform Docker that the container listens on the specified network ports at runtime with this instruction.

  • RUN:

    # shell form, the command is run in a shell
    RUN <command>

    # exec form
    RUN ["executable", "param1", "param2"]
    

    This allows you to execute commands in a new layer on top of the current image and commit the results.

  • ADD/COPY:

    ADD <src>... <dest>

    COPY <src>... <dest>
    

    These instructions let you copy new files, directories, or remote file URLs and add them to the image filesystem.

    According to the Dockerfile best practices guide, we should always prefer COPY over ADD unless we specifically need one of the two additional features of ADD.

    • The ADD instruction supports remote file URLs as <src>.
    • The ADD instruction supports auto-extracting local tar archives.
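
    To sketch the distinction (the file names below are hypothetical):

```dockerfile
# COPY copies files and directories verbatim from the build context
COPY requirements.txt /app/requirements.txt

# ADD can fetch a remote file URL (the downloaded file is NOT unpacked)
ADD https://example.com/archive.tar.gz /tmp/archive.tar.gz

# ADD auto-extracts a recognized local tar archive into the destination
ADD vendor.tar.gz /app/vendor/
```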
  • ENTRYPOINT ["executable", "param1", "param2"]:

    An ENTRYPOINT allows you to configure a container that will run as an executable.

  • VOLUME:

    This creates a mount point and marks it as holding externally mounted volumes.

  • CMD:

    # exec form, this is the preferred form
    CMD ["executable","param1","param2"]

    # as default parameters to ENTRYPOINT
    CMD ["param1","param2"]

    # shell form
    CMD command param1 param2

    # variable expansion needs a shell, invoked explicitly here in exec form
    CMD [ "sh", "-c", "echo $HOME" ]
    

    This specifies the command to run when a container is launched.

    There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
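
    A common pattern combines ENTRYPOINT with CMD, where CMD supplies overridable default arguments; the ping example below is a hypothetical sketch of that pattern:

```dockerfile
FROM alpine:3.19

# The fixed executable for every container started from this image
ENTRYPOINT ["ping", "-c", "3"]

# Default argument to ENTRYPOINT; replaced by any arguments
# passed after the image name in `docker run`
CMD ["localhost"]
```

    With this Dockerfile, docker run <image> pings localhost, while docker run <image> example.com overrides only the CMD part and pings example.com instead.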

Set up Docker environment

Images and containers

  • An image is an executable package that includes everything needed to run an application–the code, a runtime, libraries, environment variables, and configuration files.
  • A container is a runtime instance of an image–what the image becomes in memory when executed. A container runs natively on Linux and shares the kernel of the host machine with other containers.

Install Docker

sudo pacman -S docker
systemctl start docker.service
systemctl enable docker.service

Add your user to the docker group:

$ sudo groupadd docker
$ sudo usermod -aG docker $USER

Optionally, configure registry mirrors in /etc/docker/daemon.json:

sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://registry.docker-cn.com", "http://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

If the daemon fails to start via the docker service, debug it by running dockerd directly:

sudo dockerd

Docker command

## List Docker CLI commands
$ docker                    
$ docker container --help
## Display Docker version and info
$ docker --version
$ docker info                                                             
## Execute Docker image
$ docker run [OPTIONS] IMAGE                                     
             --rm    Automatically remove the container when it exits
             -i    Keep STDIN open even if not attached
             -t    Allocate a pseudo-TTY
             -d    Run container in background and print container ID
             -p 4000:80    Map your machine’s port 4000 to the container’s published port 80
## List Docker images
$ docker image ls                                                                      
## List Docker containers (running, all, all in quiet mode)                           
$ docker container ls
$ docker container ls --all
$ docker container ls -aq

Build an image and run it as one container

The portable images are defined by something called a Dockerfile.

Define a container with Dockerfile

Create a file called Dockerfile, copy-and-paste the following content into that file, and save it.

# Use an official Python runtime as a parent image
FROM python:2.7-slim

# Set the working directory to /app
WORKDIR /app

# Copy the current directory contents into the container at /app
ADD . /app

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]

For DNS settings or proxy server settings, refer to the documentation.

Build the app

## Create a Docker image and tag it using -t followed by a tag name
$ docker build -t friendlyhello .
$ docker image ls

Note how the tag defaulted to latest. The full syntax for the tag option would be something like --tag=friendlyhello:v0.0.1.

Run the app

## Mapping local machine’s port 4000 to the container’s port 80                        
$ docker run -p 4000:80 friendlyhello
## Run the app in the background using detached mode
$ docker run -d -p 4000:80 friendlyhello

Stop the container

$ docker container ls
$ docker container stop <container_hash_id>                                             

Manage the container and image

docker container kill <container_hash_id>   # Force shutdown of the specified container
docker container rm <container_hash_id>  # Remove specified container from this machine
docker container rm $(docker container ls -a -q)                # Remove all containers
docker image ls -a                                    # List all images on this machine
docker image rm <image id>                   # Remove specified image from this machine
docker image rm $(docker image ls -a -q)          # Remove all images from this machine

Share your image

docker login                    # Log in this CLI session using your Docker credentials
docker tag <image_name> username/repository:tag    # Tag <image> for upload to registry
docker push username/repository:tag                   # Upload tagged image to registry
docker run username/repository:tag                          # Run image from a registry

If the image isn’t available locally on the machine, Docker pulls it from the repository.

Scale your app to run multiple containers

Install Docker Compose

$ sudo curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version

Services

In a distributed application, different pieces of the app are called “services.”

Services are really just “containers in production.” A service only runs one image, but it codifies the way that image runs—what ports it should use, how many replicas of the container should run so the service has the capacity it needs, and so on. Scaling a service changes the number of container instances running that piece of software, assigning more computing resources to the service in the process.

Luckily it’s very easy to define, run, and scale services with the Docker platform – just write a docker-compose.yml file.

Define docker-compose.yml file

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "80:80"
    networks:
      - webnet
networks:
  webnet:

This docker-compose.yml tells Docker to:
  • Pull the image from the registry.
  • Run 5 instances of that image as a service called web, limiting each one’s CPU and RAM.
  • Immediately restart containers if one fails.
  • Map port 80 on the host to web’s port 80.
  • Instruct web’s containers to share port 80 via a load-balanced network called webnet. (Internally, the containers themselves publish to web’s port 80 at an ephemeral port.)
  • Define the webnet network with the default settings (which is a load-balanced overlay network).

Run this load-balanced app

$ docker swarm init
$ docker stack ls                 # List stacks or apps
$ docker stack deploy -c docker-compose.yml <set_a_app_name>

A single container running in a service is called a task. Tasks are given unique IDs that numerically increment, up to the number of replicas you defined in docker-compose.yml.

$ docker service ls
$ docker service ps <app_name>   # List tasks associated with app
$ docker inspect <task or container>  # Inspect task or container
$ docker container ls -q

Scale the app by changing the replicas value in docker-compose.yml, and re-running the docker stack deploy command.

Take down the app and the swarm

## Take the app down
$ docker stack rm app_name
## Take down a single-node swarm from the manager
$ docker swarm leave --force

Distribute your app across a cluster

Multi-container, multi-machine applications are made possible by joining multiple machines into a “Dockerized” cluster called a swarm. Once machines have joined a swarm, they are referred to as nodes. Swarm managers are the only machines in a swarm that can execute your commands, or authorize other machines to join the swarm as workers. Workers are just there to provide capacity and do not have the authority to tell any other machine what it can and cannot do.

Install Docker Machine

Set up your swarm

Enabling swarm mode instantly makes the current machine a swarm manager. From then on, Docker runs the commands you execute on the swarm you’re managing, rather than just on the current machine.

Run docker swarm init to enable swarm mode and make your current machine a swarm manager, then run docker swarm join on other machines to have them join the swarm as workers.

Always run docker swarm init and docker swarm join with port 2377 (the swarm management port), or no port at all and let it take the default.

# On swarm manager machine
$ docker swarm init --advertise-addr <local ip>
# On other machine
$ docker swarm join --token <token> <ip>:2377
# On swarm manager machine
$ docker node ls

Deploy the app on the swarm cluster

Run a command that configures your shell to talk to the Docker daemon on the swarm manager (these variables are what docker-machine env <machine> prints):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/sam/.docker/machine/machines/myvm1"
export DOCKER_MACHINE_NAME="myvm1"

Copy files to a machine with docker-machine scp:

$ docker-machine scp <file> <machine>:~
# On swarm manager machine
$ docker stack deploy -c docker-compose.yml <app_name>

If your image is stored on a private registry instead of Docker Hub, you need to be logged in using docker login <your-registry> and then you need to add the --with-registry-auth flag to the docker stack deploy command.

# On swarm manager machine
$ docker stack ps <app_name>

Iterating and scaling your app

Scale the app by changing the docker-compose.yml file, or change its behavior by editing code, then rebuilding and pushing the new image. Use the docker swarm join command to add more machines to the swarm.

After any of these changes, simply run docker stack deploy again, and your app can take advantage of the new resources.

Cleanup and reboot Stacks and swarms

$ docker stack rm <app_name>          # Tear down the stack
$ docker swarm leave --force

Stack services: adding a backend database

A stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together.

Make multiple services relate to each other, and run them on multiple machines.

Add a new service and redeploy

$ cat docker-compose.yml
version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:

Notice two new things here: a volumes key, giving the visualizer access to the host’s socket file for Docker, and a placement key, ensuring that this service only ever runs on a swarm manager – never a worker.

redeploy

$ docker stack deploy -c docker-compose.yml getstartedlab
Updating service getstartedlab_web (id: angi1bf5e4to03qu9f93trnxm)
Creating service getstartedlab_visualizer (id: l9mnwkeq2jiononb5ihz9u7a4)

You saw in the Compose file that visualizer runs on port 8080. Get the IP address of one of your nodes by running docker-machine ls. Go to either IP address at port 8080 and you can see the visualizer running (e.g. 192.168.99.101:8080).

$ docker stack ps getstartedlab

The visualizer is a standalone service that can run in any app that includes it in the stack. It doesn’t depend on anything else.

Persist the data

version: "3"
services:
  web:
    # replace username/repo:tag with your name and image details
    image: username/repo:tag
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
    ports:
      - "80:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - "/home/docker/data:/data"
    deploy:
      placement:
        constraints: [node.role == manager]
    command: redis-server --appendonly yes
    networks:
      - webnet
networks:
  webnet:

Redis has an official image in the Docker library and has been granted the short image name of just redis, so no username/repo notation here. The Redis port, 6379, has been pre-configured by Redis to be exposed from the container to the host.

There are a couple of things in the redis specification that make data persist between deployments of this stack:

  • redis always runs on the manager, so it’s always using the same filesystem.
  • redis accesses an arbitrary directory in the host’s file system as /data inside the container, which is where Redis stores data.

This creates a “source of truth” in your host’s physical filesystem for the Redis data.

This source of truth has two components:

  • The placement constraint you put on the Redis service, ensuring that it always uses the same host.
  • The volume you created that lets the container access ./data (on the host) as /data (inside the Redis container). While containers come and go, the files stored in ./data on the specified host persist, enabling continuity.

Create a ./data directory on the manager:

$ docker-machine ssh myvm1 "mkdir ./data"

redeploy

$ docker stack deploy -c docker-compose.yml getstartedlab

Deploy your app to production

Storage

Storage drivers

Use the Device Mapper storage driver

Decrease the size of the data file using the truncate command

sudo truncate -s 20G /var/lib/docker/devicemapper/data

cheat sheet

Tag and push

docker tag <IMAGE ID> csyezheng/coursera-helper
docker tag <IMAGE ID> csyezheng/coursera-helper:0.12.1
docker push csyezheng/coursera-helper
docker push csyezheng/coursera-helper:0.12.1