Docker #

Refcard #

docker build -t friendlyname .  # Create image using this directory's Dockerfile
docker run -p 4000:80 friendlyname  # Run "friendlyname" mapping port 4000 to 80
docker run -d -p 4000:80 friendlyname         # Same thing, but in detached mode
docker ps                                 # See a list of all running containers
docker stop <hash>                     # Gracefully stop the specified container
docker ps -a           # See a list of all containers, even the ones not running
docker kill <hash>                   # Force shutdown of the specified container
docker rm <hash>              # Remove the specified container from this machine
docker rm $(docker ps -a -q)           # Remove all containers from this machine
docker images -a                               # Show all images on this machine
docker rmi <imagename>            # Remove the specified image from this machine
docker rmi $(docker images -q)             # Remove all images from this machine
docker login             # Log in this CLI session using your Docker credentials
docker tag <image> username/repository:tag  # Tag <image> for upload to registry
docker push username/repository:tag            # Upload tagged image to registry
docker run username/repository:tag                   # Run image from a registry

Difference between docker run and docker start #

  • docker run creates a new container based on an image, hence docker run <image>.
  • docker start starts an already existing container, hence docker start <container>.
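
A quick illustration of the difference (container and image names are arbitrary):

docker run --name web -d nginx   # creates a new container from the nginx image and starts it
docker stop web                  # the container still exists, it is merely stopped
docker start web                 # restarts the existing container; no new container is created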

Attach to a running container #

docker attach CONTAINER_ID

This connects the shell's stdin, stdout and stderr to the container. The --sig-proxy=false parameter is also useful; it prevents the container from being stopped when Ctrl + c is pressed.
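
For example (the container name is made up):

docker attach --sig-proxy=false web   # Ctrl + c detaches instead of stopping the container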

Alternatively, docker exec can be used to start an additional process in the container. This is particularly useful together with the parameters -i (for interactive) and -t (which allocates a pseudo-TTY), since it allows e.g. a second shell to be started in the container: docker exec -it CONTAINER_ID bash # if Bash is installed

Detach from the container: Ctrl + p, Ctrl + q

Start a container with an interactive shell #

docker run -it --entrypoint=/bin/bash IMAGENAME # Bash must be present in the image, of course...

Monitor containers and their processes #

  • docker ps or docker container ls: lists running containers.
    • -a also shows stopped containers.
    • The list can be filtered with --filter value, --latest and --last n
  • docker logs or docker container logs: shows the container logs. The -f parameter follows the log output continuously.
  • docker port or docker container port: shows the container's port mappings
  • docker stats or docker container stats: real-time statistics for the container
  • docker top or docker container top: running processes inside the container. Options are analogous to the Unix ps command.
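
The commands in action (the container name web is hypothetical):

docker ps --filter status=exited   # only list stopped containers
docker logs -f web                 # follow the log output
docker port web                    # e.g. 80/tcp -> 0.0.0.0:8080
docker stats web                   # live CPU, memory and network usage
docker top web                     # processes running inside the container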

Managing images #

Commit changes to an image #

docker commit CONTAINER_ID NEW_IMAGE_NAME

Among others, the following arguments can be passed:

  • -a: author of the commit
  • -c: apply Dockerfile instructions to the created image. The following instructions are supported: CMD, ENTRYPOINT, ENV, EXPOSE, LABEL, ONBUILD, USER, VOLUME, WORKDIR
  • -m: commit message
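
A sketch of a commit using these arguments (container ID, author and image name are made up):

docker commit -a "Jane Doe" -m "add curl" -c 'CMD ["bash"]' 3f2a1b9c jane/debian-curl:v1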

Networking #

Different types of networks:

  • bridge: the default network. Bridge networks are typically used when standalone containers need to communicate with each other
  • host: no network isolation between container and host; the container uses the host's network directly.
  • overlay: connects multiple Docker daemons and allows swarm services to communicate with each other. Also used for communication between swarm services and standalone containers, or between standalone containers on different Docker daemons
  • macvlan: allows assigning a MAC address to a Docker container, making the container appear as a physical device on the network. Mostly for legacy purposes
  • none: no networking
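
As a small illustration of the bridge type (network and container names are made up):

docker network create --driver bridge mynet           # user-defined bridge network
docker run -d --network mynet --name web nginx
docker run --rm --network mynet busybox ping -c1 web  # containers resolve each other by name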

Tips & Tricks #

Show the files in an image #

docker image save image_name > image.tar  # Dump the image, including all layers, to a tarball
tar -xvf image.tar                        # Unpack it for inspection

Show the files in a container #

docker export $(docker ps -lq) | tar tf - | less 

docker ps -lq returns the ID of the most recently created container; it can also be replaced by an explicit ID or a container name.

Show the contents of specific files in a container #

docker export <container-id> | tar xOf - <datei1> <datei2> <...> | less

The tar parameter O writes file contents to stdout.
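
For example (container name and file paths are made up; note that paths inside the export carry no leading slash):

docker export webapp | tar xOf - etc/hostname etc/resolv.conf | less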

No internet access from containers #

Although Docker enables IP forwarding by default, the corresponding sysctl setting is overridden by systemd-networkd. This can be checked with sysctl -a | grep forward.

  • To enable IP forwarding temporarily: sudo sysctl net.ipv4.ip_forward=1 (or, for a specific interface: sudo sysctl net.ipv4.conf.<interface_name>.forwarding=1)
  • To enable it persistently, in /etc/sysctl.d/30-ipforward.conf:

net.ipv4.ip_forward=1
net.ipv6.conf.default.forwarding=1
net.ipv6.conf.all.forwarding=1
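
To apply the file without a reboot and verify the result:

sudo sysctl --system          # reload all sysctl configuration files
sysctl net.ipv4.ip_forward    # should now print: net.ipv4.ip_forward = 1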

Capture traffic in a Docker network #

  1. Find the name of the bridge network in which the container ensemble runs:
    1. docker network ls shows the list of existing networks. The alphanumeric ID (first column) is what matters here
    2. Use ip addr show to look for the corresponding bridge name among the interfaces. It is normally br-<network-id>
  2. Start tcpdump: sudo tcpdump -i <bridge-id> tcp -w <output file>. To restrict the capture to individual ports, extend the filter to tcp port <port number>
  3. tcpdump writes a binary file. It can be viewed e.g. with Wireshark: wireshark <output file>

As an alternative to the tcpdump method, the traffic can also be observed directly in Wireshark. For this, Wireshark must be started with root privileges so that the corresponding interface is listed (and can be captured).
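
The steps combined into a single sketch (network name, port and output file are made up):

NET_ID=$(docker network ls --filter name=mynet -q)
sudo tcpdump -i "br-${NET_ID}" tcp port 8080 -w capture.pcap
wireshark capture.pcap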

Docker Swarm #

Glossary #

  • Node: Instance of the Docker engine participating in the swarm; several nodes can run in parallel on a single physical computer. There are two types of nodes: manager nodes and worker nodes. Manager nodes dispatch units of work called tasks, perform the orchestration and cluster management functions required to maintain the desired state of the swarm, and act as HTTP API endpoints. Manager nodes elect a single leader to conduct orchestration tasks. Worker nodes receive and execute tasks dispatched from manager nodes. By default manager nodes also run services as worker nodes, but they can be configured to act exclusively as managers.
  • Service: Definition of the tasks to execute on the manager or worker nodes. It is the central structure of the swarm system and the primary root of user interaction with the swarm. When you create a service, you specify which container image to use and which commands to execute inside running containers. In the replicated services model, the swarm manager distributes a specific number of replica tasks among the nodes based upon the scale you set in the desired state. For global services, the swarm runs one task for the service on every available node in the cluster. The command docker service (which manages services) can be compared to docker run in a single-node environment.
  • Stack: YAML file which describes all the components and configurations of your Swarm app, and can be used to easily create and destroy your app in any Swarm environment. This file comprises the description of one or more services. The command docker stack (which manages stacks) can be compared to docker-compose in a single-node environment.
  • Swarm: Consists of multiple Docker hosts which run in swarm mode and act as managers (to manage membership and delegation) and/or workers (which run swarm services)
  • Task: Carries a Docker container and the commands to run inside the container (comparable with a “slot”). It is the atomic scheduling unit of swarm. Manager nodes assign tasks to worker nodes according to the number of replicas set in the service scale. Once a task is assigned to a node, it cannot move to another node. It can only run on the assigned node or fail. A task has a lifecycle, in which it goes through a fixed order of states.

Management #

Setup #

The following ports must be open on every host running a node in the swarm in order to allow communication between the nodes and to use the ingress network (see below):

  • Port 2377 for communication between nodes
  • Port 7946 TCP/UDP for container network discovery
  • Port 4789 UDP for the container ingress network
  • (Port 2376 for secure Docker client communication (used by Docker machine))
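
On a host using ufw, opening these ports might look as follows (purely illustrative; adapt to your firewall):

sudo ufw allow 2377/tcp   # cluster management
sudo ufw allow 7946/tcp   # network discovery
sudo ufw allow 7946/udp   # network discovery
sudo ufw allow 4789/udp   # ingress network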

To take advantage of swarm mode’s fault-tolerance features, Docker recommends you implement an odd number of nodes according to your organization’s high-availability requirements. When you have multiple managers you can recover from the failure of a manager node without downtime.

  • A three-manager swarm tolerates a maximum loss of one manager.
  • A five-manager swarm tolerates a maximum simultaneous loss of two manager nodes.
  • An N manager cluster tolerates the loss of at most (N-1)/2 managers.
  • Docker recommends a maximum of seven manager nodes for a swarm

For further detailed instructions, see the official Docker documentation.

Create a first Docker swarm manager node: docker swarm init --advertise-addr <MANAGER-IP>

Connect a worker node to the swarm (copy and paste the join command from the output of the command above): docker swarm join --token <TOKEN> <MANAGER-IP>:2377

For more information on the PKI infrastructure, see the official Docker documentation.

To retrieve this information once again, run the following command on a manager node: docker swarm join-token worker

To join as a manager node, you need to provide the manager token. Extract it this way: docker swarm join-token manager

See status of swarm (scroll to line beginning with Swarm: active): docker info

See information about nodes (works only on a manager node!): docker node ls
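
An end-to-end sketch of the setup (the IP address is made up, the join token abbreviated):

docker swarm init --advertise-addr 192.168.99.100               # on the first manager
docker swarm join --token SWMTKN-1-<token> 192.168.99.100:2377  # on each worker (printed by init)
docker node ls                                                  # back on the manager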

Manage services #

There are two types of service deployments:

  • replicated, which replicates a task a defined number of times (default is 1, i.e. one instance)
  • global, which deploys a task on each node (even on nodes that only join the swarm later). Good candidates for this type of deployment are e.g. monitoring agents.
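
For example (service names and images are illustrative):

docker service create --name web --replicas 3 nginx:alpine             # replicated: exactly 3 tasks
docker service create --name metrics --mode global prom/node-exporter  # global: one task per node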

Start a service (on the manager node): docker service create --replicas <N> --name <SERVICE-NAME> <IMAGE>

List running services (on the manager node): docker service ls

Inspect a running service (on the manager node): docker service inspect --pretty <SERVICE-NAME>

Without the --pretty flag the information is more extensive and represented in the JSON format.

Show on which nodes the service is running (on the manager node): docker service ps <SERVICE-NAME>

To see details about a specific instance (a container) of a service, run docker ps on the respective node.

Scale a service (on the manager node): docker service scale <SERVICE-NAME>=<NUMBER-OF-TASKS>

Delete a service (on the manager node): docker service rm <SERVICE-NAME>

Update service images #

Images on which a service is based can be updated with docker service update --image NAME:TAG <SERVICE-NAME>

Afterwards the scheduler performs the update as follows by default:

  • Stop the first task
  • Schedule update for the stopped task
  • Start the container for the updated task
  • If the update to a task returns RUNNING, start the next task (unless a delay is configured, see below)
  • If, at any time during the update, a task returns FAILED, pause the update (check with docker service inspect). This behaviour can be changed with the flag --update-failure-action.
  • Repeat for each container

Services can be started with a configured --update-delay <TIME><ns|us|ms|s|m|h> (e.g. --update-delay 10m30s). This delays the update of the next container. For other settings see docker service update --help.
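
Putting this together in a sketch (service name and image tags are made up):

docker service create --name web --replicas 3 --update-delay 10s --update-failure-action pause nginx:1.24
docker service update --image nginx:1.25 web   # rolls through the tasks with a 10s pause in between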

Overview of parameters #
docker service create   docker service update                Description
                        --args                               Service command args
--config                --config-add / --config-rm           Configurations to expose to the service
--constraint            --constraint-add / --constraint-rm   Placement constraints

Manage stacks #

docker stack ls              # List all running applications on this Docker host
docker stack deploy -c <composefile> <appname>  # Run the specified Compose file
docker stack services <appname>       # List the services associated with an app
docker stack ps <appname>   # List the running containers associated with an app
docker stack rm <appname>                             # Tear down an application
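
A minimal stack file plus deployment might look like this (app name, image and ports are illustrative):

cat > docker-compose.yml <<'EOF'
version: "3.7"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 2
EOF
docker stack deploy -c docker-compose.yml demo
docker stack services demo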

Manage nodes #

View a list of nodes in the swarm (run from a manager node): docker node ls

Inspect the details of an individual node: docker node inspect self --pretty

Labels can be used to limit critical tasks to nodes that meet certain requirements. To add a node label: docker node update --label-add <KEY=VALUE> <NODE>

Promote a worker node to a manager node or vice versa:

docker node promote <NODE>   # Long form: docker node update --role manager <NODE>
docker node demote <NODE>    # Long form: docker node update --role worker <NODE>

Leave the swarm:

docker swarm leave      # Run on the node that leaves
docker node rm <NODE>   # Run on a manager node afterwards

Drain nodes #

Draining a node means that the node no longer receives new tasks from the swarm manager. It also means the manager stops tasks running on the node and launches replica tasks on a node with ACTIVE availability. This can be useful e.g. for maintenance.

Drain a node: docker node update --availability drain <NODE>

Reactivate a node: docker node update --availability active <NODE>
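
A typical maintenance flow (the node name worker1 is made up):

docker node update --availability drain worker1   # tasks are rescheduled to other nodes
docker node ls                                    # AVAILABILITY column now shows "Drain"
# ... perform the maintenance ...
docker node update --availability active worker1  # node accepts new tasks again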

Routing Mesh #

All nodes participate in an ingress routing mesh. The routing mesh enables each node in the swarm to accept connections on published ports for any service running in the swarm, even if there’s no task running on the node. The routing mesh routes all incoming requests to published ports on available nodes to an active container.

Publish a port for a service with the --publish published=<PUBLISHED-PORT>,target=<CONTAINER-PORT> flag. The published part can be left out (a random high-numbered port is then chosen). Afterwards the service can be reached on the published port on any node. On the swarm nodes themselves the specific port may not actually be bound, but the routing mesh knows how to route the traffic and prevents any port conflicts from happening.
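
For example (service name and ports are made up):

docker service create --name web --replicas 2 --publish published=8080,target=80 nginx:alpine
curl http://<ANY-NODE-IP>:8080   # answers on every node, even those without a web task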

Bypassing the routing mesh for a given service is possible. This is referred to as host mode. To enable this mode, use the long form of the publish flag: --publish published=<PUBLISHED-PORT>,target=<CONTAINER-PORT>,protocol=<PROTOCOL-TYPE>,mode=host.

See the official Docker documentation for how to configure an external load balancer.

Configurations #

It is possible to add configuration files as separate entities to a Docker swarm; they can subsequently be used by services.

  • In a Linux container, a config is normally mounted directly under the root directory: /<config-name>
  • You can set the ownership (uid and gid) for the config, using either the numerical ID or the name of the user or group. You can also specify the file permissions (mode). If not set, the config is owned by the default user and group and is world-readable (unless a umask is set within the container).
  • A node only has access to a config if it is a manager or hosts a container which has access rights to the config
  • Consider adding a version number or date to the config name to easily roll back if needed
  • The use of configs can be declared in a docker-compose.yml file. However, configs can only be used in the context of a Docker swarm

The following actions are available:

docker config create <NAME> <FILE|->  # Create a config
docker config ls                      # List configs
docker config inspect <NAME>          # Inspect a config
docker config rm <NAME>               # Remove a config

Rotate a config:

docker service update --config-rm <OLD-CONFIG> --config-add source=<NEW-CONFIG>,target=<PATH>,mode=<MODE> <SERVICE-NAME>
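
A sketch of such a rotation (config and service names are made up):

echo "listen 8080;" | docker config create app.conf.v2 -   # create the new version
docker service update --config-rm app.conf.v1 --config-add source=app.conf.v2,target=/app.conf web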

To read:

Distributed Volumes #

Routing #

UIs #

Resources #