# Docker

> Docker is an OS‑level virtualization (or containerization) platform, which allows applications to share the host OS kernel instead of running a separate guest OS like in traditional virtualization. This design makes Docker containers lightweight, fast, and portable, while keeping them isolated from one another.

### Why use Docker?

* **Portability**: Runs anywhere: your local machine, the cloud, on‑prem servers.
* **Consistency**: Same behavior in development, testing, and production.
* **Lightweight**: No full OS per app; containers share the host kernel.
* **Scalability**: Ideal for microservices and orchestrators like Kubernetes and Docker Swarm.
* **Efficiency**: Starts in seconds, uses fewer system resources.

### Docker Set Up

We can install Docker using `apt`. First, set up Docker's `apt` repository.

```bash
# Add Docker's official GPG key:
sudo apt update
sudo apt install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Signed-By: /etc/apt/keyrings/docker.asc
EOF

sudo apt update
```
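To confirm that `apt` now sees Docker's repository, you can check where the `docker-ce` package would come from (a quick sanity check; the exact versions shown will differ on your machine):

```shell
# Ask apt which versions of docker-ce are available and from which repo;
# the candidate should come from download.docker.com
apt-cache policy docker-ce
```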

*Generally, best practice dictates that you should **not** blindly copy and paste commands from the internet… but you trust me, right? ;)*

Next, let's install Docker:

```bash
sudo apt install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

We can then check that Docker is running:

```bash
sudo systemctl status docker
```

We can now add your user to the `docker` group, so that you can run `docker` commands without `sudo` (note: membership in the `docker` group is effectively root-equivalent on the host):

```bash
sudo usermod -aG docker <username>
exit
```

SSH back into the server and you should be able to run:

```bash
docker ps
```

Full documentation for setting up Docker on Ubuntu:

{% embed url="https://docs.docker.com/engine/install/ubuntu/" %}

***

## Brief Introduction to Docker

{% hint style="info" %}
The **following subsections 1-8 are not necessary for set up.** This is for your learning!
{% endhint %}

{% stepper %}
{% step %}

### How does Docker work?

Docker is a **containerization** platform that allows applications to run in **containers**.

Think of containers as micro virtual machines: each container lives in its own isolated namespace, with its own processes, networking, users, etc.

This makes containers very lightweight (*and fast*) compared to virtual machines.
{% endstep %}

{% step %}

### Images & Docker Hub

Docker images are templates for creating containers. The image specifies things like which files and applications (binaries) should exist in the container, and how to start it.

Images are normally stored in **registries**. The most common registry is Docker Hub:

<https://hub.docker.com/>

```docker
FROM python:3.14.3-alpine

# equiv to mkdir app; cd app/
WORKDIR /app 

# copy files from the host machine into image
COPY . . 
# tell docker that port 8000 is exposed
EXPOSE 8000
# command to run
CMD ["python3", "-m", "http.server", "8000"]
```

* `python:3.14.3-alpine`:
  * `python` is the image to pull from the registry; Docker pulls from Docker Hub by default.
  * `3.14.3-alpine` is the image tag, a label assigned to a particular version (and variant, here Alpine-based) of an image.

This builds the above Dockerfile into an image named `mypython`:

```bash
docker build -f Dockerfile -t mypython .
docker image ls
```

{% endstep %}

{% step %}

### Containers

Containers are running instances of an image. On startup, a container runs the `CMD` instruction specified in the image.

{% hint style="warning" %}
Containers are ephemeral by default! All data is lost when the container is removed.
{% endhint %}

* ```bash
  docker run hello-world
  ```
  * finds the image hello-world, and runs it until it completes
* ```bash
  docker container ls -a
  docker container rm <container_name>
  ```
  * Lists all containers, running and stopped.&#x20;
  * Deletes a given container
* ```bash
  docker run -i -t --rm ubuntu
  ```
  * finds the image `ubuntu`, attaches your input and a terminal, and runs it. When you exit, the container is deleted.
  * `-i` : runs an interactive process (what you type is passed into the container)
  * `-t` : attaches a pseudo-terminal
  * `--rm` : deletes the container automatically when it stops
* ```bash
  docker run -d --name mynginx nginx
  ```
  * finds the image `nginx` (a web server), creates a container named `mynginx`, and runs it detached, in the background
  * `-d` : run detached, running in the background instead of attached to your terminal
* ```bash
  docker ps
  docker stats
  ```
  * `docker ps` lists the currently running containers; `docker stats` shows their live resource usage.
* ```bash
  docker stop mynginx
  ```
  * stops the running container (doesn't delete it!)
{% endstep %}

{% step %}

### Volumes

Volumes are how you persist data beyond a container's lifecycle.

Volumes are commonly used for databases, sharing files between the host and a container, etc.

```bash
docker volume create mydata
docker run -v mydata:/data postgres   # named volume
docker run -v $(pwd):/app node        # bind mount
```

<table><thead><tr><th width="90">Type</th><th>Description</th></tr></thead><tbody><tr><td>Volume</td><td>Docker managed persistent storage</td></tr><tr><td>Bind Mount</td><td>Mount your host directory into the container</td></tr></tbody></table>
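To see persistence in action, here is a small sketch (using the `alpine` image for brevity): the file written by the first container survives into the second, because both mount the same volume.

```shell
docker volume create mydata

# Write a file into the volume from one throwaway container...
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting'

# ...and read it back from a completely different container
docker run --rm -v mydata:/data alpine cat /data/greeting
```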
{% endstep %}

{% step %}

### Ports

Containers run in their own isolated networks, so their ports are not accessible by default. We need to specify which ports to *publish* when we run the container.

```bash
docker run -d -p 127.0.0.1:8080:80 nginx
```

* `8080:80` : This follows a `host:container` structure. Your local machine's port `8080` is bound to the container's port `80`, so all traffic between those two ports is piped.
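With the `nginx` container from the command above running, you can check the published port from the host (assuming `curl` is installed):

```shell
# nginx listens on port 80 inside the container; we reach it via the
# published host port 8080
curl -sI http://127.0.0.1:8080/ | head -n 1
```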

{% hint style="danger" %}
Docker by default ignores the rules set by ufw. Read more here: <https://github.com/chaifeng/ufw-docker>

So you can run&#x20;

```
docker run -d -p 8080:80 nginx
```

but port `8080` will then be exposed to the internet, even if ufw is configured to block it.
{% endhint %}

<details>

<summary>You can access a container's ports without publishing it, but it isn't really recommended</summary>

```bash
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <containername>
# get the ip address from that
curl http://<ip>:<port>
```

</details>
{% endstep %}

{% step %}

### Environment variables

Similar to environment variables in your host system, we can pass environment variables to containers as well. A lot of applications use environment variables for configuration.&#x20;

```bash
docker run -e POSTGRES_PASSWORD=boo -e POSTGRES_USER=someone postgres
```
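As a quick sketch of how a container sees these variables (`GREETING` is just a made-up example variable):

```shell
# printenv inside the container shows the variable we passed in with -e
docker run --rm -e GREETING=hello alpine printenv GREETING
```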

{% endstep %}

{% step %}

### Networking

Docker provides built-in networking. Networking is a very complex topic that we do not have the time or the capabilities to explain...

Containers on the same network can see and talk to each other; on user-defined networks, they can reach each other by container name.

```bash
docker network create -d bridge mynetwork
docker run --network mynetwork nginx
```
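As a sketch of containers talking to each other by name (on user-defined networks, Docker's built-in DNS resolves container names):

```shell
docker network create mynetwork
docker run -d --network mynetwork --name web nginx

# From another container on the same network, the name "web" resolves
docker run --rm --network mynetwork alpine wget -qO- http://web >/dev/null && echo "web is reachable"

# Clean up
docker rm -f web
docker network rm mynetwork
```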

{% endstep %}

{% step %}

### Docker Compose

Docker Compose is a tool for running **multi-container applications**.

Multi-container applications? Example:&#x20;

* your application, written in python
* `postgres` which your application talks to
* `redis` which your application uses as a cache

All of these can be bundled and *orchestrated* together using docker compose!

Another side effect: All the commands above? We don't need to run any of them! WAHOOOO
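As a sketch of what a compose file for the example above might look like (the service names, password, and `DATABASE_URL` here are illustrative, not a working setup):

```yaml
services:
  app:                    # your Python application
    build: .              # built from a Dockerfile in this directory
    ports:
      - "127.0.0.1:8000:8000"
    environment:
      DATABASE_URL: postgres://someone:boo@db:5432/app
    depends_on: [db, cache]

  db:                     # postgres, reachable from app as "db"
    image: postgres
    environment:
      POSTGRES_USER: someone
      POSTGRES_PASSWORD: boo
    volumes:
      - dbdata:/var/lib/postgresql/data

  cache:                  # redis, reachable from app as "cache"
    image: redis

volumes:
  dbdata:
```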
{% endstep %}
{% endstepper %}

## Let's Run Docker Compose for Portainer

```bash
cd /home/<username> &&
mkdir -p docker-compose/portainer &&
cd docker-compose/portainer &&
vim docker-compose.yml
```

{% code title="docker-compose.yml" %}

```yaml
services: # each item under services would be a container
  portainer:
    container_name: portainer
    image: portainer/portainer-ce:sts
    restart: always # what to do when the container crashes / exits
    
    volumes: # where to mount the volumes into the container
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    ports: # what ports we want to publish
      - "127.0.0.1:9443:9443"

volumes: # docker volume create ...
  portainer_data:
    name: portainer_data

networks: # docker network create ...
  default:
    name: portainer_network
```

{% endcode %}

```bash
docker compose up -d ## deploy the docker-compose stack, detached

## we can run compose down to shut down the docker containers defined in the docker-compose file
## docker compose down
```
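A few other compose commands that come in handy once the stack is up (run from the same directory as the `docker-compose.yml`):

```shell
docker compose ps       # list the containers managed by this compose file
docker compose logs -f  # follow the logs of all services
docker compose pull     # pull newer versions of the images
docker compose up -d    # recreate containers whose images changed
```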

## Useful Docker Commands

<table><thead><tr><th width="130">Command</th><th>Description</th></tr></thead><tbody><tr><td><code>docker run</code></td><td>Launches a container from an image, with optional runtime options and a command</td></tr><tr><td><code>docker pull</code></td><td>Fetches a container image from a registry like Docker Hub to the local machine</td></tr><tr><td><code>docker ps</code></td><td>Displays running containers along with key information like container ID, image used, and status</td></tr><tr><td><code>docker stats</code></td><td>Shows live resource usage of the currently running containers</td></tr><tr><td><code>docker stop</code></td><td>Halts running containers, gracefully shutting down the processes within them</td></tr><tr><td><code>docker start</code></td><td>Restarts stopped containers, resuming their operation from the previous state</td></tr></tbody></table>

## Additional Readings

Some content here was based on:

{% embed url="https://www.geeksforgeeks.org/devops/introduction-to-docker/" %}

{% embed url="https://docs.docker.com/reference/compose-file/" %}
