TLS on a Simple Dockerized WordPress VM (Certbot + Nginx)

This note documents how TLS was issued, configured, and made fully automatic for a WordPress site running on a single Ubuntu VM with Docker, Nginx, PHP-FPM, and MariaDB.

The goal was boring, predictable HTTPS — no load balancers, no Front Door, no App Service magic.


Architecture Context

  • Host: Azure Ubuntu VM (public IP)
  • Web server: Nginx (Docker container)
  • App: WordPress (PHP-FPM container)
  • DB: MariaDB (container)
  • TLS: Let’s Encrypt via Certbot (host-level)
  • DNS: Azure DNS → VM public IP
  • Ports:
    • 80 → HTTP (redirect + ACME challenge)
    • 443 → HTTPS

1. Certificate Issuance (Initial)

Certbot was installed on the VM (host), not inside Docker.

Initial issuance was done using standalone mode (fine for a one-time issuance):

sudo certbot certonly \
  --standalone \
  -d shahzadblog.com

This required:

  • Port 80 temporarily free
  • Docker/nginx stopped during issuance
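
In practice that meant briefly stopping the web container around issuance, roughly like this (a sketch, assuming docker compose v2 and that the compose project lives in ~/wp-docker):

cd ~/wp-docker
docker compose stop nginx        # free port 80 for certbot's standalone listener
sudo certbot certonly --standalone -d shahzadblog.com
docker compose start nginx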

Resulting certs live at:

/etc/letsencrypt/live/shahzadblog.com/
  ├── fullchain.pem
  └── privkey.pem

2. Nginx TLS Configuration (Docker)

Nginx runs in Docker and mounts the host cert directory read-only.

Docker Compose (nginx excerpt)

nginx:
  image: nginx:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./wordpress:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt:/etc/letsencrypt:ro

Nginx config (key points)

  • Explicit HTTP → HTTPS redirect
  • TLS configured with Let’s Encrypt certs
  • HTTP left available only for ACME challenges

# HTTP (ACME + redirect)
server {
    listen 80;
    server_name shahzadblog.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
        allow all;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS
server {
    listen 443 ssl;
    http2 on;

    server_name shahzadblog.com;

    ssl_certificate     /etc/letsencrypt/live/shahzadblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shahzadblog.com/privkey.pem;

    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass wordpress:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
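
After any config change, nginx inside the container can be checked and reloaded without restarting the stack (compose v2 syntax, service named nginx as above):

docker compose exec nginx nginx -t         # validate configuration
docker compose exec nginx nginx -s reload  # apply without downtime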

3. Why Standalone Renewal Failed

Certbot auto-renew initially failed with:

Could not bind TCP port 80

Reason:

  • Docker/nginx already listening on port 80
  • Standalone renewal always tries to bind port 80

This is expected behavior.


4. Switching to Webroot Renewal (Correct Fix)

Instead of stopping Docker every 60–90 days, renewal was switched to webroot mode.

Key Insight

Certbot (host) and Nginx (container) must point to the same physical directory.

  • Nginx serves:
    ~/wp-docker/wordpress → /var/www/html (container)
  • Certbot must write challenges into:
    ~/wp-docker/wordpress/.well-known/acme-challenge
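
With those paths aligned, a one-off webroot issuance/renewal uses Certbot's standard flags:

sudo certbot certonly \
  --webroot \
  -w /home/azureuser/wp-docker/wordpress \
  -d shahzadblog.com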

5. Renewal Config Fix (Critical Step)

Edit the renewal file:

sudo nano /etc/letsencrypt/renewal/shahzadblog.com.conf

Change:

authenticator = standalone

To:

authenticator = webroot
webroot_path = /home/azureuser/wp-docker/wordpress

⚠️ Do not use /var/www/html here — that path exists only inside Docker.


6. Filesystem Permissions

Because Docker created WordPress files as root, the ACME path had to be created with sudo:

sudo mkdir -p /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge
sudo chmod -R 755 /home/azureuser/wp-docker/wordpress/.well-known

Validation test:

echo test | sudo tee /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt
curl http://shahzadblog.com/.well-known/acme-challenge/test.txt

Expected output:

test
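
Clean up the test file afterwards:

sudo rm /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt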

7. Final Renewal Test (Success Condition)

sudo certbot renew --dry-run

Success message:

Congratulations, all simulated renewals succeeded!

At this point:

  • Certbot timer is active
  • Docker/nginx stays running
  • No port conflicts
  • No manual intervention required
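
To confirm the timer is active (apt installs of Certbot use certbot.timer; snap installs use snap.certbot.renew.timer):

systemctl status certbot.timer

One caveat: nginx only reads certificates at startup, so a renew hook that reloads the container keeps renewals truly hands-off. A sketch, added to the same renewal file as in step 5 (compose file path assumed):

renew_hook = docker compose -f /home/azureuser/wp-docker/docker-compose.yml exec nginx nginx -s reload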

Final State (What “Done” Looks Like)

  • 🔒 HTTPS works in all browsers
  • 🔁 Cert auto-renews in background
  • 🐳 Docker untouched during renewals
  • 💸 No additional Azure services
  • 🧠 Minimal moving parts

Key Lessons

  • Standalone mode is fine for first issuance, not renewal
  • In Docker setups, filesystem alignment matters more than ports
  • Webroot renewal is the simplest long-term option
  • Don’t fight permissions — use sudo intentionally
  • “Simple & boring” scales better than clever abstractions

This setup is intentionally non-enterprise, low-cost, and stable — exactly what a long-running personal site needs.

Upgrade Debian from bullseye to bookworm and PVE7 to PVE8

Here is a short checklist to upgrade Debian to the latest bookworm release.

Proxmox releases track the latest Debian stable version. I am running bullseye and need to upgrade to bookworm.

Run checklist (a small script that comes with Proxmox):

pve7to8

Fix any errors and warnings reported by the script.

Next, update the configured repositories for Debian and Proxmox:

1. Update the configured APT repositories and bring the node fully up to date on PVE 7:
   apt update
   apt dist-upgrade
   pveversion

   This should report 7.4-15 or newer.

2. Ceph
   nano /etc/apt/sources.list.d/ceph.list
   Make sure there is just one entry.
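
   For the bookworm upgrade, the Proxmox wiki's no-subscription entry looks like this (the release name, e.g. quincy, depends on the Ceph version you run):

   deb http://download.proxmox.com/debian/ceph-quincy bookworm no-subscription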

3. Bullseye to bookworm
   nano /etc/apt/sources.list
   Or better, run this command to search and replace bullseye with
   bookworm:

   sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
   Resulting /etc/apt/sources.list:
   # security updates
   #deb http://security.debian.org bookworm-security main contrib

   # My repo changes
   deb http://deb.debian.org/debian/ bookworm main contrib non-free
   deb http://deb.debian.org/debian/ bookworm-updates main non-free contrib
   # security updates
   deb http://security.debian.org/debian-security bookworm-security main contrib non-free

   # PVE pve-no-subscription repository provided by proxmox.com,
   # NOT recommended for production use
   deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

4. APT repositories
   I don't have any special repositories here, so there is nothing to change for this step.
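
   With all repositories now pointing at bookworm, pull the new package lists and run the actual upgrade:

   apt update
   apt dist-upgrade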

Install this package if the box boots via EFI:

apt install grub-efi-amd64

To clear Ceph warnings, reset the Ceph monitor on the node.

Remove any unused packages with this command:

apt autoremove

Re-run the scan:

pve7to8

Make sure to disable the enterprise repository if you are running without a subscription.

Modify the enterprise repo:

nano /etc/apt/sources.list.d/pve-enterprise.list

and add a # at the beginning of the entry, then save the file.
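
The commented-out entry should then look roughly like this:

# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise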

Restart your nodes one by one.

References

https://pve.proxmox.com/wiki/Upgrade_from_7_to_8

https://pve.proxmox.com/wiki/Ceph_Nautilus_to_Octopus

UBUNTU disk size increase

To increase the disk size, first check the current disk status:

df -h

If it’s a VM, make sure the VM has enough space allocated before performing the next steps.

Here’s the list of steps for a simple scenario where you have two partitions: /dev/sda1 is an ext4 partition the OS is booted from and /dev/sda2 is swap. For this exercise we want to remove the swap partition and extend /dev/sda1 to the whole disk.

  1. As always, make sure you have a backup of your data; we’re going to modify the partition table, and a single typo could wipe it all.
  2. Run sudo fdisk /dev/sda
    • use p to list the partitions. Make note of the start cylinder of /dev/sda1
    • use d to delete first the swap partition (2) and then the /dev/sda1 partition (1). This looks scary but is actually harmless: nothing is written until you confirm the changes with w.
    • use n to create a new primary partition. Make sure its start cylinder is exactly the same as the old /dev/sda1. For the end cylinder, accept the default, which makes the partition span the whole disk.
    • use a to toggle the bootable flag on the new /dev/sda1
    • review your changes, take a deep breath, and use w to write the new partition table to disk. You’ll get a message saying the kernel couldn’t re-read the partition table because the device is busy; that’s OK.
  3. Reboot with sudo reboot. When the system boots, you’ll have a smaller filesystem living inside a larger partition.
  4. The next magic command is resize2fs. Run sudo resize2fs /dev/sda1: with no size argument it grows the filesystem to take all available space on the partition.
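
After the reboot and resize, it is worth confirming the new layout and filesystem size:

lsblk /dev/sda
df -h /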

That’s it, we’ve just resized a partition on which Ubuntu is installed, without booting from an external drive.

https://askubuntu.com/questions/116351/increase-partition-size-on-which-ubuntu-is-installed

Setting Traefik on unRAID

This is a basic Traefik setup. Follow these steps to set up Traefik as a reverse proxy on unRAID.

We will be using Traefik 2.x as a reverse proxy on unRAID 6.9.x, routing to the unRAID UI and the Traefik dashboard to show that traffic can be routed to any container running on unRAID.

DNS records configuration

We need to create DNS records, all pointing to the unRAID box. We will be using unRAID’s default “local” domain on 192.168.1.20. Since we own the foo.com domain, our DNS records would be:

tower.local.foo.com -> 192.168.1.20
traefik-dashboard.local.foo.com -> 192.168.1.20

How and where to configure these depends on your DNS server, for example Pi-hole.
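
With Pi-hole, for instance, a single wildcard entry in a custom dnsmasq file covers every host under the subdomain (a sketch; the file name is arbitrary):

# /etc/dnsmasq.d/02-local-foo.conf
address=/local.foo.com/192.168.1.20

Apply it by restarting the resolver with pihole restartdns.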

Reconfiguring unRAID HTTP Port

The unRAID web UI listens on port 80, but Traefik needs to listen on port 80, so we need to move the UI to a different port.

Go to Settings -> Management Access, and change HTTP port to 8080 from 80.

In case Traefik container is not working, we can always access unRAID server at http://192.168.1.20:8080.

Traefik configuration

In order to configure Traefik we will use a mix of dynamic configuration (via Docker labels) and static configuration (via configuration files).

Place the following yml configuration files in your appdata share.

appdata/traefik/traefik.yml

api:
  dashboard: true
  insecure: true

entryPoints:
  http:
    address: ":80"

providers:
  docker: {}
  file:
    filename: /etc/traefik/dynamic_conf.yml
    watch: true

appdata/traefik/dynamic_conf.yml

http:
  routers:
    unraid:
      entryPoints:
      - http
      service: unraid
      rule: "Host(`tower.local.foo.com`)"
  services:
    unraid:
      loadBalancer:
        servers:
        - url: "http://192.168.1.20:8080/"

Make sure the YAML files use two-space indentation.

Setup Traefik Container

Go to the Docker tab in unRAID and ADD CONTAINER.
We need to fill in the following configuration:

Name: traefik
Repository: traefik:latest
Network Type: bridge

Add a port mapping from 80 → 80, so that Traefik can listen for incoming HTTP traffic.

Add a path where we mount our /mnt/user/appdata/traefik to /etc/traefik so that Traefik can actually read our configuration.

Add another path where we mount our Docker socket /var/run/docker.sock to /var/run/docker.sock. Read-only is sufficient here.

This is required so Traefik can listen for new containers and read their labels, which drives the dynamic configuration. We are using this exact mechanism to expose the Traefik dashboard now.

Add a label
• key = traefik.http.routers.api.entrypoints
• value = http

Add another label
• key = traefik.http.routers.api.service
• value = api@internal

And a final label
• key = traefik.http.routers.api.rule
• value = Host(`traefik-dashboard.local.foo.com`)

With those three labels in place, the container configuration is complete.
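
For reference, the equivalent setup outside the unRAID UI would be roughly this docker run (a sketch of the same ports, paths, and labels as above):

docker run -d --name traefik \
  -p 80:80 \
  -v /mnt/user/appdata/traefik:/etc/traefik \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -l 'traefik.http.routers.api.entrypoints=http' \
  -l 'traefik.http.routers.api.service=api@internal' \
  -l 'traefik.http.routers.api.rule=Host(`traefik-dashboard.local.foo.com`)' \
  traefik:latest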

Run the container and view its log to make sure it is running. The log will keep scrolling as Traefik picks up events; Traefik is up and running.

Open a browser: we are now able to access unRAID at http://tower.local.foo.com, and the Traefik dashboard at http://traefik-dashboard.local.foo.com.

Proxying any Container

In order to add another container to our Traefik configuration we simply need to add a single label to it.

Assuming we have a Portainer container running, we can add a label with

  • key = traefik.http.routers.portainer.rule
  • value = Host(`portainer.local.foo.com`)

If our container is only exposing a single port, Traefik is smart enough to pick it up, and no other configuration is required.

If the Portainer container exposed multiple ports, but the web UI were accessible on port 9000, we would need to add an additional label with

  • key = traefik.http.services.portainer.loadbalancer.server.port
  • value = 9000

For external hosts to take advantage of Traefik, point their DNS entries at the Traefik host. We also have to define a matching router and service in Traefik’s dynamic configuration file.
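
For example, extending dynamic_conf.yml with a hypothetical external host (the nas name, IP, and port are assumptions for illustration):

http:
  routers:
    nas:
      entryPoints:
      - http
      service: nas
      rule: "Host(`nas.local.foo.com`)"
  services:
    nas:
      loadBalancer:
        servers:
        - url: "http://192.168.1.30:5000/"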

Resources

https://datosh.github.io/post/unraid_reverse_traefik/


Move Pi-hole databases and lists to a different location

Create a new folder in the new location, for example pihole-db.

mkdir pihole-db
# make sure folder has this permission
chmod 775 pihole-db
# change user/group to pihole on this folder
chown pihole:pihole pihole-db

We will copy each database to pihole-db and replace the original in /etc/pihole with a symlink (symbolic link).

https://unix.stackexchange.com/questions/218557/how-to-change-ownership-of-symbolic-links

# pihole-FTL.db
# stop the Pi-hole service
sudo service pihole-FTL stop
sudo cp /etc/pihole/pihole-FTL.db /srv/pihole-db
sudo chown pihole:pihole /srv/pihole-db/pihole-FTL.db
# remove the original so the symlink can take its name
sudo rm /etc/pihole/pihole-FTL.db
# create the link in /etc/pihole
cd /etc/pihole
sudo ln -s /srv/pihole-db/pihole-FTL.db pihole-FTL.db
# change owner/group of the symlink itself
sudo chown -h pihole:pihole pihole-FTL.db

# start the service
sudo service pihole-FTL start
# check service status
# systemctl status pihole-FTL

Open browser, navigate to a site and see if pihole-FTL works.

pihole-FTL is working again. Let’s move the others:

# gravity.db
sudo service pihole-FTL stop
sudo cp /etc/pihole/gravity.db /srv/pihole-db
ls -l /srv/pihole-db
sudo chown pihole:pihole /srv/pihole-db/gravity.db
sudo rm /etc/pihole/gravity.db
# create the symlink in /etc/pihole
cd /etc/pihole
sudo ln -s /srv/pihole-db/gravity.db gravity.db
# change owner/group of the symlink
sudo chown -h pihole:pihole gravity.db

# verify
sudo service pihole-FTL start

# macvendor.db
sudo service pihole-FTL stop
sudo cp /etc/pihole/macvendor.db /srv/pihole-db
ls -l /srv/pihole-db

sudo chown pihole:pihole /srv/pihole-db/macvendor.db
sudo rm /etc/pihole/macvendor.db
# create the symlink in /etc/pihole
cd /etc/pihole
sudo ln -s /srv/pihole-db/macvendor.db macvendor.db
sudo chown -h pihole:pihole macvendor.db
# verify
sudo service pihole-FTL start

# list.1.raw.githubusercontent.com.domains
sudo service pihole-FTL stop
sudo cp /etc/pihole/list.1.raw.githubusercontent.com.domains /srv/pihole-db
ls -l /srv/pihole-db

sudo rm /etc/pihole/list.1.raw.githubusercontent.com.domains
# create the symlink in /etc/pihole
cd /etc/pihole
sudo ln -s /srv/pihole-db/list.1.raw.githubusercontent.com.domains list.1.raw.githubusercontent.com.domains
# verify
sudo service pihole-FTL start
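
Since every file follows the same stop/copy/chown/remove/link pattern, the moves can also be scripted in one pass (a sketch of the exact same steps):

sudo service pihole-FTL stop
for f in gravity.db macvendor.db list.1.raw.githubusercontent.com.domains; do
  sudo cp /etc/pihole/$f /srv/pihole-db/
  sudo chown pihole:pihole /srv/pihole-db/$f
  sudo rm /etc/pihole/$f
  sudo ln -s /srv/pihole-db/$f /etc/pihole/$f
  sudo chown -h pihole:pihole /etc/pihole/$f
done
sudo service pihole-FTL start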

Make sure you have changed the owner and group of the symbolic links to these databases.

https://www.freecodecamp.org/news/symlink-tutorial-in-linux-how-to-create-and-remove-a-symbolic-link/

sudo chown -h pihole:pihole pihole-FTL.db
sudo chown -h pihole:pihole macvendor.db
sudo chown -h pihole:pihole gravity.db

Make sure the files and symlinks show the expected owner and permissions (pihole:pihole); check with ls -l /etc/pihole and ls -l /srv/pihole-db.

To reset permissions on a database, run this command:

chmod 664 gravity.db

Your filesystem is now modified: the databases live in /srv/pihole-db, with symlinks pointing to them from /etc/pihole.

To rebuild the gravity database, run this and check the timestamp afterwards:

pihole -g

https://discourse.pi-hole.net/t/gravity-database/46182

Resources

https://www.cyberciti.biz/faq/linux-log-files-location-and-how-do-i-view-logs-files/