TLS on a Simple Dockerized WordPress VM (Certbot + Nginx)

This note documents how TLS was issued, configured, and made fully automatic for a WordPress site running on a single Ubuntu VM with Docker, Nginx, PHP-FPM, and MariaDB.

The goal was boring, predictable HTTPS — no load balancers, no Front Door, no App Service magic.


Architecture Context

  • Host: Azure Ubuntu VM (public IP)
  • Web server: Nginx (Docker container)
  • App: WordPress (PHP-FPM container)
  • DB: MariaDB (container)
  • TLS: Let’s Encrypt via Certbot (host-level)
  • DNS: Azure DNS → VM public IP
  • Ports:
    • 80 → HTTP (redirect + ACME challenge)
    • 443 → HTTPS

1. Certificate Issuance (Initial)

Certbot was installed on the VM (host), not inside Docker.

Initial issuance was done using standalone mode (acceptable for first issuance):

sudo certbot certonly \
  --standalone \
  -d shahzadblog.com

This required:

  • Port 80 temporarily free
  • Docker/nginx stopped during issuance
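
Putting it together, a minimal sketch of the first issuance (assuming the compose project lives in ~/wp-docker and the Compose v2 CLI is in use):

cd ~/wp-docker
docker compose stop nginx    # free port 80 for certbot's standalone listener
sudo certbot certonly --standalone -d shahzadblog.com
docker compose start nginx   # bring the site back up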

Resulting certs live at:

/etc/letsencrypt/live/shahzadblog.com/
  ├── fullchain.pem
  └── privkey.pem

2. Nginx TLS Configuration (Docker)

Nginx runs in Docker and mounts the host cert directory read-only.

Docker Compose (nginx excerpt)

nginx:
  image: nginx:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./wordpress:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt:/etc/letsencrypt:ro

Nginx config (key points)

  • Explicit HTTP → HTTPS redirect
  • TLS configured with Let’s Encrypt certs
  • HTTP left available only for ACME challenges

# HTTP (ACME + redirect)
server {
    listen 80;
    server_name shahzadblog.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
        allow all;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS
server {
    listen 443 ssl;
    http2 on;

    server_name shahzadblog.com;

    ssl_certificate     /etc/letsencrypt/live/shahzadblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shahzadblog.com/privkey.pem;

    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass wordpress:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
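
After any config edit, Nginx inside the container can be validated and reloaded in place (assuming the Compose service is named nginx, as in the excerpt above):

docker compose exec nginx nginx -t
docker compose exec nginx nginx -s reload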

3. Why Standalone Renewal Failed

Certbot auto-renew initially failed with:

Could not bind TCP port 80

Reason:

  • Docker/nginx already listening on port 80
  • Standalone renewal always tries to bind port 80

This is expected behavior.
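
To see what actually owns the port, a quick check (ss ships with any modern Ubuntu):

sudo ss -ltnp | grep ':80 '
# expect a docker-proxy process bound to 0.0.0.0:80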


4. Switching to Webroot Renewal (Correct Fix)

Instead of stopping Docker every 60–90 days, renewal was switched to webroot mode.

Key Insight

Certbot (host) and Nginx (container) must point to the same physical directory.

  • Nginx serves:
    ~/wp-docker/wordpress → /var/www/html (container)
  • Certbot must write challenges into:
    ~/wp-docker/wordpress/.well-known/acme-challenge
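
A quick way to verify the two views really are the same directory (service name nginx as in the compose excerpt above):

ls ~/wp-docker/wordpress | head -n 5
docker compose exec nginx ls /var/www/html | head -n 5
# both listings should show the same WordPress files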

5. Renewal Config Fix (Critical Step)

Edit the renewal file:

sudo nano /etc/letsencrypt/renewal/shahzadblog.com.conf

Change:

authenticator = standalone

To:

authenticator = webroot
webroot_path = /home/azureuser/wp-docker/wordpress

⚠️ Do not use /var/www/html here — that path exists only inside Docker.
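
To double-check that the edit took:

sudo grep -E 'authenticator|webroot_path' /etc/letsencrypt/renewal/shahzadblog.com.conf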


6. Filesystem Permissions

Because Docker created WordPress files as root, the ACME path had to be created with sudo:

sudo mkdir -p /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge
sudo chmod -R 755 /home/azureuser/wp-docker/wordpress/.well-known

Validation test:

echo test | sudo tee /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt
curl http://shahzadblog.com/.well-known/acme-challenge/test.txt

Expected output:

test
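
Once curl returns test, the probe file can be removed:

sudo rm /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt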

7. Final Renewal Test (Success Condition)

sudo certbot renew --dry-run

Success message:

Congratulations, all simulated renewals succeeded!

At this point:

  • Certbot timer is active (checked below)
  • Docker/nginx stays running
  • No port conflicts
  • No manual intervention required
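
The timer can be confirmed directly; Ubuntu's certbot package ships a systemd timer (snap installs use a snap timer instead):

systemctl status certbot.timer
systemctl list-timers certbot.timer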

Final State (What “Done” Looks Like)

  • 🔒 HTTPS works in all browsers
  • 🔁 Cert auto-renews in background
  • 🐳 Docker untouched during renewals
  • 💸 No additional Azure services
  • 🧠 Minimal moving parts

Key Lessons

  • Standalone mode is fine for first issuance, not renewal
  • In Docker setups, filesystem alignment matters more than ports
  • Webroot renewal is the simplest long-term option
  • Don’t fight permissions — use sudo intentionally
  • “Simple & boring” scales better than clever abstractions

This setup is intentionally non-enterprise, low-cost, and stable — exactly what a long-running personal site needs.

Install SonarQube using Docker

SonarQube (often just “Sonar”) is a popular open-source platform for continuous inspection of code quality. One of the easiest ways to install and use it is with Docker, a containerization platform that makes it easy to deploy and manage applications.

Prerequisites

Before getting started, you will need to have Docker installed on your machine. If you do not have Docker installed, you can download and install it from the Docker website.

Step 1: Pull the Sonar Docker Image

The first step in installing Sonar with Docker is to pull the Sonar Docker image from the Docker Hub repository. To do this, open a terminal or command prompt and run the following command:

docker pull sonarqube

This will download the latest version of the Sonar Docker image to your machine.

Step 2: Create a Docker Network

Next, we need to create a Docker network that will allow the Sonar container to communicate with the database container. To create a Docker network, run the following command:

docker network create sonar-network

Step 3: Start a Database Container

Sonar requires a database to store its data. In this example we will use PostgreSQL; SonarQube also supports Microsoft SQL Server and Oracle (MySQL support was dropped in SonarQube 7.9). Recent SonarQube images also require a reasonably recent PostgreSQL, so avoid end-of-life versions such as 9.6. To start a PostgreSQL database container, run the following command:

docker run -d --name sonar-db --network sonar-network \
  -e POSTGRES_USER=sonar \
  -e POSTGRES_PASSWORD=sonar \
  -e POSTGRES_DB=sonar \
  postgres:15

Step 4: Start the Sonar Container

Once the database container is running, we can start the Sonar container. To do this, run the following command:

docker run -d --name sonar -p 9000:9000 --network sonar-network \
  -e SONAR_JDBC_URL=jdbc:postgresql://sonar-db:5432/sonar \
  -e SONAR_JDBC_USERNAME=sonar \
  -e SONAR_JDBC_PASSWORD=sonar \
  sonarqube
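
On Linux hosts, SonarQube's embedded Elasticsearch usually also requires raising vm.max_map_count above the default (the exact minimum depends on your SonarQube version; check its documentation):

sudo sysctl -w vm.max_map_count=262144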

Step 5: Access the Sonar Dashboard

Once the Sonar container is running, you can access the Sonar dashboard by opening a web browser and navigating to http://localhost:9000. The default username and password are admin and admin, respectively.
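
Startup takes a minute or two. Readiness can also be checked from the shell via the unauthenticated status endpoint:

curl http://localhost:9000/api/system/status
# returns JSON with "status":"STARTING" while booting, then "status":"UP"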

Copy live WordPress Site and Run inside Docker container

I am going to copy this site and run it inside a Docker container.

STEPS

1-Pull the WordPress and MySQL images using Docker Compose. I am going to use the following docker-compose file:

version: '3.7'

services:
  db:
    image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    container_name: wp-db
    volumes:
      - ./data/wp-db-data:/var/lib/mysql
    networks:
      - default
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecretpassword
      MYSQL_DATABASE: db
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpassword

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    container_name: wordpress
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: db
      WORDPRESS_DB_USER: dbuser
      WORDPRESS_DB_PASSWORD: dbpassword
    volumes:
      - ./data/wp-content:/var/www/html/wp-content
      - ./data/wp-html:/var/www/html
    networks:
      - traefik-public
      - default
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wordpress.entrypoints=http"
      - "traefik.http.routers.wordpress.rule=Host(`wp.dk.tanolis.com`)"
      - "traefik.http.middlewares.wordpress-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.wordpress.middlewares=wordpress-https-redirect"
      - "traefik.http.routers.wordpress-secure.entrypoints=https"
      - "traefik.http.routers.wordpress-secure.rule=Host(`wp.dk.tanolis.com`)"
      - "traefik.http.routers.wordpress-secure.tls=true"
      - "traefik.http.routers.wordpress-secure.service=wordpress"
      - "traefik.http.services.wordpress.loadbalancer.server.port=80"
      - "traefik.docker.network=traefik-public"

networks:
  traefik-public:
    external: true

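2-Start the stack (assuming the file above is saved as docker-compose.yml in the current directory):

docker-compose up -d
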
3-Open the containerized WordPress site and install the “All-in-One WP Migration” plugin.

4-Go to the source WordPress site and install the “All-in-One WP Migration” plugin there as well.

5-Create a file backup (export) on the source site.

6-Try to restore the backup on the target site.

7-You will see an error: the backup exceeds the plugin’s maximum upload file size, so the restore fails.

Increase the upload size limit for the All-in-One WP Migration plugin:

8-We need to increase the restore size. Search for the .htaccess file in your Linux root file system:

# find / -type f -name '.htaccess*'

9-Use the nano editor to open this file (on the host it lives under the wp-html bind mount, e.g. ./data/wp-html/.htaccess):

# nano .htaccess

Place the following code in it after the # END WordPress comment line (these php_value directives take effect because the official wordpress image runs Apache with mod_php):

php_value upload_max_filesize 2048M
php_value post_max_size 2048M
php_value memory_limit 4096M
php_value max_execution_time 0
php_value max_input_time 0

10-Save the file. Reopen the plugin and you will see that you are now allowed to restore up to 2 GB of data.

11-Open the containerized WordPress site and compare it with the live online site.

Congratulations! You’ve done it. You can now easily import any file you’d like using this amazing plugin. Migrating your sites is not a hassle anymore!

References

How to increase the All-in-One WP Migration plugin upload import limit

https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_migration_linux_appservices.md

What does the --net=host option in a Docker command really do?

After the Docker installation you have three networks by default: bridge, host, and none.
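
A sketch of what the listing looks like (network IDs vary per machine):

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
...            bridge    bridge    local
...            host      host      local
...            none      null      local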

If you start a container without specifying a network, it will be created inside the bridge (docker0) network by default.

$ docker run -d jenkins
$ docker ps
CONTAINER ID   IMAGE     COMMAND                  CREATED
1498e581cdba   jenkins   "/bin/tini -- /usr..."   3 minutes ago

The --net=host option is used to make the programs inside the Docker container look like they are running on the host itself, from the perspective of the network. It allows the container greater network access than it can normally get.

Normally you have to forward ports from the host machine into a container, but when the containers share the host’s network, any network activity happens directly on the host machine – just as it would if the program was running locally on the host instead of inside a container.
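
For example, with host networking there is nothing to publish; the container process binds the host's ports directly (any -p flags are discarded with a warning). A quick sketch with an arbitrary image:

docker run --rm --net=host nginx:alpine
# nginx inside the container now listens on the host's port 80; no -p 80:80 mapping is needed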

While this does mean you no longer have to expose ports and map them to container ports, it also means you have to edit your Dockerfiles to adjust the ports each container listens on, to avoid conflicts, since you can't have two containers operating on the same host port. However, the real reason for this option is for running apps that need network access that is difficult to forward through to a container at the port level.

For example, if you want to run a DHCP server then you need to be able to listen to broadcast traffic on the network, and extract the MAC address from the packet. This information is lost during the port forwarding process, so the only way to run a DHCP server inside Docker is to run the container with --net=host.

Generally speaking, --net=host is only needed when you are running programs with very specific, unusual network needs.

Lastly, from a security perspective, Docker containers can listen on many ports, even though they only advertise (expose) a single port. Normally this is fine, as you only forward the single expected port; however, if you use --net=host then you'll get all the container's ports listening on the host, even those that aren't listed in the Dockerfile. This means you will need to check the container closely (especially if it's not yours, e.g. an official one provided by a software project) to make sure you don't inadvertently expose extra services on the machine.

Reference

https://stackoverflow.com/questions/43316376/what-does-net-host-option-in-docker-command-really-do