Designing a Secure Home Lab with VLAN Segmentation and TLS Subdomain Separation Using Traefik

Modern home labs and small hosting environments often grow organically. New services are added over time, ports multiply, and TLS certificates become difficult to manage. Eventually, what started as a simple setup becomes hard to secure and maintain.

Over the last few years, I gradually evolved my lab environment into a structure that separates workloads, automates TLS, and simplifies routing using Traefik as a reverse proxy.

This article summarizes the architecture and lessons learned from running multiple Traefik instances across segmented networks with automated TLS certificates.


The Initial Problem

Typical home lab setups look like this:

service1 → host:9000
service2 → host:9443
service3 → host:8123
service4 → host:8080

Problems quickly appear:

  • Too many ports exposed
  • TLS certificates become manual work
  • Hard to secure services individually
  • Debugging routing becomes messy
  • Services mix across trust levels

As services increase, maintenance becomes harder.


Design Goals

The environment was redesigned around a few simple goals:

  1. One secure entry point for services
  2. Automatic TLS certificate management
  3. Network segmentation between service types
  4. Clean domain naming
  5. Failure isolation between environments
  6. Minimal ongoing maintenance

High-Level Architecture

The resulting architecture separates services using VLANs and domain zones.

Internet
    ↓
DNS
    ↓
Traefik Reverse Proxy Instances
    ↓
Segmented Service Networks

Workloads are separated by purpose and risk profile.

Example:

Secure VLAN → internal services
IoT VLAN → containers and test services
Application VLAN → development workloads

Each network segment runs its own services and routing.


Role of Traefik

Traefik serves as the gateway for services by handling:

  • HTTPS certificates (Let’s Encrypt)
  • Reverse proxy routing
  • Automatic service discovery
  • HTTPS redirects
  • Security headers

Instead of accessing services by port number, everything is exposed through HTTPS hostnames:

https://sonarqube.example.com
https://portainer.example.com
https://grafana.example.com

Traefik routes traffic internally to the correct service.
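
As an illustration, a single service can be published this way with a few container labels on a shared proxy network. The snippet below is a hedged sketch rather than the exact configuration used here; the network name, router name, certificate resolver, and hostname are assumptions:

services:
  grafana:
    image: grafana/grafana
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.example.com`)"
      - "traefik.http.routers.grafana.entrypoints=https"
      - "traefik.http.routers.grafana.tls.certresolver=letsencrypt"
      - "traefik.http.services.grafana.loadbalancer.server.port=3000"   # Grafana's internal port
    # no ports: section needed; Traefik reaches the container over the shared network

networks:
  traefik-public:
    external: true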


TLS Strategy: Subdomain Separation

Instead of creating individual certificates per service, services are grouped by domain zones.

Example zones:

*.dk.example.com
*.pbi.example.com
*.ad.example.com

Each zone receives a wildcard certificate.

Example services:

sonarqube.dk.example.com
traefik.dk.example.com
grafana.dk.example.com

Benefits:

  • One certificate covers many services
  • Renewal complexity drops
  • Let’s Encrypt rate limits avoided
  • Services can be added freely
  • Routing stays simple

Each Traefik instance manages certificates for its own domain zone.
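
In Traefik, one way to express this is to attach a zone-wide wildcard to the HTTPS entrypoint in the static configuration. The snippet below is a sketch under assumed names (entrypoint, resolver, and domain are placeholders) and assumes a DNS-01 challenge, which Let’s Encrypt requires for wildcard certificates:

entryPoints:
  https:
    address: ":443"
    http:
      tls:
        certResolver: letsencrypt
        domains:
          - main: "dk.example.com"
            sans:
              - "*.dk.example.com"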


Why Multiple Traefik Instances?

Rather than centralizing everything, multiple Traefik gateways are used.

Example:

  • Unraid services handled by one proxy
  • Docker services handled by another
  • Podman workloads handled separately

Benefits:

  • Failure isolation
  • Independent upgrades
  • Easier experimentation
  • Reduced blast radius during misconfiguration

If one gateway fails, others continue operating.


Operational Benefits Observed

After stabilizing this architecture:

Certificate renewal became automatic

No manual certificate maintenance required.

Service expansion became simple

New services only need routing rules.

Network isolation improved safety

IoT workloads cannot easily reach secure services.

Troubleshooting became easier

Common issues reduce to:

404 → router mismatch
502 → backend unreachable
TLS error → DNS or certificate issue

Lessons Learned

Several practical lessons emerged.

Use container names instead of IPs

Docker DNS is more stable than static IP references.

Keep services on shared networks

Ensures routing remains predictable.

Remove unnecessary exposed ports

Let Traefik handle public access.

Back up certificate storage

Losing certificate storage can trigger renewal rate limits.

Avoid unnecessary upgrades

Infrastructure components should change slowly.


Is This Overkill for a Home Lab?

Not necessarily.

As soon as you host multiple services, segmentation and automated TLS reduce maintenance effort and improve reliability.

Even small environments benefit from:

  • consistent routing
  • secure entry points
  • simplified service management

Final Thoughts

Traefik combined with VLAN segmentation and TLS subdomain zoning has provided a stable and low-maintenance solution for managing multiple services.

The environment now:

  • renews certificates automatically
  • isolates workloads
  • simplifies routing
  • scales easily
  • requires minimal manual intervention

What started as experimentation evolved into a practical architecture pattern that now runs quietly in the background.

And in infrastructure, quiet is success.

Traefik Reverse Proxy Troubleshooting Guide (Docker + TLS + Let’s Encrypt)

Traefik is an excellent reverse proxy for Docker environments, providing automatic TLS certificates and dynamic routing. However, when something breaks, symptoms can look confusing.

This guide summarizes practical troubleshooting steps based on real-world debugging of a production home-lab setup using Traefik, Docker, and Let’s Encrypt.


Typical Architecture

A common setup looks like:

Internet
   ↓
DNS → Host IP
   ↓
Traefik (Docker container)
   ↓
Application containers

Traefik handles:

  • TLS certificates
  • Reverse proxy routing
  • HTTPS redirect
  • Service discovery

Most Common Error Types

1. HTTP 404 from Traefik

Meaning:

Request reached Traefik
but no router matched the request.

Common causes:

  • Host rule mismatch
  • Wrong domain name
  • Missing router configuration
  • Missing path prefix rules

Check routers:

curl http://localhost:8080/api/http/routers

Fix:
Ensure router rule matches request:

rule: Host(`app.example.com`)
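
In a Docker-based setup the router is usually declared through container labels. The following is a minimal, hypothetical example (router name, entrypoint name, and hostname are placeholders; adjust them to your configuration):

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.app.rule=Host(`app.example.com`)"
  - "traefik.http.routers.app.entrypoints=https"
  - "traefik.http.routers.app.tls=true"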

2. HTTP 502 Bad Gateway

Meaning:

Router matched
but backend service unreachable.

Most common cause: wrong backend IP or port.

Test backend directly:

curl http://localhost:9000 -I

If this works but Traefik gives 502, fix service URL:

Bad:

url: "http://172.x.x.x:9000"

Good:

url: "http://sonarqube:9000"

Use container names instead of IPs.
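
If you define services through the file provider, a minimal, hypothetical router-plus-service pair that references the container by name could look like this (names, entrypoint, and port are assumptions):

http:
  routers:
    sonarqube:
      rule: "Host(`sonarqube.example.com`)"
      entryPoints:
        - https
      tls: {}
      service: sonarqube
  services:
    sonarqube:
      loadBalancer:
        servers:
          - url: "http://sonarqube:9000"   # container name resolves via Docker DNS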


3. Dashboard returns 404

The dashboard requires routing for both paths:

/dashboard
/api

Fix router rule:

rule: Host(`traefik.example.com`) &&
      (PathPrefix(`/api`) || PathPrefix(`/dashboard`))

Also make sure to access the dashboard with a trailing slash:

/dashboard/
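
When Traefik itself runs in Docker with the dashboard enabled (api.dashboard: true in the static configuration), a hypothetical set of labels covering both paths and pointing at the built-in api@internal service might look like this (hostname and entrypoint name are assumptions):

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))"
  - "traefik.http.routers.dashboard.entrypoints=https"
  - "traefik.http.routers.dashboard.tls=true"
  - "traefik.http.routers.dashboard.service=api@internal"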

4. TLS Certificate Not Issued

Check ACME logs:

docker logs traefik | grep -i acme

Verify:

  • DNS challenge configured
  • Secrets mounted correctly
  • acme.json writable

Permissions should be:

chmod 600 acme.json
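
As a reference point, a DNS-challenge certificate resolver in the static configuration typically looks roughly like the sketch below (resolver name, e-mail, storage path, and DNS provider are assumptions; provider credentials are supplied separately, e.g. via environment variables or mounted secrets):

certificatesResolvers:
  letsencrypt:
    acme:
      email: you@example.com            # contact address for Let's Encrypt
      storage: /letsencrypt/acme.json   # must be persistent and writable (chmod 600)
      dnsChallenge:
        provider: cloudflare            # replace with your DNS provider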

5. TLS Renewal Concerns

Traefik automatically renews certificates 30 days before expiry.

Check expiry:

echo | openssl s_client \
-servername app.example.com \
-connect app.example.com:443 \
2>/dev/null | openssl x509 -noout -dates

Renewal happens automatically if Traefik stays running.


Debugging Workflow (Recommended)

When something fails, follow this order:

Step 1 — Is Traefik running?

docker ps

Step 2 — Check routers

curl http://localhost:8080/api/http/routers

Step 3 — Check backend

curl http://localhost:<port>

Step 4 — Check logs

docker logs traefik

Step 5 — Test routing locally

curl -k -H "Host: app.example.com" https://localhost -I

Best Practices for Stable Setup

Use container names instead of IPs

Avoid hardcoded LAN IPs.

Keep all services on same Docker network

Example:

networks:
  - traefik-public

Remove exposed ports

Let Traefik handle access.

Backup certificates

Cron backup:

0 3 * * * cp /opt/traefik/data/acme.json /backup/

Freeze Docker versions

Avoid surprise upgrades:

sudo apt-mark hold docker-ce docker-ce-cli containerd.io

Quick Diagnosis Cheat Sheet

Error → Meaning
404 → Router mismatch
502 → Backend unreachable
TLS error → Cert or DNS issue
Dashboard 404 → Router rule incomplete

Final Advice

Most Traefik problems are not Traefik itself, but:

  • router rules
  • backend targets
  • entrypoint mismatches
  • DNS configuration

Once routing and networks are correct, Traefik runs reliably for years.


Conclusion

Traefik simplifies TLS and routing, but clear troubleshooting patterns save hours when issues arise. Use this guide as a reference whenever routing or certificates behave unexpectedly.

Install SonarQube using Docker

SonarQube is a popular open-source platform for continuous inspection of code quality. One of the easiest ways to install and run it is with Docker, a containerization platform that makes it easy to deploy and manage applications.

Prerequisites

Before getting started, you will need to have Docker installed on your machine. If you do not have Docker installed, you can download and install it from the Docker website.

Step 1: Pull the Sonar Docker Image

The first step in installing Sonar with Docker is to pull the Sonar Docker image from the Docker Hub repository. To do this, open a terminal or command prompt and run the following command:

docker pull sonarqube

This will download the latest version of the Sonar Docker image to your machine.

Step 2: Create a Docker Network

Next, we need to create a Docker network that will allow the Sonar container to communicate with the database container. To create a Docker network, run the following command:

docker network create sonar-network

Step 3: Start a Database Container

Sonar requires a database to store its data. In this example, we will use a PostgreSQL database, but you can also use a MySQL or Microsoft SQL Server database if you prefer. To start a PostgreSQL database container, run the following command:

docker run -d --name sonar-db --network sonar-network -e POSTGRES_USER=sonar -e POSTGRES_PASSWORD=sonar -e POSTGRES_DB=sonar postgres:9.6

Step 4: Start the Sonar Container

Once the database container is running, we can start the Sonar container. To do this, run the following command:

docker run -d --name sonar -p 9000:9000 --network sonar-network -e SONAR_JDBC_URL=jdbc:postgresql://sonar-db:5432/sonar -e SONAR_JDBC_USERNAME=sonar -e SONAR_JDBC_PASSWORD=sonar sonarqube

Step 5: Access the Sonar Dashboard

Once the Sonar container is running, you can access the Sonar dashboard by opening a web browser and navigating to http://localhost:9000. The default username and password are admin and admin, respectively.
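
For reference, the same setup can also be captured in a single docker-compose file. The sketch below is a hypothetical equivalent of the commands above (service names and volume names are assumptions; check which PostgreSQL versions your SonarQube release supports):

version: "3.7"

services:
  sonar-db:
    image: postgres:9.6            # same version as above; adjust to what your SonarQube release supports
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
      POSTGRES_DB: sonar
    volumes:
      - sonar-db-data:/var/lib/postgresql/data

  sonarqube:
    image: sonarqube
    depends_on:
      - sonar-db
    ports:
      - "9000:9000"
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://sonar-db:5432/sonar
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
    volumes:
      - sonarqube-data:/opt/sonarqube/data
      - sonarqube-extensions:/opt/sonarqube/extensions
    # note: SonarQube's embedded Elasticsearch may require raising vm.max_map_count on the host

volumes:
  sonar-db-data:
  sonarqube-data:
  sonarqube-extensions: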

Copy live WordPress Site and Run inside Docker container

I am going to copy this site and run it inside a Docker container.

STEPS

1-Pull the WordPress and MySQL images using Docker Compose. I am going to use the following docker-compose file:

version: '3.7'

services:
  db:
    image: mysql:8.0.27
    command: '--default-authentication-plugin=mysql_native_password'
    container_name: wp-db
    volumes:
      - ./data/wp-db-data:/var/lib/mysql
    networks:
      - default
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecretpassword
      MYSQL_DATABASE: db
      MYSQL_USER: dbuser
      MYSQL_PASSWORD: dbpassword

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    container_name: wordpress
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_NAME: db
      WORDPRESS_DB_USER: dbuser
      WORDPRESS_DB_PASSWORD: dbpassword
    volumes:
      - ./data/wp-content:/var/www/html/wp-content
      - ./data/wp-html:/var/www/html
    networks:
      - traefik-public
      - default
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.wordpress.entrypoints=http"
      - "traefik.http.routers.wordpress.rule=Host(`wp.dk.tanolis.com`)"
      - "traefik.http.middlewares.wordpress-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.wordpress.middlewares=wordpress-https-redirect"
      - "traefik.http.routers.wordpress-secure.entrypoints=https"
      - "traefik.http.routers.wordpress-secure.rule=Host(`wp.dk.tanolis.com`)"
      - "traefik.http.routers.wordpress-secure.tls=true"
      - "traefik.http.routers.wordpress-secure.service=wordpress"
      - "traefik.http.services.wordpress.loadbalancer.server.port=80"
      - "traefik.docker.network=traefik-public"

networks:
  # assumes the traefik-public network was already created by the Traefik stack
  traefik-public:
    external: true

2-Start the stack: docker-compose up -d

3-Open the containerized WordPress site and install the “All-in-One WP Migration” plugin.

4-Go to the source WordPress site and install the “All-in-One WP Migration” plugin there as well.

5-Create a file backup on the source site.

6-Try to restore the backup on the target site.

7-You will see the following error:

<<ERROR>>

Increase the upload size limit for the All-in-One WP Migration plugin:

8-We need to increase the restore size limit. Search for the .htaccess file in your Linux filesystem:

# find / -type f -name ".htaccess*"

9-Use the nano editor to open this file:

# nano .htaccess

Place the following lines in it after the "# END WordPress" comment:

php_value upload_max_filesize 2048M
php_value post_max_size 2048M
php_value memory_limit 4096M
php_value max_execution_time 0
php_value max_input_time 0

10-Save the file. Open the plugin again and you will see that you are now allowed to restore up to 2 GB of data.

11-Open the WordPress container site and compare it with the online site.

Congratulations! You’ve done it. You can now easily import any file you’d like using this amazing plugin. Migrating your sites is no longer a hassle!


References

How to increase the all-in-one-wp-migration plugin upload import limit

https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_migration_linux_appservices.md

What does --net=host option in Docker command really do?

After the Docker installation you have three networks by default: bridge, host, and none.

If you start a container, by default it will be attached to the bridge (docker0) network.

$ docker run -d jenkins
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED
1498e581cdba        jenkins             "/bin/tini -- /usr..."   3 minutes ago

The --net=host option is used to make the programs inside the Docker container look like they are running on the host itself, from the perspective of the network. It allows the container greater network access than it can normally get.

Normally you have to forward ports from the host machine into a container, but when the containers share the host’s network, any network activity happens directly on the host machine – just as it would if the program was running locally on the host instead of inside a container.
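
To make the difference concrete, here is a small, hypothetical docker-compose sketch contrasting the two modes (nginx is just an example image; the service names are placeholders):

services:
  web-bridge:
    image: nginx
    ports:
      - "8080:80"          # bridge networking: the port must be published explicitly

  web-host:
    image: nginx
    network_mode: host     # shares the host's network stack; nginx listens directly on host port 80, no port mapping allowed or needed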

While this means you no longer have to expose ports and map them to container ports, it also means you have to adjust the ports each container listens on (for example by editing your Dockerfiles) to avoid conflicts, since two containers cannot listen on the same host port. However, the real reason for this option is for running apps that need network access that is difficult to forward through to a container at the port level.

For example, if you want to run a DHCP server then you need to be able to listen to broadcast traffic on the network, and extract the MAC address from the packet. This information is lost during the port forwarding process, so the only way to run a DHCP server inside Docker is to run the container with --net=host.

Generally speaking, --net=host is only needed when you are running programs with very specific, unusual network needs.

Lastly, from a security perspective, Docker containers can listen on many ports, even though they only advertise (expose) a single port. Normally this is fine as you only forward the single expected port, however if you use --net=host then you’ll get all the container’s ports listening on the host, even those that aren’t listed in the Dockerfile. This means you will need to check the container closely (especially if it’s not yours, e.g. an official one provided by a software project) to make sure you don’t inadvertently expose extra services on the machine.

Reference

https://stackoverflow.com/questions/43316376/what-does-net-host-option-in-docker-command-really-do