Designing a Secure Home Lab with VLAN Segmentation and TLS Subdomain Separation Using Traefik

Modern home labs and small hosting environments often grow organically. New services are added over time, ports multiply, and TLS certificates become difficult to manage. Eventually, what started as a simple setup becomes hard to secure and maintain.

Over the last few years, I gradually evolved my lab environment into a structure that separates workloads, automates TLS, and simplifies routing using Traefik as a reverse proxy.

This article summarizes the architecture and lessons learned from running multiple Traefik instances across segmented networks with automated TLS certificates.


The Initial Problem

Typical home lab setups look like this:

service1 → host:9000
service2 → host:9443
service3 → host:8123
service4 → host:8080

Problems quickly appear:

  • Too many ports exposed
  • TLS certificates become manual work
  • Hard to secure services individually
  • Debugging routing becomes messy
  • Services mix across trust levels

As services increase, maintenance becomes harder.


Design Goals

The environment was redesigned around a few simple goals:

  1. One secure entry point for services
  2. Automatic TLS certificate management
  3. Network segmentation between service types
  4. Clean domain naming
  5. Failure isolation between environments
  6. Minimal ongoing maintenance

High-Level Architecture

The resulting architecture separates services using VLANs and domain zones.

Internet
    ↓
DNS
    ↓
Traefik Reverse Proxy Instances
    ↓
Segmented Service Networks

Workloads are separated by purpose and risk profile.

Example:

Secure VLAN → internal services
IoT VLAN → containers and test services
Application VLAN → development workloads

Each network segment runs its own services and routing.


Role of Traefik

Traefik serves as the gateway for services by handling:

  • HTTPS certificates (Let’s Encrypt)
  • Reverse proxy routing
  • Automatic service discovery
  • HTTPS redirects
  • Security headers

Instead of reaching services on individual ports, everything is exposed through HTTPS:

https://sonarqube.example.com
https://portainer.example.com
https://grafana.example.com

Traefik routes traffic internally to the correct service.
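As a concrete sketch, a service can be published through Traefik with Docker labels like the following. The service name, hostname, entrypoint, and resolver names here are assumptions, not the author's exact configuration; adjust them to your setup:

```yaml
# docker-compose.yml fragment (names illustrative)
services:
  grafana:
    image: grafana/grafana
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      # Route requests for this hostname to the container
      - "traefik.http.routers.grafana.rule=Host(`grafana.example.com`)"
      # Terminate TLS on the HTTPS entrypoint with the configured resolver
      - "traefik.http.routers.grafana.entrypoints=websecure"
      - "traefik.http.routers.grafana.tls.certresolver=letsencrypt"
      # Port the container listens on internally
      - "traefik.http.services.grafana.loadbalancer.server.port=3000"

networks:
  traefik-public:
    external: true
```

Note that the container publishes no host ports at all; only Traefik's 443 is reachable from outside.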


TLS Strategy: Subdomain Separation

Instead of creating individual certificates per service, services are grouped by domain zones.

Example zones:

*.dk.example.com
*.pbi.example.com
*.ad.example.com

Each zone receives a wildcard certificate.

Example services:

sonarqube.dk.example.com
traefik.dk.example.com
grafana.dk.example.com

Benefits:

  • One certificate covers many services
  • Renewal complexity drops
  • Let’s Encrypt rate limits avoided
  • Services can be added freely
  • Routing stays simple

Each Traefik instance manages certificates for its own domain zone.
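A minimal static-configuration sketch for one such instance might look like this. Wildcard certificates require the DNS-01 challenge; the email, storage path, and DNS provider below are assumptions:

```yaml
# traefik.yml fragment (illustrative; substitute your own DNS provider)
entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com
      storage: /letsencrypt/acme.json
      # Wildcards can only be issued via the DNS-01 challenge
      dnsChallenge:
        provider: cloudflare
```

Routers in that zone then reference `certResolver: letsencrypt` and can request the wildcard explicitly through `tls.domains` (main: `dk.example.com`, sans: `*.dk.example.com`).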


Why Multiple Traefik Instances?

Rather than centralizing everything, multiple Traefik gateways are used.

Example:

  • Unraid services handled by one proxy
  • Docker services handled by another
  • Podman workloads handled separately

Benefits:

  • Failure isolation
  • Independent upgrades
  • Easier experimentation
  • Reduced blast radius during misconfiguration

If one gateway fails, others continue operating.


Operational Benefits Observed

After stabilizing this architecture:

Certificate renewal became automatic

No manual certificate maintenance required.

Service expansion became simple

New services only need routing rules.

Network isolation improved safety

IoT workloads cannot easily reach secure services.

Troubleshooting became easier

Common issues reduce to:

404 → router mismatch
502 → backend unreachable
TLS error → DNS or certificate issue

Lessons Learned

Several practical lessons emerged.

Use container names instead of IPs

Docker DNS is more stable than static IP references.

Keep services on shared networks

Ensures routing remains predictable.

Remove unnecessary exposed ports

Let Traefik handle public access.

Back up certificate storage

Losing certificate storage can trigger renewal rate limits.

Avoid unnecessary upgrades

Infrastructure components should change slowly.


Is This Overkill for a Home Lab?

Not necessarily.

As soon as you host multiple services, segmentation and automated TLS reduce maintenance effort and improve reliability.

Even small environments benefit from:

  • consistent routing
  • secure entry points
  • simplified service management

Final Thoughts

Traefik combined with VLAN segmentation and TLS subdomain zoning has provided a stable and low-maintenance solution for managing multiple services.

The environment now:

  • renews certificates automatically
  • isolates workloads
  • simplifies routing
  • scales easily
  • requires minimal manual intervention

What started as experimentation evolved into a practical architecture pattern that now runs quietly in the background.

And in infrastructure, quiet is success.

Traefik Reverse Proxy Troubleshooting Guide (Docker + TLS + Let’s Encrypt)

Traefik is an excellent reverse proxy for Docker environments, providing automatic TLS certificates and dynamic routing. However, when something breaks, symptoms can look confusing.

This guide summarizes practical troubleshooting steps based on real-world debugging of a production home-lab setup using Traefik, Docker, and Let’s Encrypt.


Typical Architecture

A common setup looks like:

Internet
   ↓
DNS → Host IP
   ↓
Traefik (Docker container)
   ↓
Application containers

Traefik handles:

  • TLS certificates
  • Reverse proxy routing
  • HTTPS redirect
  • Service discovery

Most Common Error Types

1. HTTP 404 from Traefik

Meaning:

Request reached Traefik
but no router matched the request.

Common causes:

  • Host rule mismatch
  • Wrong domain name
  • Missing router configuration
  • Missing path prefix rules

Check routers:

curl http://localhost:8080/api/http/routers

Fix:
Ensure router rule matches request:

rule: Host(`app.example.com`)
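For file-provider setups, the router rule lives next to an explicit service definition. A minimal sketch (names assumed) that ties the 404 case (router) and the 502 case below (service URL) together:

```yaml
# Dynamic configuration fragment (file provider; names illustrative)
http:
  routers:
    app:
      rule: "Host(`app.example.com`)"
      entryPoints:
        - websecure
      service: app
      tls: {}
  services:
    app:
      loadBalancer:
        servers:
          - url: "http://app:9000"
```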

2. HTTP 502 Bad Gateway

Meaning:

Router matched
but backend service unreachable.

Most common cause: wrong backend IP or port.

Test backend directly:

curl http://localhost:9000 -I

If this works but Traefik gives 502, fix service URL:

Bad:

url: "http://172.x.x.x:9000"

Good:

url: "http://sonarqube:9000"

Use container names instead of IPs.


3. Dashboard returns 404

Dashboard requires routing both paths:

/dashboard
/api

Fix router rule:

rule: Host(`traefik.example.com`) &&
      (PathPrefix(`/api`) || PathPrefix(`/dashboard`))

Also ensure trailing slash:

/dashboard/
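Put together, a working dashboard router looks roughly like the labels below (hostname and resolver name are assumptions). The key detail is routing to Traefik's built-in `api@internal` service rather than a normal backend:

```yaml
# Docker labels on the Traefik container itself (illustrative)
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.dashboard.rule=Host(`traefik.example.com`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))"
  - "traefik.http.routers.dashboard.entrypoints=websecure"
  - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
  # Route to Traefik's internal API service, not a container port
  - "traefik.http.routers.dashboard.service=api@internal"
```

This also assumes the dashboard is enabled in the static configuration (`api: dashboard: true`).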

4. TLS Certificate Not Issued

Check ACME logs:

docker logs traefik | grep -i acme

Verify:

  • DNS challenge configured
  • Secrets mounted correctly
  • acme.json writable

Permissions should be:

chmod 600 acme.json

5. TLS Renewal Concerns

Traefik automatically renews certificates 30 days before expiry.

Check expiry:

echo | openssl s_client \
-servername app.example.com \
-connect app.example.com:443 \
2>/dev/null | openssl x509 -noout -dates

Renewal happens automatically if Traefik stays running.


Debugging Workflow (Recommended)

When something fails, follow this order:

Step 1 — Is Traefik running?

docker ps

Step 2 — Check routers

curl http://localhost:8080/api/http/routers

Step 3 — Check backend

curl http://localhost:<port>

Step 4 — Check logs

docker logs traefik

Step 5 — Test routing locally

curl -k -H "Host: app.example.com" https://localhost -I

Best Practices for Stable Setup

Use container names instead of IPs

Avoid hardcoded LAN IPs.

Keep all services on same Docker network

Example:

networks:
  - traefik-public
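In full, both Traefik and each application join the same external network. A sketch, with the network name assumed:

```yaml
# docker-compose.yml sketch: one shared external network (name illustrative)
services:
  traefik:
    image: traefik:v2.11
    networks:
      - traefik-public
    ports:
      - "443:443"
  app:
    image: myapp:latest
    networks:
      - traefik-public
    # no ports: section — only Traefik is reachable from outside

networks:
  traefik-public:
    external: true   # created once with: docker network create traefik-public
```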

Remove exposed ports

Let Traefik handle access.

Backup certificates

Cron backup:

0 3 * * * cp /opt/traefik/data/acme.json /backup/

Freeze Docker versions

Avoid surprise upgrades:

sudo apt-mark hold docker-ce docker-ce-cli containerd.io

Quick Diagnosis Cheat Sheet

Error → Meaning
404 → Router mismatch
502 → Backend unreachable
TLS error → Cert or DNS issue
Dashboard 404 → Router rule incomplete
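The cheat sheet can even be wrapped in a tiny shell helper for quick triage. The mapping simply mirrors the table above; the hostname in the usage line is an assumption:

```shell
#!/bin/sh
# diagnose: map an HTTP status code observed at the proxy to its usual cause
diagnose() {
  case "$1" in
    404)   echo "router mismatch" ;;
    502)   echo "backend unreachable" ;;
    000)   echo "connection failed: Traefik down or DNS/TLS issue" ;;
    2*|3*) echo "ok" ;;
    *)     echo "unexpected status $1: check docker logs traefik" ;;
  esac
}
```

Usage: `diagnose "$(curl -sk -o /dev/null -w '%{http_code}' https://app.example.com)"`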

Final Advice

Most Traefik problems are not Traefik itself, but:

  • router rules
  • backend targets
  • entrypoint mismatches
  • DNS configuration

Once routing and networks are correct, Traefik runs reliably for years.


Conclusion

Traefik simplifies TLS and routing, but clear troubleshooting patterns save hours when issues arise. Use this guide as a reference whenever routing or certificates behave unexpectedly.

The Modern .NET Developer in 2026: From Code Writer to System Builder

There was a time when being a .NET developer mostly meant writing solid C# code, building APIs, and shipping features. If the application worked and the database queries were fast enough, the job was done.

That world is gone.

In 2026, a modern .NET developer isn’t just a coder. They’re a system builder, balancing application development, cloud architecture, DevOps, security, and increasingly, AI-driven decisions.

One Feature, Many Disciplines

Consider a typical modern feature:

  • A scheduled job populates data into a database.
  • That data feeds reporting tools like Power BI.
  • Deployment pipelines push updates across environments worldwide.
  • Cloud services scale automatically under load.
  • Monitoring and security controls are part of the delivery.

One feature now touches multiple domains. Delivering it requires understanding infrastructure, automation, data, deployment, and operations—not just application logic.

The scope of the role has expanded dramatically.

Fundamentals Still Matter

Despite all the change, the core skills haven’t disappeared.

Developers still need to:

  • Build REST APIs that handle real-world load
  • Write efficient Entity Framework queries
  • Understand async/await and concurrency
  • Maintain clean, maintainable codebases

Bad fundamentals still break systems, regardless of how modern the infrastructure is.

But fundamentals alone are no longer enough.

Cloud Decisions Are Now Developer Decisions

In many teams, developers now influence—or directly make—architecture decisions:

  • Should this workload run in App Service, Containers, or Functions?
  • Should data live in SQL Server or Cosmos DB?
  • Do we need messaging via Service Bus or event-driven patterns?

These choices affect cost, scalability, reliability, and operational complexity. Developers increasingly need architectural awareness, not just coding ability.

DevOps Is Part of the Job

Deployment is no longer someone else’s responsibility.

Modern developers are expected to:

  • Build CI/CD pipelines that deploy automatically
  • Containerize services using Docker
  • Ensure logs, metrics, and monitoring are available
  • Support production reliability

The boundary between development and operations has largely disappeared.

Security Is Developer-Owned

Security has shifted left.

Developers now regularly deal with:

  • OAuth and identity flows
  • Microsoft Entra ID integration
  • Secure data handling
  • API protection and access control

Security mistakes are expensive, and modern developers are expected to understand the implications of their implementations.

AI Changes How We Work

Another shift is happening quietly.

In the past, developers searched for how to implement something. Today, AI tools increasingly help answer higher-level questions:

  • What are the long-term tradeoffs of this architecture?
  • How will this scale?
  • What operational risks am I introducing?

The developer’s role moves from solving isolated technical problems to designing sustainable systems.

From Specialist to Swiss Army Knife

The modern .NET developer is no longer just a backend specialist. They are expected to be adaptable:

  • Application developer
  • Cloud architect
  • DevOps contributor
  • Security implementer
  • Systems thinker

Not every developer must master every area—but awareness across domains is increasingly required.

The New Reality

The job has evolved from writing features to building systems.

And while that can feel overwhelming, it’s also exciting. Developers now influence architecture, scalability, reliability, and user experience at a system-wide level.

The industry hasn’t just changed what we build.

It’s changed what it means to be a developer.

And in 2026, being versatile isn’t optional—it’s the job.

Google quietly just re-lit the “reasoning race.”

This week, Google rolled out a major upgrade to Gemini 3 “Deep Think”—and the benchmark jumps are… hard to ignore.

What changed (highlights):

  • 84.6% on ARC-AGI-2 (verified by the ARC Prize Foundation, per Google) and 48.4% on Humanity’s Last Exam (no tools)
  • 3,455 Elo on Codeforces, plus gold-medal-level performance across Olympiad-style evaluations
  • Introduction of Aletheia, a math research agent designed to iteratively generate + verify + revise proofs—aimed at pushing beyond “competition math” into research workflows

Access:
Deep Think’s upgrade is live for Google AI Ultra users in the Gemini app, and Google is opening early access via the Gemini API to researchers/selected partners.

Why this matters (my take):
For much of early 2026, the narrative has been “OpenAI vs Anthropic.” But Google is still a heavyweight—and reasoning + math/science agents are starting to look like the next platform shift (not just better chat). If Aletheia-style systems keep improving, we’ll measure progress less by “can it answer?” and more by “can it discover, verify, and iterate with minimal supervision?”

Questions I’m watching next:

  • Do these gains translate to reliability in real engineering work (not just scoreboards)?
  • How quickly do we get accessible APIs + enterprise controls for these reasoning modes?
  • What does “human review” look like when the system can verify and revise its own proofs?

If you’re building anything in AI-assisted engineering, math, or research ops, 2026 is going to get weird—in a good way.

https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-deep-think


How to Rename a SharePoint Online Tenant Domain (Microsoft 365 Tenant Rename Guide)

If you created your Microsoft 365 tenant years ago, your SharePoint URL probably looks something like this:

youroldname.sharepoint.com

The problem? That name is tied to your original .onmicrosoft.com domain — and it becomes part of every SharePoint and OneDrive URL.

If you’re building a professional business presence, especially for government or enterprise clients, you may want to rename your SharePoint tenant domain.

This guide walks through how to rename a SharePoint Online tenant using PowerShell safely and correctly.


Why Rename Your SharePoint Tenant Domain?

Renaming your SharePoint Online domain helps:

  • Align URLs with your legal business name
  • Improve branding consistency
  • Present professional collaboration links
  • Avoid technical debt later
  • Separate dev and production tenants

Microsoft allows a SharePoint tenant rename only once, so it’s important to do it carefully.


Important Limitations Before You Start

Before renaming your Microsoft 365 tenant domain:

  • You must be a Global Administrator
  • Rename must be scheduled at least 24 hours in advance
  • Not supported in GCC High or DoD environments
  • Large tenants may experience longer processing time
  • Existing links will redirect for 1 year only

If your tenant is new or lightly used, this is the safest time to perform the rename.


Step 1: Add a New .onmicrosoft.com Domain

You cannot rename SharePoint directly to a custom domain like yourcompany.com.

Instead, you must create a new Microsoft-managed domain:

  1. Go to Microsoft 365 Admin Center
  2. Navigate to Settings → Domains
  3. Select Add onmicrosoft.com domain (preview)
  4. Enter your desired name

Example:

tanolisllc.onmicrosoft.com

Make sure:

  • The domain status shows “Healthy”
  • You do not remove the original .onmicrosoft.com domain
  • You do not set the new domain as the fallback domain

Step 2: Install SharePoint Online Management Shell

Tenant rename must be executed from Windows PowerShell (5.1).

Do NOT use:

  • Azure Cloud Shell
  • WSL (Ubuntu)
  • PowerShell 7

Install the module:

Install-Module -Name Microsoft.Online.SharePoint.PowerShell

Step 3: Connect to SharePoint Admin

Use the existing admin URL (before rename):

Connect-SPOService -Url https://youroldtenant-admin.sharepoint.com

Login using Global Admin credentials.


Step 4: Validate the Rename with WhatIf

Always test first:

Start-SPOTenantRename -DomainName "tanolisllc" -ScheduledDateTime "2026-02-13T23:30:00" -WhatIf

If there are no blocking errors, you are ready to proceed.


Step 5: Schedule the SharePoint Tenant Rename

Remove -WhatIf:

Start-SPOTenantRename -DomainName "tanolisllc" -ScheduledDateTime "2026-02-13T23:30:00"

If successful, you will see:

Success
RenameJobID : <GUID>

This confirms the rename job has been scheduled.



Step 6: Monitor Rename Status

You can check status anytime:

Get-SPOTenantRenameStatus

Possible states:

  • Scheduled
  • InProgress
  • Success

Small tenants typically complete within 30–90 minutes.
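If you prefer not to re-run the command by hand, a simple polling loop works. This is only a sketch: it assumes an active Connect-SPOService session from Step 3, and the property name should be verified against your module version's actual output:

```powershell
# Poll the rename job until it leaves Scheduled/InProgress (sketch).
# Inspect the real property names first with: Get-SPOTenantRenameStatus | Format-List
do {
    $status = Get-SPOTenantRenameStatus
    Write-Host "$(Get-Date -Format o)  State: $($status.State)"
    if ($status.State -in @("Scheduled", "InProgress")) {
        Start-Sleep -Seconds 300   # check every 5 minutes
    }
} while ($status.State -in @("Scheduled", "InProgress"))
```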


What Changes After Renaming?

Old URL:

https://youroldtenant.sharepoint.com

New URL:

https://newtenantname.sharepoint.com

Old links will automatically redirect for one year.

Important:

  • Email addresses are NOT affected
  • Custom domains are NOT changed
  • Azure subscriptions are NOT impacted

Post-Rename Checklist

After completion:

  • Test SharePoint homepage
  • Test OneDrive access
  • Test Microsoft Teams
  • Update bookmarks
  • Validate external sharing links

If OneDrive was locally synced, you may need to reconnect it.


Best Practices for Microsoft 365 Tenant Rename

  • Rename before scaling usage
  • Keep dev and production tenants separate
  • Align tenant name with legal entity
  • Schedule rename during off-hours
  • Document the RenameJobID for audit purposes

Tenant naming is part of cloud governance and identity architecture — not just branding.


Final Thoughts

Renaming your SharePoint Online tenant is a one-time decision that affects every collaboration link your organization generates.

If you’re early in your Microsoft 365 lifecycle, it’s worth doing right.

Clean identity structure today prevents technical debt tomorrow.