When Azure AD Claims Aren't Enough: Issuing Your Own JWT for Custom Authorization

Modern applications often start with Azure AD (now Microsoft Entra ID) for authentication, and for good reason. It's secure, battle-tested, and integrates seamlessly with Azure-native services.

But as systems grow, teams frequently hit a wall:

"We can authenticate users, but we can't express our authorization logic cleanly using Entra ID claims alone."

At that point, one architectural pattern comes into focus:

Validating the Azure AD token, then issuing your own application-specific JWT.

This post explains when this strategy makes sense, when it doesn't, and how to implement it responsibly.


The Problem: Identity vs Authorization Drift

Azure AD excels at answering one question:

Who is this user?

It does this using:

  • App roles
  • Group membership
  • Scopes
  • Optional claims

However, real-world authorization often depends on things Azure AD cannot evaluate:

  • Data stored in your application database
  • Tenant-specific permissions
  • Feature flags or subscription tiers
  • Time-bound or contextual access rules
  • Row-level or domain-specific authorization logic

Trying to force this logic into Entra ID frequently leads to:

  • Role explosion
  • Overloaded group membership
  • Fragile claim mappings
  • Slow iteration cycles

This is where many systems start to creak.


The Strategy: Token Exchange with an Application-Issued JWT

Instead of overloading Azure AD, you introduce a clear trust boundary.

High-level flow

  1. User authenticates with Azure AD
  2. Client receives an Azure-issued access token
  3. Your API fully validates that token
  4. Your API issues a new, short-lived JWT containing:
    • Application-specific claims
    • Computed permissions
    • Domain-level roles
  5. Downstream services trust your issuer, not Azure AD directly

This is often referred to as:

  • Token exchange
  • Backend-for-Frontend (BFF) token
  • Application-issued JWT

It's a well-established pattern in enterprise and government systems.
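
To make the exchange concrete, here is a minimal sketch in Python using the PyJWT library (pip install pyjwt cryptography). The tenant id, audience, issuer URL, claim names, and the load_permissions helper are illustrative assumptions, not a prescribed implementation.

import datetime
import jwt  # PyJWT

TENANT_ID = "<your-tenant-id>"                 # assumption
ENTRA_AUDIENCE = "api://your-api-client-id"    # assumption
APP_ISSUER = "https://auth.example.internal"   # assumption

ENTRA_ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
jwks_client = jwt.PyJWKClient(JWKS_URL)

def load_permissions(user_id: str) -> list[str]:
    # Hypothetical helper: in a real system this queries your database,
    # feature flags, or tenant configuration.
    return ["orders:read"]

def exchange_token(entra_token: str, signing_key_pem: str) -> str:
    # 1. Fully validate the Entra ID token (signature, issuer, audience, expiry).
    key = jwks_client.get_signing_key_from_jwt(entra_token)
    claims = jwt.decode(
        entra_token,
        key.key,
        algorithms=["RS256"],
        audience=ENTRA_AUDIENCE,
        issuer=ENTRA_ISSUER,
    )
    # 2. Compute application-specific authorization.
    permissions = load_permissions(claims["oid"])
    # 3. Mint a new, short-lived, application-signed JWT.
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "iss": APP_ISSUER,
        "sub": claims["oid"],    # stable user id from Entra ID
        "aud": "internal-services",
        "iat": now,
        "exp": now + datetime.timedelta(minutes=10),
        "perms": permissions,
    }
    return jwt.encode(payload, signing_key_pem, algorithm="RS256")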


Why This Works Well

1. Azure AD handles authentication; your app handles authorization

This separation keeps responsibilities clean:

  • Azure AD → Identity proof
  • Your API → Business authorization

You avoid pushing business logic into your identity provider, where it doesn't belong.


2. You can issue dynamic, computed claims

Your API can:

  • Query databases
  • Apply complex logic
  • Evaluate tenant state
  • Calculate effective permissions

Azure AD cannot do this, and shouldn't.


3. Downstream services stay simple

Instead of every service needing to understand:

  • Azure tenants
  • Scopes vs roles
  • Group semantics

They simply trust:

  • Your issuer
  • Your claim contract

This dramatically simplifies internal service authorization.


4. Identity provider portability

If you later introduce:

  • Entra B2C
  • A second tenant
  • External identity providers

Your internal JWT remains the stable contract.
Only the validation layer changes.
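
One way to keep that contract stable in code is to hide the external provider behind a small validation interface, so adding Entra B2C or a second tenant means adding an implementation rather than rewriting services. A sketch, with illustrative names and the bodies elided:

from typing import Protocol

class ExternalTokenValidator(Protocol):
    def validate(self, token: str) -> dict:
        """Return verified identity claims, or raise on failure."""

class EntraIdValidator:
    # JWKS-based validation against your primary tenant.
    def validate(self, token: str) -> dict:
        ...

class EntraB2CValidator:
    # Different issuer and JWKS endpoint, same output contract.
    def validate(self, token: str) -> dict:
        ...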


When This Is Overkill

This pattern is not always the right choice.

Avoid it if:

  • App roles and groups already express your needs
  • Authorization rules are static
  • You donโ€™t have downstream services
  • You want minimal operational overhead

A second token adds complexity; don't add it unless it earns its keep.


Security Rules You Must Follow

If you issue your own JWT, there are no shortcuts.

1. Fully validate the Azure token

Always validate:

  • Signature
  • Issuer
  • Audience
  • Expiration
  • Required scopes or roles

If you skip any of these, the pattern collapses.
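
Mapped to code, most of this checklist is enforced by a single decode call; required scopes are the one check you must perform yourself. A sketch with PyJWT, where the tenant id, audience, and scope name are assumptions:

import jwt

TENANT_ID = "<your-tenant-id>"
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
jwks_client = jwt.PyJWKClient(JWKS_URL)

def validate_entra_token(token: str) -> dict:
    # Signature: key resolved from the tenant's published JWKS.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    # Issuer, audience, and expiration: enforced by decode(), which
    # raises on any failed check.
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="api://your-api-client-id",
        issuer=f"https://login.microsoftonline.com/{TENANT_ID}/v2.0",
        options={"require": ["exp", "iat", "sub"]},
    )
    # Required scopes or roles: not a JWT-library concern, enforce explicitly.
    if "access_as_user" not in claims.get("scp", "").split():
        raise PermissionError("required scope missing")
    return claims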


2. Keep your token short-lived

Best practice:

  • 5–15 minute lifetime
  • No refresh tokens (unless explicitly designed)

Azure AD should remain responsible for session longevity.
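
In code, the lifetime is simply the exp claim you set at minting time; the ten minutes below is one point within the 5–15 minute guidance, not a rule:

import datetime
import jwt

def mint_app_token(claims: dict, signing_key_pem: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        **claims,
        "iat": now,
        # Short-lived by construction: clients re-exchange their (still
        # valid) Azure AD token rather than refreshing this one.
        "exp": now + datetime.timedelta(minutes=10),
    }
    return jwt.encode(payload, signing_key_pem, algorithm="RS256")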


3. Protect and rotate signing keys

  • Use a dedicated signing key
  • Store it securely (Key Vault)
  • Rotate it regularly
  • Publish JWKS if multiple services validate your token

Your token is only as trustworthy as your key management.
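
If multiple services validate your token, publishing a JWKS document is the standard way to distribute your public key. Below is a minimal sketch using PyJWT's helpers; the kid naming scheme is an assumption, and in production the private key would come from Key Vault rather than being generated inline.

import json
from cryptography.hazmat.primitives.asymmetric import rsa
from jwt.algorithms import RSAAlgorithm

# Inline key generation only keeps the sketch self-contained.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

jwk = json.loads(RSAAlgorithm.to_jwk(private_key.public_key()))
jwk.update({"kid": "app-signing-2024-01", "use": "sig", "alg": "RS256"})

# Serve this document at a well-known URL, e.g. /.well-known/jwks.json
print(json.dumps({"keys": [jwk]}, indent=2))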


4. Be disciplined with claims

Only include what downstream services actually need.

Avoid:

  • Personally identifiable information
  • Large payloads
  • Debug or "just in case" claims

JWTs are not data containers.
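
As an illustration, a disciplined payload might carry only the identifiers and computed authorization that downstream services act on; every claim name here is hypothetical.

payload = {
    "iss": "https://auth.example.internal",
    "sub": "user-object-id",
    "aud": "internal-services",
    "tenant": "contoso",
    "perms": ["orders:read", "orders:write"],
}
# Left out on purpose: email, display name, debug context,
# and anything no downstream service actually consumes.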


A Practical Mental Model

A simple way to reason about this pattern:

Azure AD authenticates people.
Your application authorizes behavior.

If that statement matches your system's reality, this approach is not only valid but often the cleanest solution.


Final Thoughts

Issuing your own JWT after validating an Azure AD token is not a workaround or an anti-pattern. It's a deliberate architectural choice used in:

  • Regulated environments
  • Multi-tenant SaaS platforms
  • Government and municipal systems
  • Complex internal platforms

Like all powerful patterns, it should be applied intentionally, with strong security discipline and a clear boundary of responsibility.

Used correctly, it restores simplicity where identity systems alone fall short.

Architecture Diagram (Conceptual)

Actors and trust boundaries:

  1. Client (Web / SPA / Mobile)
    • Initiates sign-in using Microsoft Entra ID
    • Receives an Azure AD access token
    • Never handles application secrets or signing keys
  2. Microsoft Entra ID (Azure AD)
    • Authenticates the user
    • Issues a standards-based OAuth 2.0 / OpenID Connect token
    • Acts as the external identity authority
  3. Authorization API (Your Backend)
    • Validates the Azure AD token (issuer, audience, signature, expiry, scopes)
    • Applies application-specific authorization logic
    • Queries internal data sources (database, feature flags, tenant configuration)
    • Issues a short-lived, application-signed JWT
  4. Downstream APIs / Services
    • Trust only the application issuer
    • Validate the application JWT using published signing keys (JWKS)
    • Enforce authorization using domain-specific claims

Token flow:

  • Azure AD token → proves identity
  • Application-issued JWT → encodes authorization

This design creates a clean boundary where:

  • Identity remains centralized and externally managed
  • Authorization becomes explicit, testable, and owned by the application
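
On the receiving side, a downstream service needs only your JWKS endpoint and the claim contract. Here is a minimal sketch with PyJWT, reusing the hypothetical issuer, audience, and perms claim from the earlier sketches:

import jwt

APP_JWKS_URL = "https://auth.example.internal/.well-known/jwks.json"
jwks_client = jwt.PyJWKClient(APP_JWKS_URL)

def authorize(token: str, required_perm: str) -> dict:
    key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        key.key,
        algorithms=["RS256"],
        audience="internal-services",
        issuer="https://auth.example.internal",
    )
    # Domain-specific authorization: no Azure tenants, scopes,
    # or group semantics needed here.
    if required_perm not in claims.get("perms", []):
        raise PermissionError(required_perm)
    return claims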

TLS on a Simple Dockerized WordPress VM (Certbot + Nginx)

This note documents how TLS was issued, configured, and made fully automatic for a WordPress site running on a single Ubuntu VM with Docker, Nginx, PHP-FPM, and MariaDB.

The goal was boring, predictable HTTPS: no load balancers, no Front Door, no App Service magic.


Architecture Context

  • Host: Azure Ubuntu VM (public IP)
  • Web server: Nginx (Docker container)
  • App: WordPress (PHP-FPM container)
  • DB: MariaDB (container)
  • TLS: Let's Encrypt via Certbot (host-level)
  • DNS: Azure DNS → VM public IP
  • Ports:
    • 80 → HTTP (redirect + ACME challenge)
    • 443 → HTTPS

1. Certificate Issuance (Initial)

Certbot was installed on the VM (host), not inside Docker.

Initial issuance was done using standalone mode (acceptable for first issuance):

sudo certbot certonly \
  --standalone \
  -d shahzadblog.com

This required:

  • Port 80 temporarily free
  • Docker/nginx stopped during issuance

Resulting certs live at:

/etc/letsencrypt/live/shahzadblog.com/
  ├── fullchain.pem
  └── privkey.pem

2. Nginx TLS Configuration (Docker)

Nginx runs in Docker and mounts the host cert directory read-only.

Docker Compose (nginx excerpt)

nginx:
  image: nginx:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./wordpress:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt:/etc/letsencrypt:ro

Nginx config (key points)

  • Explicit HTTP → HTTPS redirect
  • TLS configured with Let's Encrypt certs
  • HTTP left available only for ACME challenges

# HTTP (ACME + redirect)
server {
    listen 80;
    server_name shahzadblog.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
        allow all;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS
server {
    listen 443 ssl;
    http2 on;

    server_name shahzadblog.com;

    ssl_certificate     /etc/letsencrypt/live/shahzadblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shahzadblog.com/privkey.pem;

    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass wordpress:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

3. Why Standalone Renewal Failed

Certbot auto-renew initially failed with:

Could not bind TCP port 80

Reason:

  • Docker/nginx already listening on port 80
  • Standalone renewal always tries to bind port 80

This is expected behavior.


4. Switching to Webroot Renewal (Correct Fix)

Instead of stopping Docker every 60–90 days, renewal was switched to webroot mode.

Key Insight

Certbot (host) and Nginx (container) must point to the same physical directory.

  • Nginx serves:
    ~/wp-docker/wordpress → /var/www/html (container)
  • Certbot must write challenges into:
    ~/wp-docker/wordpress/.well-known/acme-challenge

5. Renewal Config Fix (Critical Step)

Edit the renewal file:

sudo nano /etc/letsencrypt/renewal/shahzadblog.com.conf

Change:

authenticator = standalone

To:

authenticator = webroot
webroot_path = /home/azureuser/wp-docker/wordpress

⚠️ Do not use /var/www/html here; that path exists only inside Docker.


6. Filesystem Permissions

Because Docker created WordPress files as root, the ACME path had to be created with sudo:

sudo mkdir -p /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge
sudo chmod -R 755 /home/azureuser/wp-docker/wordpress/.well-known

Validation test:

echo test | sudo tee /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt
curl http://shahzadblog.com/.well-known/acme-challenge/test.txt

Expected output:

test

7. Final Renewal Test (Success Condition)

sudo certbot renew --dry-run

Success message:

Congratulations, all simulated renewals succeeded!

At this point:

  • Certbot timer is active
  • Docker/nginx stays running
  • No port conflicts
  • No manual intervention required

Final State (What "Done" Looks Like)

  • 🔒 HTTPS works in all browsers
  • 🔁 Cert auto-renews in the background (spot-checked below)
  • 🐳 Docker untouched during renewals
  • 💸 No additional Azure services
  • 🧠 Minimal moving parts
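
A quick spot-check that the certificate being served is actually the renewed one (a sketch using only the Python standard library; the hostname is the site from this note):

import datetime
import socket
import ssl

HOST = "shahzadblog.com"

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        not_after = tls.getpeercert()["notAfter"]

expiry = datetime.datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after))
days_left = (expiry - datetime.datetime.now()).days
print(f"{HOST} certificate expires {expiry:%Y-%m-%d} ({days_left} days left)")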

Key Lessons

  • Standalone mode is fine for first issuance, not renewal
  • In Docker setups, filesystem alignment matters more than ports
  • Webroot renewal is the simplest long-term option
  • Don't fight permissions; use sudo intentionally
  • "Simple & boring" scales better than clever abstractions

This setup is intentionally non-enterprise, low-cost, and stable: exactly what a long-running personal site needs.

Writing code is over

Ryan Dahl built Node.js.

Now he says writing code is over.

When the engineer who helped define modern software says this, pay attention.

Not because coding is dead.

Because the 𝘃𝗮𝗹𝘂𝗲 𝗺𝗼𝘃𝗲𝗱.

๐—”๐—œ ๐—ฑ๐—ผ๐—ฒ๐˜€๐—ปโ€™๐˜ ๐—ฒ๐—น๐—ถ๐—บ๐—ถ๐—ป๐—ฎ๐˜๐—ฒ ๐—ฒ๐—ป๐—ด๐—ถ๐—ป๐—ฒ๐—ฒ๐—ฟ๐˜€.

๐—œ๐˜ ๐—ฒ๐—น๐—ถ๐—บ๐—ถ๐—ป๐—ฎ๐˜๐—ฒ๐˜€ ๐˜๐—ต๐—ฒ ๐—ถ๐—น๐—น๐˜‚๐˜€๐—ถ๐—ผ๐—ป ๐˜๐—ต๐—ฎ๐˜ ๐˜„๐—ฟ๐—ถ๐˜๐—ถ๐—ป๐—ด ๐—ฐ๐—ผ๐—ฑ๐—ฒ ๐˜„๐—ฎ๐˜€ ๐˜๐—ต๐—ฒ ๐—ท๐—ผ๐—ฏ.

๐—ง๐—ต๐—ฒ ๐—ข๐—น๐—ฑ ๐— ๐—ผ๐—ฑ๐—ฒ๐—น

Value lived in syntax.

Output was measured in lines of code.

๐—ง๐—ต๐—ฒ ๐—˜๐—บ๐—ฒ๐—ฟ๐—ด๐—ถ๐—ป๐—ด ๐— ๐—ผ๐—ฑ๐—ฒ๐—น

Value lives in systems thinking.

Output is measured in correctness, resilience, and architecture.

You can already see this shift.

The meeting where no one debates the code.

They debate the 𝗮𝘀𝘀𝘂𝗺𝗽𝘁𝗶𝗼𝗻.

The 𝘁𝗿𝗮𝗱𝗲𝗼𝗳𝗳.
The 𝗳𝗮𝗶𝗹𝘂𝗿𝗲 𝗺𝗼𝗱𝗲.

The code is already there.

The decision is not.

๐—ฆ๐˜†๐—ป๐˜๐—ฎ๐˜… ๐˜„๐—ฎ๐˜€ ๐—ป๐—ฒ๐˜ƒ๐—ฒ๐—ฟ ๐˜๐—ต๐—ฒ ๐˜€๐—ฐ๐—ฎ๐—ฟ๐—ฐ๐—ฒ ๐˜€๐—ธ๐—ถ๐—น๐—น.

๐—๐˜‚๐—ฑ๐—ด๐—บ๐—ฒ๐—ป๐˜ ๐˜„๐—ฎ๐˜€.

๐— ๐—ฌ ๐—ง๐—”๐—ž๐—˜๐—”๐—ช๐—”๐—ฌ

The future of software is not necessarily fewer engineers.

Itโ€™s engineers operating at a higher level of consequence.

Teams that optimize for systems will compound.

Teams that optimize for syntax will stall.

Rebuilding My Personal Blog on Azure: Lessons From the Trenches

In January, I decided to rebuild my personal WordPress blog on Azure.

Not as a demo.
Not as a "hello world."
But as a long-running, low-cost, production-grade personal workload: something I could realistically live with for years.

What followed was a reminder of why real cloud engineering is never about just clicking "Create".


Why I Didn't Use App Service (Again)

I initially explored managed options like Azure App Service and Azure Container Apps. On paper, they're perfect. In practice, for a personal blog:

  • Storage behavior mattered more than storage size
  • Hidden costs surfaced through SMB operations and snapshots
  • PHP versioning and runtime controls were more rigid than expected

Nothing was "wrong", but it wasn't predictable enough for a small, fixed-budget site.

So I stepped back and asked a simpler question:

What is the most boring, controllable architecture that will still work five years from now?


The Architecture I Settled On

I landed on a single Ubuntu VM, intentionally small:

  • Azure VM: B1ms (1 vCPU, 2 GB RAM)
  • OS: Ubuntu 22.04 LTS
  • Stack: Docker + Nginx + WordPress (PHP-FPM) + MariaDB
  • Disk: 30 GB managed disk
  • Access: SSH with key-based auth
  • Networking: Basic NSG, public IP

No autoscaling. No magic. No illusions.

Just something I fully understand.


Azure Policy: A Reality Check

The first thing that blocked me wasn't Linux or Docker; it was Azure Policy.

Every resource creation failed until I added mandatory tags:

  • env
  • costCenter
  • owner

Not just on the VM, but on:

  • Network interfaces
  • Public IPs
  • NSGs
  • Disks
  • VNets

Annoying? Slightly.
Realistic? Absolutely.

This is what production Azure environments actually look like.


The โ€œSmallโ€ Issues That Matter

A few things that sound trivial until you hit them at 2 AM:

  • SSH keys rejected due to incorrect file permissions on Windows/WSL
  • PHP upload limits silently capped at 2 MB
  • Nginx + PHP-FPM + Docker each enforcing their own limits
  • A 129 MB WordPress backup restore failing until every layer agreed
  • Choosing between Premium vs Standard disks for a low-IO workload

None of these are headline features.
All of them determine whether the site actually works.


Cost Reality

My target budget: under $150/month total, including:

  • A static site (tanolis.us)
  • This WordPress blog

The VM-based approach keeps costs:

  • Predictable
  • Transparent
  • Easy to tune (disk tier, VM size, shutdown schedules)

No surprises. No runaway meters.


Why This Experience Matters

This wasn't about WordPress.

It was about:

  • Designing for longevity, not demos
  • Understanding cost behavior, not just pricing
  • Respecting platform guardrails instead of fighting them
  • Choosing simplicity over abstraction when it makes sense

The cloud is easy when everything works.
Engineering starts when it doesnโ€™t.


What's Next

For now, the site is up.
Backups are restored.
Costs are under control.

Next steps (when I feel like it):

  • TLS with Let's Encrypt
  • Snapshot or off-VM backups
  • Minor hardening

But nothing urgent. And that's the point.

Sometimes the best architecture is the one that lets you stop thinking about it.

IDesign Method: An Overview

Software projects often start small and cute, but they can quickly become unmanageable as requirements change. This is usually due to the lack of an appropriate architecture, or an architecture that was not designed for future change.

The IDesign method, developed by Juval Löwy, provides a systematic approach to creating a software architecture that will stand the test of time. Let's explore its key principles.

Avoid functional decomposition
The first principle of IDesign is to avoid functional decomposition: the practice of translating requirements directly into services. For example, if you're building an e-commerce platform, don't create separate services for "user management", "product catalogue", and "order processing" just because those are your main requirements. Instead, IDesign advocates a more thoughtful approach based on volatility.

Volatility-based decomposition
IDesign focuses on identifying areas of volatility: aspects of the system that are likely to change over time. In our e-commerce example, payment methods might be an area of volatility, as you may need to add new payment options in the future.

The three-step process:

1. Identify 3–5 core use cases

What your system does at its most basic level. For our e-commerce platform, these might be:

  • Browse and search for products
  • Manage shopping cart
  • Complete a purchase

2. Identify areas of volatility

These are the aspects of the system that are likely to change. In our e-commerce example:

  • Payment methods
  • Shipping options
  • Product recommendation algorithms

3. Define services

IDesign defines five types of services:

  • Client: Handles user interaction (e.g. web interface)
  • Manager: Orchestrates business use cases
  • Engine: Executes specific business logic
  • Resource Access: Handles data storage and retrieval
  • Utility: Provides cross-cutting functionality

For our e-commerce platform example we might have:

  • A ShoppingManager to orchestrate the shopping process
  • A PaymentEngine to handle different payment methods
  • A ProductCatalogAccess to manage product data
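
To make the layering concrete, here is a minimal sketch of how these three services might collaborate in Python. The class and method names are illustrative assumptions, not part of the IDesign method itself.

# Volatility-based layering sketch. The Manager orchestrates a use case,
# the Engine encapsulates a volatile policy (payment methods), and
# Resource Access hides storage. All names below are illustrative.

class ProductCatalogAccess:
    """Resource Access: the only layer that touches product storage."""
    def price_of(self, product_id: str) -> float:
        return {"book-1": 29.0}.get(product_id, 0.0)  # stand-in for a DB query

class PaymentEngine:
    """Engine: encapsulates the volatile part: payment methods."""
    def charge(self, method: str, amount: float) -> bool:
        handlers = {
            "card": lambda amt: True,    # new payment methods are added here,
            "paypal": lambda amt: True,  # without touching the Manager
        }
        return handlers[method](amount)

class ShoppingManager:
    """Manager: orchestrates the 'complete a purchase' use case."""
    def __init__(self) -> None:
        self.catalog = ProductCatalogAccess()
        self.payments = PaymentEngine()

    def complete_purchase(self, product_id: str, method: str) -> bool:
        amount = self.catalog.price_of(product_id)
        return self.payments.charge(method, amount)

if __name__ == "__main__":
    print(ShoppingManager().complete_purchase("book-1", "card"))  # True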