When Azure AD Claims Aren’t Enough: Issuing Your Own JWT for Custom Authorization

Modern applications often start with Azure AD (now Microsoft Entra ID) for authentication—and for good reason. It’s secure, battle-tested, and integrates seamlessly with Azure-native services.

But as systems grow, teams frequently hit a wall:

“We can authenticate users, but we can’t express our authorization logic cleanly using Entra ID claims alone.”

At that point, one architectural pattern comes into focus:

Validating the Azure AD token, then issuing your own application-specific JWT.

This post explains when this strategy makes sense, when it doesn’t, and how to implement it responsibly.


The Problem: Identity vs Authorization Drift

Azure AD excels at answering one question:

Who is this user?

It does this using:

  • App roles
  • Group membership
  • Scopes
  • Optional claims

However, real-world authorization often depends on things Azure AD cannot evaluate:

  • Data stored in your application database
  • Tenant-specific permissions
  • Feature flags or subscription tiers
  • Time-bound or contextual access rules
  • Row-level or domain-specific authorization logic

Trying to force this logic into Entra ID frequently leads to:

  • Role explosion
  • Overloaded group membership
  • Fragile claim mappings
  • Slow iteration cycles

This is where many systems start to creak.


The Strategy: Token Exchange with an Application-Issued JWT

Instead of overloading Azure AD, you introduce a clear trust boundary.

High-level flow

  1. User authenticates with Azure AD
  2. Client receives an Azure-issued access token
  3. Your API fully validates that token
  4. Your API issues a new, short-lived JWT containing:
    • Application-specific claims
    • Computed permissions
    • Domain-level roles
  5. Downstream services trust your issuer, not Azure AD directly

This is often referred to as:

  • Token exchange
  • Backend-for-Frontend (BFF) token
  • Application-issued JWT

It’s a well-established pattern in enterprise and government systems.
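
To make step 4 concrete, here is a minimal, stdlib-only Python sketch of minting the application JWT. It uses HS256 with a shared secret for brevity; a real deployment would more likely sign with an asymmetric key (RS256) held in Key Vault so downstream services can verify via JWKS. The issuer URL and the `permissions` claim name are illustrative, not prescribed.

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_app_jwt(signing_key: bytes, user_id: str, permissions: list,
                  lifetime_seconds: int = 600) -> str:
    """Mint a short-lived application JWT carrying computed claims.

    The issuer URL and 'permissions' claim name are illustrative."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {
        "iss": "https://auth.example.com",  # your application issuer (hypothetical)
        "sub": user_id,                     # stable user id from the validated Azure AD token
        "iat": now,
        "exp": now + lifetime_seconds,      # short-lived: 5-15 minutes
        "permissions": permissions,         # computed, domain-specific authorization
    }
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    signature = hmac.new(signing_key, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(signature)
```

The important properties are all visible in the payload: a short `exp`, your own `iss`, and computed claims that Azure AD never sees.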


Why This Works Well

1. Azure AD handles authentication; your app handles authorization

This separation keeps responsibilities clean:

  • Azure AD → Identity proof
  • Your API → Business authorization

You avoid pushing business logic into your identity provider, where it doesn’t belong.


2. You can issue dynamic, computed claims

Your API can:

  • Query databases
  • Apply complex logic
  • Evaluate tenant state
  • Calculate effective permissions

Azure AD cannot do this—and shouldn’t.


3. Downstream services stay simple

Instead of every service needing to understand:

  • Azure tenants
  • Scopes vs roles
  • Group semantics

They simply trust:

  • Your issuer
  • Your claim contract

This dramatically simplifies internal service authorization.
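
A downstream service's check against that contract can stay correspondingly small. A stdlib-only sketch, again assuming HS256 with a shared key for brevity (with RS256 the service would verify against your published JWKS instead):

```python
import base64, hashlib, hmac, json, time

def b64url_decode(s: str) -> bytes:
    """Base64url-decode, restoring the padding JWTs strip off."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_app_jwt(token: str, signing_key: bytes, expected_issuer: str):
    """Verify an HS256 application JWT; return its claims, or None if invalid."""
    try:
        header_b64, payload_b64, sig_b64 = token.split(".")
    except ValueError:
        return None  # not a three-part JWT
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected_sig = hmac.new(signing_key, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, b64url_decode(sig_b64)):
        return None  # signature mismatch
    claims = json.loads(b64url_decode(payload_b64))
    if claims.get("iss") != expected_issuer or claims.get("exp", 0) <= time.time():
        return None  # wrong issuer or expired
    return claims
```

Note that the service checks exactly two things beyond the signature: your issuer and the expiry. No Azure tenant knowledge required.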


4. Identity provider portability

If you later introduce:

  • Entra B2C
  • A second tenant
  • External identity providers

Your internal JWT remains the stable contract.
Only the validation layer changes.


When This Is Overkill

This pattern is not always the right choice.

Avoid it if:

  • App roles and groups already express your needs
  • Authorization rules are static
  • You don’t have downstream services
  • You want minimal operational overhead

A second token adds complexity—don’t add it unless it earns its keep.


Security Rules You Must Follow

If you issue your own JWT, there are no shortcuts.

1. Fully validate the Azure token

Always validate:

  • Signature
  • Issuer
  • Audience
  • Expiration
  • Required scopes or roles

If you skip any of these, the pattern collapses.
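
Signature verification is normally delegated to a JWT library wired to Azure AD's OpenID Connect metadata, but the claim-level checks your API must enforce afterwards look roughly like this (claim names follow the Azure AD v2.0 access-token format; the issuer and audience values are placeholders):

```python
import time

def validate_claims(payload: dict, expected_issuer: str, expected_audience: str,
                    required_scopes: set) -> bool:
    """Claim checks applied after the token signature has been verified."""
    if payload.get("iss") != expected_issuer:
        return False  # token minted by a different tenant/authority
    if payload.get("aud") != expected_audience:
        return False  # token intended for a different API
    if payload.get("exp", 0) <= time.time():
        return False  # expired
    granted = set(payload.get("scp", "").split())  # v2.0 'scp' is space-delimited
    return required_scopes.issubset(granted)
```

Skipping any single check lets a token minted for a different app, tenant, or point in time slip through, which is exactly the collapse warned about above.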


2. Keep your token short-lived

Best practice:

  • 5–15 minute lifetime
  • No refresh tokens (unless explicitly designed)

Azure AD should remain responsible for session longevity.


3. Protect and rotate signing keys

  • Use a dedicated signing key
  • Store it securely (Key Vault)
  • Rotate it regularly
  • Publish JWKS if multiple services validate your token

Your token is only as trustworthy as your key management.


4. Be disciplined with claims

Only include what downstream services actually need.

Avoid:

  • Personally identifiable information
  • Large payloads
  • Debug or “just in case” claims

JWTs are not data containers.


A Practical Mental Model

A simple way to reason about this pattern:

Azure AD authenticates people.
Your application authorizes behavior.

If that statement matches your system’s reality, this approach is not only valid—it’s often the cleanest solution.


Final Thoughts

Issuing your own JWT after validating an Azure AD token is not a workaround or an anti-pattern. It’s a deliberate architectural choice used in:

  • Regulated environments
  • Multi-tenant SaaS platforms
  • Government and municipal systems
  • Complex internal platforms

Like all powerful patterns, it should be applied intentionally, with strong security discipline and a clear boundary of responsibility.

Used correctly, it restores simplicity where identity systems alone fall short.

Architecture Diagram (Conceptual)

Actors and trust boundaries:

  1. Client (Web / SPA / Mobile)
    • Initiates sign-in using Microsoft Entra ID
    • Receives an Azure AD access token
    • Never handles application secrets or signing keys
  2. Microsoft Entra ID (Azure AD)
    • Authenticates the user
    • Issues a standards-based OAuth 2.0 / OpenID Connect token
    • Acts as the external identity authority
  3. Authorization API (Your Backend)
    • Validates the Azure AD token (issuer, audience, signature, expiry, scopes)
    • Applies application-specific authorization logic
    • Queries internal data sources (database, feature flags, tenant configuration)
    • Issues a short-lived, application-signed JWT
  4. Downstream APIs / Services
    • Trust only the application issuer
    • Validate the application JWT using published signing keys (JWKS)
    • Enforce authorization using domain-specific claims

Token flow:

  • Azure AD token → proves identity
  • Application-issued JWT → encodes authorization

This design creates a clean boundary where:

  • Identity remains centralized and externally managed
  • Authorization becomes explicit, testable, and owned by the application

TLS on a Simple Dockerized WordPress VM (Certbot + Nginx)

This note documents how TLS was issued, configured, and made fully automatic for a WordPress site running on a single Ubuntu VM with Docker, Nginx, PHP-FPM, and MariaDB.

The goal was boring, predictable HTTPS — no load balancers, no Front Door, no App Service magic.


Architecture Context

  • Host: Azure Ubuntu VM (public IP)
  • Web server: Nginx (Docker container)
  • App: WordPress (PHP-FPM container)
  • DB: MariaDB (container)
  • TLS: Let’s Encrypt via Certbot (host-level)
  • DNS: Azure DNS → VM public IP
  • Ports:
    • 80 → HTTP (redirect + ACME challenge)
    • 443 → HTTPS

1. Certificate Issuance (Initial)

Certbot was installed on the VM (host), not inside Docker.

Initial issuance was done using standalone mode (acceptable for first issuance):

sudo certbot certonly \
  --standalone \
  -d shahzadblog.com

This required:

  • Port 80 temporarily free
  • Docker/nginx stopped during issuance

Resulting certs live at:

/etc/letsencrypt/live/shahzadblog.com/
  ├── fullchain.pem
  └── privkey.pem

2. Nginx TLS Configuration (Docker)

Nginx runs in Docker and mounts the host cert directory read-only.

Docker Compose (nginx excerpt)

nginx:
  image: nginx:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./wordpress:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt:/etc/letsencrypt:ro

Nginx config (key points)

  • Explicit HTTP → HTTPS redirect
  • TLS configured with Let’s Encrypt certs
  • HTTP left available only for ACME challenges

# HTTP (ACME + redirect)
server {
    listen 80;
    server_name shahzadblog.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
        allow all;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS
server {
    listen 443 ssl;
    http2 on;

    server_name shahzadblog.com;

    ssl_certificate     /etc/letsencrypt/live/shahzadblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shahzadblog.com/privkey.pem;

    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass wordpress:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

3. Why Standalone Renewal Failed

Certbot auto-renew initially failed with:

Could not bind TCP port 80

Reason:

  • Docker/nginx already listening on port 80
  • Standalone renewal always tries to bind port 80

This is expected behavior.


4. Switching to Webroot Renewal (Correct Fix)

Instead of stopping Docker every 60–90 days, renewal was switched to webroot mode.

Key Insight

Certbot (host) and Nginx (container) must point to the same physical directory.

  • Nginx serves:
    ~/wp-docker/wordpress → /var/www/html (container)
  • Certbot must write challenges into:
    ~/wp-docker/wordpress/.well-known/acme-challenge

5. Renewal Config Fix (Critical Step)

Edit the renewal file:

sudo nano /etc/letsencrypt/renewal/shahzadblog.com.conf

Change:

authenticator = standalone

To:

authenticator = webroot
webroot_path = /home/azureuser/wp-docker/wordpress

⚠️ Do not use /var/www/html here — that path exists only inside Docker.


6. Filesystem Permissions

Because Docker created WordPress files as root, the ACME path had to be created with sudo:

sudo mkdir -p /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge
sudo chmod -R 755 /home/azureuser/wp-docker/wordpress/.well-known

Validation test:

echo test | sudo tee /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt
curl http://shahzadblog.com/.well-known/acme-challenge/test.txt

Expected output:

test

7. Final Renewal Test (Success Condition)

sudo certbot renew --dry-run

Success message:

Congratulations, all simulated renewals succeeded!

At this point:

  • Certbot timer is active
  • Docker/nginx stays running
  • No port conflicts
  • No manual intervention required
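
Beyond the dry run, a quick stdlib-only Python probe can confirm what certificate the site is actually serving and how many days it has left (assumes the site is reachable on port 443):

```python
import socket, ssl
from datetime import datetime, timezone

def parse_not_after(not_after: str) -> datetime:
    """Parse the 'notAfter' field as returned by ssl getpeercert(),
    e.g. 'Jun  1 12:00:00 2030 GMT'."""
    return datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)

def days_until_expiry(host: str, port: int = 443) -> int:
    """Connect to the host and report days until its TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return (parse_not_after(not_after) - datetime.now(timezone.utc)).days
```

`days_until_expiry("shahzadblog.com")` should stay comfortably above 30 if the timer is doing its job, since Certbot renews roughly 30 days before expiry.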

Final State (What “Done” Looks Like)

  • 🔒 HTTPS works in all browsers
  • 🔁 Cert auto-renews in background
  • 🐳 Docker untouched during renewals
  • 💸 No additional Azure services
  • 🧠 Minimal moving parts

Key Lessons

  • Standalone mode is fine for first issuance, not renewal
  • In Docker setups, filesystem alignment matters more than ports
  • Webroot renewal is the simplest long-term option
  • Don’t fight permissions — use sudo intentionally
  • “Simple & boring” scales better than clever abstractions

This setup is intentionally non-enterprise, low-cost, and stable — exactly what a long-running personal site needs.

Azure Function Flex Consumption Plan and Key Vault

When we use the Azure Functions Flex Consumption plan, the platform dynamically manages the underlying infrastructure. This means the outbound IP addresses are not static or predictable in the way they are with dedicated plans such as an App Service Environment.

The private IP address (172.25.1.187) is an internal virtual network address within the Azure infrastructure, not a public, internet-routable IP address. Key Vault’s firewall filters on public IP addresses or specific virtual network rules, so it won’t accept a non-routable private IP address in its allow list.

The Solution

The correct way to solve this is to use a Virtual Network (VNet) Service Endpoint or an Azure Private Endpoint. This method allows your Azure Function to securely connect to the Key Vault over the Azure backbone network without using public IP addresses.

Here’s how you can implement this:

  1. Integrate Your Azure Function with a Virtual Network
    First, you need to integrate your Azure Function App with a virtual network. This feature allows your function to access resources within a VNet. Since your function app is on a Flex Consumption plan, you’ll need to use the Regional VNet Integration feature.
  2. Configure a VNet Service Endpoint for Key Vault
    Once your function app is integrated into a VNet, you can configure a VNet Service Endpoint on your Key Vault. A service endpoint extends your VNet’s identity to Azure Key Vault, so when a resource in that VNet (like your function app) accesses the Key Vault, the traffic stays on the Azure backbone network instead of going over the public internet.

Steps to configure the VNet Service Endpoint:

  • Go to your Azure Key Vault.
  • Navigate to the Networking blade.
  • Under the Firewalls and virtual networks tab, select Allow public access from specific virtual networks and IP addresses.
  • Click + Add existing virtual networks.
  • Select the virtual network and the subnet that your Azure Function is integrated with.
  • Enable the Service endpoint for Microsoft.KeyVault on the subnet.

(Alternative) Use a Private Endpoint
A more secure and private alternative is to use an Azure Private Endpoint. This creates a private network interface for your Key Vault in your VNet, assigning it a private IP address from your VNet’s address space. This makes the Key Vault accessible only from within your VNet.

Steps to configure the Private Endpoint:

  • Go to your Azure Key Vault.
  • Navigate to the Networking blade.
  • Select the Private endpoint connections tab.
  • Click + Private endpoint.
  • Follow the wizard to create the private endpoint, linking it to your VNet and a specific subnet.
  • Update your function app’s code or configuration to use the private endpoint DNS name for the Key Vault.

Recommendation: The VNet Service Endpoint approach is generally simpler to implement and is the standard solution for this scenario. The Private Endpoint offers a higher level of network isolation and is often preferred for more sensitive applications.

This approach resolves the issue by bypassing the public IP address limitation of the Key Vault firewall and establishing a secure, private connection between your Azure Function and the Key Vault.

Cloud computing

Cloud computing is the on-demand delivery of IT resources over a network. In traditional data centers, compute and storage resources used to be allocated manually by a dedicated IT team. In the cloud, this process is fully automated, leading to increased agility and significant cost savings.

Types of clouds

Cloud types vary depending on who owns or operates them. It is also possible to use more than one cloud at a time in a hybrid or multi-cloud architecture.

Public cloud

Public clouds are owned and managed by a cloud service provider. All resources are shared between multiple tenants. Even though the public cloud market is dominated by three major players, hundreds of smaller public cloud providers exist all over the world and run their public cloud infrastructure on Ubuntu.


Private cloud

A private cloud is owned by an organization or an individual. All resources are exclusively dedicated to a single entity or a service. It runs on the organization’s premises or in an external data center. It is managed by the organization’s operations team or a managed service provider.


Managed cloud

Managed clouds are private clouds that are fully managed by a third-party organisation (aka managed service provider). The customer provides the hardware, but cloud operations and maintenance tasks are outsourced. The cloud can either run on the organisation’s premises or in the managed service provider’s data centre.


Micro cloud

Micro clouds are a new class of infrastructure for on-demand computing at the edge. They differ from internet-of-things (IoT) deployments, which use thousands of single machines or sensors to gather data, in that micro clouds also perform computing tasks. Micro clouds reuse proven cloud primitives, adding the unattended, autonomous and clustering features that resolve typical edge computing challenges.


Hybrid cloud

Hybrid cloud is a cloud computing architecture that consists of at least one public cloud, at least one private cloud and a hybrid cloud manager (HCM). It is one of the most popular trends in the IT industry, adopted by 82% of IT leaders, according to the Cisco 2022 Global Hybrid Cloud Trends Report.


Multi-cloud

Multi-cloud (also referred to as multi cloud or multicloud) is a concept that refers to using multiple clouds from more than one cloud service provider at the same time. The term is also used to refer to the simultaneous running of bare metal, virtualised and containerised workloads.


Cloud computing models

Cloud computing services are usually available to end users in the form of three primary models. Those include infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS) and software-as-a-service (SaaS). Some more specific use cases exist too, such as container-as-a-service (CaaS). However, in essence, they are a subset of the main three.

IaaS

In the IaaS model, you provision resources. Those include the number of virtual CPUs (vCPUs), the amount of RAM, storage, etc. They come in the form of VMs or containers with a pre-installed operating system (OS). You manage everything up from there. IaaS is the most common cloud computing model as it allows for more freedom.

PaaS

In the PaaS model, you provision workloads. While you are still responsible for delivering application code and data management, the PaaS platform takes care of scheduling resources (usually containers) and manages them, including the OS, middleware and runtime. The PaaS model has never been widely adopted due to its overall complexity.

SaaS

In the SaaS model, you provision applications. They are deployed from pre-defined templates and can be configured according to your needs. Everything is managed by the cloud provider. Interest in the SaaS model is constantly increasing as it allows for full automation from the ground up.

|                | Legacy data centre | IaaS           | PaaS           | SaaS           |
| -------------- | ------------------ | -------------- | -------------- | -------------- |
| Applications   | You manage         | You manage     | You manage     | Cloud provider |
| Data           | You manage         | You manage     | You manage     | Cloud provider |
| Runtime        | You manage         | You manage     | Cloud provider | Cloud provider |
| Middleware     | You manage         | You manage     | Cloud provider | Cloud provider |
| O/S            | You manage         | You manage     | Cloud provider | Cloud provider |
| Virtualisation | You manage         | Cloud provider | Cloud provider | Cloud provider |
| Servers        | You manage         | Cloud provider | Cloud provider | Cloud provider |
| Storage        | You manage         | Cloud provider | Cloud provider | Cloud provider |
| Networking     | You manage         | Cloud provider | Cloud provider | Cloud provider |

Reference

https://ubuntu.com/cloud/cloud-computing

App Registration vs Enterprise Applications

When an application is registered in AAD, two types of objects are created in the tenant:

  • Application Object
  • Service Principal Object

The Application Object is what you see under App Registrations in AAD. This object acts as the template where you can go ahead and configure various things like API Permissions, Client Secrets, Branding, App Roles, etc. All these customizations that you make to your app get written to the app manifest file. The application object describes three aspects of an application: how the service can issue tokens in order to access the application, resources that the application might need to access, and the actions that the application can take.

The Service Principal Object is what you see under the Enterprise Applications blade in AAD. Every Application Object (created through the Azure Portal, the Microsoft Graph APIs, or the AzureAD PS module) gets a corresponding Service Principal Object in the Enterprise Applications blade of AAD. A service principal is a concrete instance created from the application object and inherits certain properties from that application object. A service principal is created in each tenant where the application is used and references the globally unique app object. The service principal object defines what the app can actually do in the specific tenant, who can access the app, and what resources the app can access.

Similar to a class in object-oriented programming, the application object has some static properties that are applied to all the created service principals (or application instances).

You can read more on these objects here: https://learn.microsoft.com/en-us/azure/active-directory/develop/app-objects-and-service-principals

Reference

https://learn.microsoft.com/en-us/training/modules/implement-app-registration/2-plan-your-line-business-application-registration-strategy