Claude for Excel just got a lot more accessible

Anthropic has expanded Claude for Excel to Pro-tier customers, following a three-month beta limited to Max and Enterprise plans.

What’s new:

  • Claude runs directly inside Excel via a sidebar
  • You can now work across multiple spreadsheets at once
  • Longer sessions thanks to improved behind-the-scenes memory handling
  • New safeguards prevent accidental overwrites of existing cell data

Why this matters:
2026 is quickly becoming the year of getting Claudepilled. We’ve seen it with code, coworking tools, and now spreadsheets. Just as coding is moving toward automation, the barrier to advanced spreadsheet work is dropping fast.

Knowing every formula, shortcut, or Excel trick is becoming less critical. The real value is shifting toward:

  • Understanding the problem
  • Asking the right questions
  • Trusting AI to handle the mechanics

Excel isn’t going away — but how we use it is fundamentally changing.

Curious how others are already using AI inside spreadsheets 👀

The Bulkhead Pattern: Isolating Failures Between Subsystems

Modern systems are rarely monolithic anymore. They’re composed of APIs, background jobs, databases, external integrations, and shared infrastructure. While this modularity enables scale, it also introduces a risk that’s easy to underestimate:

A failure in one part of the system can cascade and take everything down.

The Bulkhead pattern exists to prevent exactly that.


Where the Name Comes From

The term bulkhead comes from ship design.

Ships are divided into watertight compartments. If one compartment floods, the damage is contained and the ship stays afloat.

In software, the idea is the same:

Partition your system so failures are isolated and do not spread.

Instead of one failure sinking the entire application, only a portion is affected.


The Core Problem Bulkheads Solve

In many systems, subsystems unintentionally share critical resources:

  • Thread pools
  • Database connection pools
  • Memory
  • CPU
  • Network bandwidth
  • External API quotas

When one subsystem misbehaves—slow queries, infinite retries, traffic spikes—it can exhaust shared resources and starve healthy parts of the system.

This leads to:

  • Cascading failures
  • System-wide outages
  • “Everything is down” incidents caused by one weak link

What “Applying the Bulkhead Pattern” Means

When you apply the Bulkhead pattern, you intentionally isolate resources so that a failure in Subsystem A cannot exhaust or block the resources used by Subsystem B.

The goal is failure containment, not failure prevention.

Failures still happen—but they stay local.


A Simple Example

Without Bulkheads

  • Public API and background jobs share:
    • The same App Service
    • The same thread pool
    • The same database connection pool

A spike in background processing:

  • Consumes threads
  • Exhausts DB connections
  • Causes API requests to hang

Result: Total outage


With Bulkheads

  • Public API runs independently
  • Background jobs run in a separate process or service
  • Each has its own execution and scaling limits

Background jobs fail or slow down
API continues serving users

Result: Partial degradation, not total failure


Common Places to Apply Bulkheads

1. Service-level isolation

  • Separate services for:
    • Public APIs
    • Admin APIs
    • Background processing
  • Independent scaling and deployments

This is the most visible form of bulkheading.


2. Execution and thread isolation

  • Dedicated worker pools
  • Separate queues for different workloads
  • Isolation between synchronous and asynchronous processing

This prevents noisy workloads from starving critical paths.
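
A minimal sketch of this in Python, assuming two independent pools (the pool sizes and task functions are illustrative, not a prescription):

# Two isolated thread pools: a saturated jobs pool cannot starve the API pool.
from concurrent.futures import ThreadPoolExecutor

api_pool = ThreadPoolExecutor(max_workers=20, thread_name_prefix="api")
jobs_pool = ThreadPoolExecutor(max_workers=5, thread_name_prefix="jobs")

def process_request(payload):
    return {"ok": True, "echo": payload}  # placeholder critical-path work

def run_job(job):
    pass  # placeholder background work

def handle_request(payload):
    # Critical path: always runs on the API pool.
    return api_pool.submit(process_request, payload)

def enqueue_background_job(job):
    # Noisy workload: confined to its own, smaller pool.
    return jobs_pool.submit(run_job, job)

Even if every jobs_pool worker hangs, the API pool keeps all twenty of its threads.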


3. Dependency isolation

  • Separate databases or schemas per workload
  • Read replicas for reporting
  • Independent external API clients with their own timeouts and retries

A slow dependency should not block unrelated operations.
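
A sketch of the same idea at the HTTP-client level, using Python's requests library; the endpoints and timeout values are assumptions for illustration:

import requests

# One session and one timeout policy per external dependency, so a slow
# payments provider cannot delay unrelated geocoding calls.
payments = requests.Session()
geocoding = requests.Session()

PAYMENTS_TIMEOUT = (3, 5)    # (connect, read) seconds: fail fast on the critical path
GEOCODING_TIMEOUT = (3, 30)  # enrichment work can afford to wait longer

def charge(order_id: str):
    return payments.post("https://payments.example/charge",
                         json={"order": order_id}, timeout=PAYMENTS_TIMEOUT)

def locate(address: str):
    return geocoding.get("https://geo.example/lookup",
                         params={"q": address}, timeout=GEOCODING_TIMEOUT)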


4. Rate and quota isolation

  • Per-tenant throttling
  • Per-client limits
  • Separate API routes with different rate policies

Abuse or spikes from one consumer don’t impact others.
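
As a toy illustration, a per-tenant token bucket keeps one noisy tenant from consuming everyone's capacity. The rate and burst numbers are arbitrary, and a real system would keep this state in a shared store:

import time
from collections import defaultdict

RATE = 10   # tokens replenished per second, per tenant (illustrative)
BURST = 20  # bucket capacity (illustrative)

_buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow(tenant_id: str) -> bool:
    b = _buckets[tenant_id]
    now = time.monotonic()
    b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
    b["ts"] = now
    if b["tokens"] >= 1:
        b["tokens"] -= 1
        return True
    return False  # this tenant is throttled; all other tenants are unaffected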


Cloud-Native Bulkheads (Real-World Examples)

You may already be using the Bulkhead pattern without explicitly naming it.

  • Web APIs separated from background jobs
  • Reporting workloads isolated from transactional databases
  • Admin endpoints deployed separately from public endpoints
  • Async processing moved to queues instead of inline execution

All of these are bulkheads in practice.


Bulkhead vs Circuit Breaker (Quick Clarification)

These patterns are often mentioned together, but they solve different problems:

  • Bulkhead pattern
    Prevents failures from spreading by isolating resources
  • Circuit breaker pattern
    Stops calling a dependency that is already failing

Think of bulkheads as structural isolation and circuit breakers as runtime protection.

Used together, they significantly improve system resilience.
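
To make the contrast concrete, here is a toy circuit breaker in Python; the threshold and cooldown are arbitrary, and production systems typically reach for a library such as Polly or resilience4j instead:

import time

class CircuitBreaker:
    # After `threshold` consecutive failures, skip the dependency entirely
    # for `cooldown` seconds instead of letting callers pile up behind it.
    def __init__(self, threshold: int = 5, cooldown: float = 30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: dependency call skipped")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

A bulkhead would instead cap how many callers can wait on that dependency at once; the two mechanisms compose naturally.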


Why This Pattern Matters in Production

Bulkheads:

  • Reduce blast radius
  • Turn outages into degradations
  • Protect critical user paths
  • Make systems predictable under stress

Most large-scale outages aren’t caused by a single bug—they’re caused by uncontained failures.

Bulkheads give you containment.


A Practical Mental Model

A simple way to reason about the pattern:

“What happens to the rest of the system if this component misbehaves?”

If the answer is “everything slows down or crashes”, you probably need a bulkhead.


Final Thoughts

The Bulkhead pattern isn’t about adding complexity—it’s about intentional boundaries.

You don’t need microservices everywhere.
You don’t need perfect isolation.

But you do need to decide:

  • Which failures are acceptable
  • Which paths must stay alive
  • Which resources must never be shared

Applied thoughtfully, bulkheads are one of the most effective tools for building systems that survive real-world conditions.

Bulkhead Pattern in Azure (Practical Examples)

Azure makes it relatively easy to apply the Bulkhead pattern because many services naturally enforce isolation boundaries.

Here are common, production-proven ways bulkheads show up in Azure architectures:

1. Separate compute for different workloads

  • Public-facing APIs hosted in:
    • Azure App Service
    • Azure Container Apps
  • Background processing hosted in:
    • Azure Functions
    • WebJobs
    • Container Apps Jobs

Each workload:

  • Scales independently
  • Has its own CPU, memory, and execution limits

A failure or spike in background processing does not starve user-facing traffic.


2. Queue-based isolation with Azure Storage or Service Bus

Using:

  • Azure Storage Queues
  • Azure Service Bus

…creates a natural bulkhead between:

  • Request handling
  • Long-running or unreliable work

If downstream processing slows or fails:

  • Messages accumulate
  • The API remains responsive

This is one of the most effective bulkheads in cloud-native systems.
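
A sketch of the enqueue side in Python with the azure-storage-queue package; the connection string, queue name, and payload shape are placeholders:

import json
import os
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"],  # assumed to be provided via env
    queue_name="orders-to-process",           # placeholder queue name
)

def handle_order_request(order: dict) -> dict:
    # The bulkhead: the API only enqueues. If downstream processing slows
    # or fails, messages accumulate and the API stays responsive.
    queue.send_message(json.dumps(order))
    return {"status": "accepted"}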


3. Database workload separation

Common Azure patterns include:

  • Primary database for transactional workloads
  • Read replicas or secondary databases for reporting
  • Separate databases or schemas for batch jobs

Heavy analytics or reporting queries can no longer block critical application paths.


4. Rate limiting and ingress isolation

Using:

  • Azure API Management
  • Azure Front Door

You can enforce:

  • Per-client or per-tenant throttling
  • Separate rate policies for public vs admin APIs

This prevents abusive or noisy consumers from impacting the entire system.


5. Subscription and resource-level boundaries

At a higher level, bulkheads can also be enforced through:

  • Separate Azure subscriptions
  • Dedicated resource groups
  • Independent scaling and budget limits

This limits the blast radius of misconfigurations, cost overruns, or runaway workloads.


Why Azure Bulkheads Matter

In Azure, failures often come from:

  • Unexpected traffic spikes
  • Misbehaving background jobs
  • Cost-driven throttling
  • Shared service limits

Bulkheads turn these into localized incidents instead of platform-wide outages.

When Azure AD Claims Aren’t Enough: Issuing Your Own JWT for Custom Authorization

Modern applications often start with Azure AD (now Microsoft Entra ID) for authentication—and for good reason. It’s secure, battle-tested, and integrates seamlessly with Azure-native services.

But as systems grow, teams frequently hit a wall:

“We can authenticate users, but we can’t express our authorization logic cleanly using Entra ID claims alone.”

At that point, one architectural pattern comes into focus:

Validating the Azure AD token, then issuing your own application-specific JWT.

This post explains when this strategy makes sense, when it doesn’t, and how to implement it responsibly.


The Problem: Identity vs Authorization Drift

Azure AD excels at answering one question:

Who is this user?

It does this using:

  • App roles
  • Group membership
  • Scopes
  • Optional claims

However, real-world authorization often depends on things Azure AD cannot evaluate:

  • Data stored in your application database
  • Tenant-specific permissions
  • Feature flags or subscription tiers
  • Time-bound or contextual access rules
  • Row-level or domain-specific authorization logic

Trying to force this logic into Entra ID frequently leads to:

  • Role explosion
  • Overloaded group membership
  • Fragile claim mappings
  • Slow iteration cycles

This is where many systems start to creak.


The Strategy: Token Exchange with an Application-Issued JWT

Instead of overloading Azure AD, you introduce a clear trust boundary.

High-level flow

  1. User authenticates with Azure AD
  2. Client receives an Azure-issued access token
  3. Your API fully validates that token
  4. Your API issues a new, short-lived JWT containing:
    • Application-specific claims
    • Computed permissions
    • Domain-level roles
  5. Downstream services trust your issuer, not Azure AD directly

This is often referred to as:

  • Token exchange
  • Backend-for-Frontend (BFF) token
  • Application-issued JWT

It’s a well-established pattern in enterprise and government systems.
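
As a concrete illustration, here is a minimal sketch of the exchange in Python using the PyJWT library (pip install pyjwt[crypto]). The tenant ID, audience, issuer URLs, key material, and the load_permissions_from_db helper are all placeholders, not a reference implementation:

import datetime
import jwt
from jwt import PyJWKClient

TENANT_ID = "<your-tenant-id>"           # placeholder
AUDIENCE = "api://your-api-client-id"    # placeholder: your API's App ID URI
AZURE_ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"
JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"

jwks_client = PyJWKClient(JWKS_URL)

def load_permissions_from_db(object_id: str) -> list:
    return ["orders:read"]  # hypothetical lookup; replace with real data access

def validate_azure_token(token: str) -> dict:
    # Full validation: signature, issuer, audience, and expiry.
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(token, signing_key.key, algorithms=["RS256"],
                      audience=AUDIENCE, issuer=AZURE_ISSUER)

def issue_app_token(azure_claims: dict, private_key_pem: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {
        "iss": "https://auth.yourapp.example",   # your issuer (placeholder)
        "aud": "internal-services",              # placeholder internal audience
        "sub": azure_claims["oid"],              # stable Entra ID object ID
        "permissions": load_permissions_from_db(azure_claims["oid"]),
        "iat": now,
        "exp": now + datetime.timedelta(minutes=10),  # short-lived by design
    }
    return jwt.encode(payload, private_key_pem, algorithm="RS256")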


Why This Works Well

1. Azure AD handles authentication; your app handles authorization

This separation keeps responsibilities clean:

  • Azure AD → Identity proof
  • Your API → Business authorization

You avoid pushing business logic into your identity provider, where it doesn’t belong.


2. You can issue dynamic, computed claims

Your API can:

  • Query databases
  • Apply complex logic
  • Evaluate tenant state
  • Calculate effective permissions

Azure AD cannot do this—and shouldn’t.


3. Downstream services stay simple

Instead of every service needing to understand:

  • Azure tenants
  • Scopes vs roles
  • Group semantics

They simply trust:

  • Your issuer
  • Your claim contract

This dramatically simplifies internal service authorization.


4. Identity provider portability

If you later introduce:

  • Entra B2C
  • A second tenant
  • External identity providers

Your internal JWT remains the stable contract.
Only the validation layer changes.


When This Is Overkill

This pattern is not always the right choice.

Avoid it if:

  • App roles and groups already express your needs
  • Authorization rules are static
  • You don’t have downstream services
  • You want minimal operational overhead

A second token adds complexity—don’t add it unless it earns its keep.


Security Rules You Must Follow

If you issue your own JWT, there are no shortcuts.

1. Fully validate the Azure token

Always validate:

  • Signature
  • Issuer
  • Audience
  • Expiration
  • Required scopes or roles

If you skip any of these, the pattern collapses.


2. Keep your token short-lived

Best practice:

  • 5–15 minute lifetime
  • No refresh tokens (unless explicitly designed)

Azure AD should remain responsible for session longevity.


3. Protect and rotate signing keys

  • Use a dedicated signing key
  • Store it securely (Key Vault)
  • Rotate it regularly
  • Publish JWKS if multiple services validate your token

Your token is only as trustworthy as your key management.


4. Be disciplined with claims

Only include what downstream services actually need.

Avoid:

  • Personally identifiable information
  • Large payloads
  • Debug or “just in case” claims

JWTs are not data containers.


A Practical Mental Model

A simple way to reason about this pattern:

Azure AD authenticates people.
Your application authorizes behavior.

If that statement matches your system’s reality, this approach is not only valid—it’s often the cleanest solution.


Final Thoughts

Issuing your own JWT after validating an Azure AD token is not a workaround or an anti-pattern. It’s a deliberate architectural choice used in:

  • Regulated environments
  • Multi-tenant SaaS platforms
  • Government and municipal systems
  • Complex internal platforms

Like all powerful patterns, it should be applied intentionally, with strong security discipline and a clear boundary of responsibility.

Used correctly, it restores simplicity where identity systems alone fall short.

Architecture Diagram (Conceptual)

Actors and trust boundaries:

  1. Client (Web / SPA / Mobile)
    • Initiates sign-in using Microsoft Entra ID
    • Receives an Azure AD access token
    • Never handles application secrets or signing keys
  2. Microsoft Entra ID (Azure AD)
    • Authenticates the user
    • Issues a standards-based OAuth 2.0 / OpenID Connect token
    • Acts as the external identity authority
  3. Authorization API (Your Backend)
    • Validates the Azure AD token (issuer, audience, signature, expiry, scopes)
    • Applies application-specific authorization logic
    • Queries internal data sources (database, feature flags, tenant configuration)
    • Issues a short-lived, application-signed JWT
  4. Downstream APIs / Services
    • Trust only the application issuer
    • Validate the application JWT using published signing keys (JWKS)
    • Enforce authorization using domain-specific claims

Token flow:

  • Azure AD token → proves identity
  • Application-issued JWT → encodes authorization

This design creates a clean boundary where:

  • Identity remains centralized and externally managed
  • Authorization becomes explicit, testable, and owned by the application
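
To show what "trust only the application issuer" looks like in practice, here is a sketch of downstream validation, again with PyJWT; the JWKS URL, issuer, audience, and claim names match the placeholders used earlier and are assumptions:

import jwt
from jwt import PyJWKClient

# Hypothetical JWKS endpoint published by your Authorization API.
app_jwks = PyJWKClient("https://auth.yourapp.example/.well-known/jwks.json")

def authorize(app_token: str, required_permission: str) -> dict:
    signing_key = app_jwks.get_signing_key_from_jwt(app_token)
    claims = jwt.decode(app_token, signing_key.key, algorithms=["RS256"],
                        audience="internal-services",
                        issuer="https://auth.yourapp.example")
    if required_permission not in claims.get("permissions", []):
        raise PermissionError(f"missing permission: {required_permission}")
    return claims

Note that nothing here knows about Azure tenants, scopes, or groups; the service only understands the application's claim contract.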

TLS on a Simple Dockerized WordPress VM (Certbot + Nginx)

This note documents how TLS was issued, configured, and made fully automatic for a WordPress site running on a single Ubuntu VM with Docker, Nginx, PHP-FPM, and MariaDB.

The goal was boring, predictable HTTPS — no load balancers, no Front Door, no App Service magic.


Architecture Context

  • Host: Azure Ubuntu VM (public IP)
  • Web server: Nginx (Docker container)
  • App: WordPress (PHP-FPM container)
  • DB: MariaDB (container)
  • TLS: Let’s Encrypt via Certbot (host-level)
  • DNS: Azure DNS → VM public IP
  • Ports:
    • 80 → HTTP (redirect + ACME challenge)
    • 443 → HTTPS

1. Certificate Issuance (Initial)

Certbot was installed on the VM (host), not inside Docker.

Initial issuance was done using standalone mode (acceptable for first issuance):

sudo certbot certonly \
  --standalone \
  -d shahzadblog.com

This required:

  • Port 80 temporarily free
  • Docker/nginx stopped during issuance

Resulting certs live at:

/etc/letsencrypt/live/shahzadblog.com/
  ├── fullchain.pem
  └── privkey.pem

2. Nginx TLS Configuration (Docker)

Nginx runs in Docker and mounts the host cert directory read-only.

Docker Compose (nginx excerpt)

nginx:
  image: nginx:alpine
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - ./wordpress:/var/www/html
    - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    - /etc/letsencrypt:/etc/letsencrypt:ro

Nginx config (key points)

  • Explicit HTTP → HTTPS redirect
  • TLS configured with Let’s Encrypt certs
  • HTTP left available only for ACME challenges

# HTTP (ACME + redirect)
server {
    listen 80;
    server_name shahzadblog.com;

    location ^~ /.well-known/acme-challenge/ {
        root /var/www/html;
        allow all;
    }

    location / {
        return 301 https://$host$request_uri;
    }
}

# HTTPS
server {
    listen 443 ssl;
    http2 on;

    server_name shahzadblog.com;

    ssl_certificate     /etc/letsencrypt/live/shahzadblog.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/shahzadblog.com/privkey.pem;

    root /var/www/html;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass wordpress:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

3. Why Standalone Renewal Failed

Certbot auto-renew initially failed with:

Could not bind TCP port 80

Reason:

  • Docker/nginx already listening on port 80
  • Standalone renewal always tries to bind port 80

This is expected behavior.


4. Switching to Webroot Renewal (Correct Fix)

Instead of stopping Docker every 60–90 days, renewal was switched to webroot mode.

Key Insight

Certbot (host) and Nginx (container) must point to the same physical directory.

  • Nginx serves:
    ~/wp-docker/wordpress → /var/www/html (container)
  • Certbot must write challenges into:
    ~/wp-docker/wordpress/.well-known/acme-challenge

5. Renewal Config Fix (Critical Step)

Edit the renewal file:

sudo nano /etc/letsencrypt/renewal/shahzadblog.com.conf

Change:

authenticator = standalone

To:

authenticator = webroot
webroot_path = /home/azureuser/wp-docker/wordpress

⚠️ Do not use /var/www/html here — that path exists only inside Docker.


6. Filesystem Permissions

Because Docker created WordPress files as root, the ACME path had to be created with sudo:

sudo mkdir -p /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge
sudo chmod -R 755 /home/azureuser/wp-docker/wordpress/.well-known

Validation test:

echo test | sudo tee /home/azureuser/wp-docker/wordpress/.well-known/acme-challenge/test.txt
curl http://shahzadblog.com/.well-known/acme-challenge/test.txt

Expected output:

test

7. Final Renewal Test (Success Condition)

sudo certbot renew --dry-run

Success message:

Congratulations, all simulated renewals succeeded!

At this point:

  • Certbot timer is active
  • Docker/nginx stays running
  • No port conflicts
  • No manual intervention required

Final State (What “Done” Looks Like)

  • 🔒 HTTPS works in all browsers
  • 🔁 Cert auto-renews in background
  • 🐳 Docker untouched during renewals
  • 💸 No additional Azure services
  • 🧠 Minimal moving parts

Key Lessons

  • Standalone mode is fine for first issuance, not renewal
  • In Docker setups, filesystem alignment matters more than ports
  • Webroot renewal is the simplest long-term option
  • Don’t fight permissions — use sudo intentionally
  • “Simple & boring” scales better than clever abstractions

This setup is intentionally non-enterprise, low-cost, and stable — exactly what a long-running personal site needs.

Writing code is over

Ryan Dahl built Node.js.

Now he says writing code is over.

When the engineer who helped define modern software says this, pay attention.

Not because coding is dead.

Because the value moved.

AI doesn’t eliminate engineers.

It eliminates the illusion that writing code was the job.

The Old Model

Value lived in syntax.

Output was measured in lines of code.

The Emerging Model

Value lives in systems thinking.

Output is measured in correctness, resilience, and architecture.

You can already see this shift.

The meeting where no one debates the code.

They debate the assumption.

The tradeoff.
The failure mode.

The code is already there.

The decision is not.

Syntax was never the scarce skill.

Judgment was.

My Takeaway

The future of software is not necessarily fewer engineers.

It’s engineers operating at a higher level of consequence.

Teams that optimize for systems will compound.

Teams that optimize for syntax will stall.