The Adolescence of Technology

Dario Amodei just published a new essay, “The Adolescence of Technology” — and it’s one of the most sobering AI reads in recent memory.

If his 2024 essay “Machines of Loving Grace” explored the optimistic ceiling of AI, this one does the opposite: it stares directly at the floor.

Amodei frames advanced AI as “a country of geniuses in a data center” — immensely powerful, economically irresistible, and increasingly hard to control.

Key takeaways:

Job disruption is imminent. Amodei predicts up to 50% of entry-level white-collar jobs could be displaced in the next 1–5 years, with shocks arriving faster than societies can adapt.

National-scale risks are real. He explicitly calls out bioterrorism, autonomous weapons, AI-assisted authoritarianism, and mass surveillance as plausible near-term outcomes.

Economic incentives work against restraint. Even when risks are obvious, the productivity upside makes slowing down “very difficult for human civilization.”

AI labs themselves are a risk vector. During internal safety testing at Anthropic, Claude reportedly demonstrated deceptive and blackmail-like behavior — a reminder that alignment failures aren’t theoretical.

Policy matters now, not later. Amodei argues for chip export controls, stronger oversight, and far greater transparency from frontier labs.

Why this matters

This isn’t coming from an AI critic on the sidelines — it’s coming from someone building frontier systems every day.

What makes The Adolescence of Technology unsettling isn’t alarmism; it’s the calm assertion that the next few years are decisive. Either we steer toward an AI-powered golden age — or we drift into outcomes we won’t be able to roll back.

This essay is a must-read for anyone working in tech, policy, or leadership. The adolescence phase doesn’t last long — and what we normalize now may define the rest of the century.

https://claude.com/blog/interactive-tools-in-claude

How Azure Handles Large File Uploads: From Blob Storage to Event-Driven Processing (and What Breaks at 2AM)

Uploading a large file to Azure sounds simple — until you need to process it reliably, at scale, with retries, alerts, and zero surprises at 2AM.

This article walks through how Azure actually handles large file uploads, using a 10-GB video as a concrete example, and then dives into real-world failure modes that show up only in production.

We’ll cover:

  • How Azure uploads large files safely
  • When and how events are emitted
  • How Functions and queues fit together
  • Why retries and poison queues exist
  • What silently breaks when nobody is watching

Azure Blob Storage: Large Files, Small Pieces

Azure Blob Storage supports extremely large files — but a large file is never uploaded in a single request.

Most files are stored as block blobs, which are composed of many independently uploaded blocks.

Block blob limits (the important ones)

  • Max block size: 4,000 MiB (~4 GiB)
  • Max blocks per blob: 50,000
  • Max blob size: ~190 TiB

Example: Uploading a 10-GB video

A 10-GB video is uploaded as:

  • Block 1: 4 GB
  • Block 2: 4 GB
  • Block 3: ~2 GB

Each block is uploaded with Put Block, and once all blocks are present, a final Put Block List call commits the blob.

Key insight: Blocks are an upload implementation detail. Once committed, the blob is treated as a single file.

Client tools like AzCopy, Azure SDKs, and Storage Explorer handle this chunking automatically.
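
To make the chunking concrete, here is a minimal sketch of a manual block upload using the Python SDK (azure-storage-blob). The connection string, container name, file name, and 8 MiB block size are illustrative assumptions; in practice, upload_blob performs this chunking for you.

```python
# Minimal sketch: manual block upload with azure-storage-blob.
# Connection string, container, blob name, and block size are illustrative;
# BlobClient.upload_blob() does this chunking automatically.
import os
import uuid

from azure.storage.blob import BlobBlock, BlobClient

CONN_STR = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
BLOCK_SIZE = 8 * 1024 * 1024  # 8 MiB per block (the service allows up to 4,000 MiB)

blob = BlobClient.from_connection_string(
    CONN_STR, container_name="videos", blob_name="big-video.mp4"
)

block_ids = []
with open("big-video.mp4", "rb") as f:
    while True:
        chunk = f.read(BLOCK_SIZE)
        if not chunk:
            break
        block_id = uuid.uuid4().hex                      # unique, equal-length IDs
        blob.stage_block(block_id=block_id, data=chunk)  # Put Block
        block_ids.append(BlobBlock(block_id=block_id))

# Nothing is visible (and no BlobCreated event fires) until this commit:
blob.commit_block_list(block_ids)                        # Put Block List
```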


When Does Azure Emit an Event?

Uploading blocks does not trigger processing.

Events are emitted only after the blob is fully committed.

This is where Azure Event Grid comes in.

BlobCreated event flow

  1. Final Put Block List completes
  2. Blob Storage emits a BlobCreated event
  3. Event Grid routes the event to subscribers

Important: Event Grid fires once per blob, not once per block.

This guarantees downstream systems never see partial uploads.


Azure Functions: Reacting to Blob Uploads

Azure Functions does not poll Blob Storage in modern designs. Instead, it reacts to events.

Two trigger models (only one you should use)

  • Event Grid trigger (recommended)
    Push-based, near real-time, scalable
  • Classic Blob trigger (legacy)
    Polling-based, slower, less predictable

In production architectures, Event Grid–based triggers are the standard.
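
As a concrete illustration, here is a minimal sketch of an Event Grid–triggered function using the Azure Functions Python v2 programming model. The queue name, the AzureWebJobsStorage connection setting, and the fields forwarded to the worker are illustrative assumptions.

```python
# Minimal sketch: Event Grid-triggered function that hands work off to a queue.
# Queue name and the "AzureWebJobsStorage" connection setting are illustrative.
import json

import azure.functions as func

app = func.FunctionApp()

@app.event_grid_trigger(arg_name="event")
@app.queue_output(
    arg_name="msg",
    queue_name="video-processing",
    connection="AzureWebJobsStorage",
)
def on_blob_created(event: func.EventGridEvent, msg: func.Out[str]) -> None:
    data = event.get_json()  # the BlobCreated payload (url, eTag, contentLength, ...)
    # Keep the function fast: forward only what the worker needs.
    msg.set(json.dumps({
        "url": data.get("url"),
        "contentLength": data.get("contentLength"),
        "eTag": data.get("eTag"),
    }))
```

The function itself does as little as possible: it extracts the fields the worker needs and enqueues a message, which is exactly where the next section picks up.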


Why Queues Are Inserted into the Pipeline

Direct processing works — until load increases or dependencies slow down.

This is why many designs add a queue:

Azure Storage Queue

Blob uploaded
   ↓
Event Grid event
   ↓
Azure Function
   ↓
Message written to queue

Queues provide:

  • Backpressure
  • Retry handling
  • Isolation between ingestion and processing
  • Protection against traffic spikes

Visibility Timeouts: How Retries Actually Work

Storage queues don’t use acknowledgments. Instead, they rely on visibility timeouts.

What is a visibility timeout?

When a worker dequeues a message:

  • The message becomes invisible for a configured period
  • If processing succeeds → message is deleted
  • If processing fails → message becomes visible again

Each retry increments DequeueCount.

This is the foundation of retry behavior in Azure Storage Queues.
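
To show how this plays out in code, here is a minimal sketch of a standalone worker that leans on visibility timeouts. The queue name, connection string, and timeout are illustrative, and an Azure Functions queue trigger would run this loop for you.

```python
# Minimal sketch: a worker relying on visibility timeouts instead of acks.
# Queue name, connection string, and the 5-minute timeout are illustrative.
import os
import time

from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"], "video-processing"
)

def process(body: str) -> None:
    ...  # your processing logic (assumed)

while True:
    # Each received message stays invisible to other workers for 5 minutes.
    for msg in queue.receive_messages(visibility_timeout=300):
        try:
            process(msg.content)
            queue.delete_message(msg)   # success: remove it for good
        except Exception:
            # Do nothing: once the visibility timeout expires the message
            # reappears, and dequeue_count increments on the next receive.
            print(f"attempt {msg.dequeue_count} failed for message {msg.id}")
    time.sleep(5)
```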


Poison Queues: When Retries Must Stop

Retries should never be infinite.

With Azure Functions + Storage Queues:

  • Once maxDequeueCount is exceeded
  • The message is automatically moved to: <queue-name>-poison

Poison queues:

  • Prevent endless retry loops
  • Preserve failed messages for investigation
  • Enable alerting and replay workflows
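
The Functions runtime does this for you once maxDequeueCount (5 by default) is exceeded. For a custom worker, the equivalent logic looks roughly like this sketch; queue names, connection string, and the limit are illustrative.

```python
# Minimal sketch of what the Functions runtime does automatically: after too
# many dequeues, park the message in "<queue-name>-poison" instead of retrying.
import os

from azure.storage.queue import QueueClient

CONN_STR = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
MAX_DEQUEUE_COUNT = 5  # mirrors the host.json default for queue triggers

work = QueueClient.from_connection_string(CONN_STR, "video-processing")
poison = QueueClient.from_connection_string(CONN_STR, "video-processing-poison")

for msg in work.receive_messages(visibility_timeout=300):
    if msg.dequeue_count > MAX_DEQUEUE_COUNT:
        poison.send_message(msg.content)  # preserve for investigation and replay
        work.delete_message(msg)          # stop the retry loop
        continue
    # ...normal processing happens here...
```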

Failure Modes: “What Breaks at 2AM?”

This is where systems separate happy-path demos from production-ready architectures.

Most failures don’t look like outages — they look like silent degradation.


1️⃣ Event Grid Delivery Failures

Symptom: Blob exists, but processing never starts.

Cause

  • Subscription misconfiguration
  • Endpoint unavailable
  • Permission or auth issues

Mitigation

  • Enable Event Grid dead-lettering
  • Monitor delivery failure metrics
  • Build replay logic

2AM reality: Files are uploaded — nothing processes them.


2️⃣ Duplicate Event Delivery

Symptom: Same file processed twice.

Why
Event Grid guarantees at-least-once delivery, not exactly-once.

Mitigation

  • Idempotent processing
  • Track blob names, ETags, or IDs
  • Reject duplicates at the application layer

2AM reality: Duplicate records, duplicate invoices, duplicate emails.
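
A minimal sketch of the idempotency check suggested above, keyed on blob URL plus ETag. An in-memory set keeps the example short; a real system would use a durable store such as a database table or table storage.

```python
# Minimal sketch of an idempotency guard for at-least-once delivery.
# The in-memory set is for brevity only; use a durable store in production.
processed: set[str] = set()

def handle_event(blob_url: str, etag: str) -> None:
    key = f"{blob_url}:{etag}"
    if key in processed:
        print(f"duplicate event for {key}, skipping")
        return
    processed.add(key)
    # ...process the blob exactly once...
```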


3️⃣ Function Timeouts on Large Files

Symptom: Processing restarts or never completes.

Cause

  • Large file downloads
  • CPU-heavy transformations
  • Insufficient plan sizing

Mitigation

  • Increase visibility timeout
  • Stream blobs instead of loading into memory
  • Offload heavy work to batch or container jobs

2AM reality: Queue backlog grows quietly.


4️⃣ Queue Backlog Explosion

Symptom: Queue depth grows uncontrollably.

Cause

  • Ingestion spikes
  • Downstream throttling
  • Scaling limits

Mitigation

  • Monitor queue length and age
  • Scale consumers
  • Add rate limiting or backpressure

2AM reality: Customers ask why files are “stuck.”


5️⃣ Poison Queue Flood

Symptom: Many messages land in -poison.

Cause

  • Bad file formats
  • Schema changes
  • Logic bugs

Mitigation

  • Alert on poison queue count > 0
  • Log full failure context
  • Build replay workflows

2AM reality: Work is failing — but nobody is alerted.


6️⃣ Storage Cost Spikes from Retries

Symptom: Azure Storage bill jumps unexpectedly.

Cause

  • Short visibility timeouts
  • Repeated blob downloads
  • Excessive retries

Mitigation

  • Tune visibility timeouts
  • Cache progress
  • Monitor transaction counts, not just data size

2AM reality: Finance notices before engineering does.


7️⃣ Partial or Corrupted Uploads

Symptom: Function triggers but input file is invalid.

Cause

  • Client aborted uploads
  • Corrupted block lists
  • Non-atomic upload logic

Mitigation

  • Validate file size and checksum
  • Enforce minimum size thresholds
  • Delay processing until integrity checks pass
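
A minimal sketch of those integrity checks, assuming the expected size travels with the queue message. Note that content_md5 is only present when the uploading client supplied a checksum; the threshold and credential source are illustrative.

```python
# Minimal sketch: validate size (and MD5, when available) before doing any
# expensive processing. Threshold, credential, and parameter names are illustrative.
import os

from azure.storage.blob import BlobClient

MIN_SIZE_BYTES = 1024  # reject obviously truncated files

def is_blob_valid(blob_url: str, expected_size: int | None = None) -> bool:
    blob = BlobClient.from_blob_url(
        blob_url, credential=os.environ["AZURE_STORAGE_KEY"]
    )
    props = blob.get_blob_properties()
    if props.size < MIN_SIZE_BYTES:
        return False
    if expected_size is not None and props.size != expected_size:
        return False
    # content_md5 exists only if the client provided a checksum at upload time.
    if props.content_settings.content_md5 is None:
        print("no checksum available; relying on size checks only")
    return True
```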

8️⃣ Downstream Dependency Failures

Symptom: Upload succeeds — final destination fails (SharePoint, APIs, DBs).

Mitigation

  • Exponential backoff
  • Dead-letter after max retries
  • Store intermediate results for replay

2AM reality: Azure is healthy — the external system isn’t.
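
A minimal sketch of the backoff-then-dead-letter approach above; the delay schedule and attempt limit are illustrative choices.

```python
# Minimal sketch: exponential backoff with a retry cap before dead-lettering.
import time

def call_with_backoff(operation, max_attempts: int = 5) -> None:
    for attempt in range(1, max_attempts + 1):
        try:
            operation()
            return
        except Exception as exc:
            if attempt == max_attempts:
                raise  # let the caller dead-letter / poison the message
            delay = 2 ** attempt  # 2s, 4s, 8s, 16s...
            print(f"attempt {attempt} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)
```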


9️⃣ Silent Failure (The Worst One)

Symptom: System is broken — nobody knows.

Fix
Monitor:

  • Function failure rates
  • Queue depth and age
  • Poison queue counts
  • Event Grid delivery failures

Final Takeaway

Large files in Azure Blob Storage are uploaded in blocks, but Event Grid emits a single event only after the blob is fully committed. Azure Functions react to that event, often enqueueing work for durable processing. Visibility timeouts handle retries, poison queues stop infinite failures, and production readiness depends on designing for duplicate events, backlogs, cost creep, and observability — not just the happy path.

Claude for Excel just got a lot more accessible

Anthropic has expanded Claude for Excel to Pro-tier customers, following a three-month beta that was previously limited to Max and Enterprise plans.

What’s new:

  • Claude runs directly inside Excel via a sidebar
  • You can now work across multiple spreadsheets at once
  • Longer sessions thanks to improved behind-the-scenes memory handling
  • New safeguards prevent accidental overwrites of existing cell data

Why this matters:
2026 is quickly becoming the year of getting Claudepilled. We’ve seen it with code, coworking tools, and now spreadsheets. Just as coding is moving toward automation, the barrier to advanced spreadsheet work is dropping fast.

Knowing every formula, shortcut, or Excel trick is becoming less critical. The real value is shifting toward:

  • Understanding the problem
  • Asking the right questions
  • Trusting AI to handle the mechanics

Excel isn’t going away — but how we use it is fundamentally changing.

Curious how others are already using AI inside spreadsheets 👀

The Bulkhead Pattern: Isolating Failures Between Subsystems

Modern systems are rarely monolithic anymore. They’re composed of APIs, background jobs, databases, external integrations, and shared infrastructure. While this modularity enables scale, it also introduces a risk that’s easy to underestimate:

A failure in one part of the system can cascade and take everything down.

The Bulkhead pattern exists to prevent exactly that.


Where the Name Comes From

The term bulkhead comes from ship design.

Ships are divided into watertight compartments. If one compartment floods, the damage is contained and the ship stays afloat.

In software, the idea is the same:

Partition your system so failures are isolated and do not spread.

Instead of one failure sinking the entire application, only a portion is affected.


The Core Problem Bulkheads Solve

In many systems, subsystems unintentionally share critical resources:

  • Thread pools
  • Database connection pools
  • Memory
  • CPU
  • Network bandwidth
  • External API quotas

When one subsystem misbehaves—slow queries, infinite retries, traffic spikes—it can exhaust shared resources and starve healthy parts of the system.

This leads to:

  • Cascading failures
  • System-wide outages
  • “Everything is down” incidents caused by one weak link

What “Applying the Bulkhead Pattern” Means

When you apply the Bulkhead pattern, you intentionally isolate resources so that:

  • A failure in Subsystem A
  • Cannot exhaust or block resources used by Subsystem B

The goal is failure containment, not failure prevention.

Failures still happen—but they stay local.


A Simple Example

Without Bulkheads

  • Public API and background jobs share:
    • The same App Service
    • The same thread pool
    • The same database connection pool

A spike in background processing:

  • Consumes threads
  • Exhausts DB connections
  • Causes API requests to hang

Result: Total outage


With Bulkheads

  • Public API runs independently
  • Background jobs run in a separate process or service
  • Each has its own execution and scaling limits

Background jobs fail or slow down
API continues serving users

Result: Partial degradation, not total failure


Common Places to Apply Bulkheads

1. Service-level isolation

  • Separate services for:
    • Public APIs
    • Admin APIs
    • Background processing
  • Independent scaling and deployments

This is the most visible form of bulkheading.


2. Execution and thread isolation

  • Dedicated worker pools
  • Separate queues for different workloads
  • Isolation between synchronous and asynchronous processing

This prevents noisy workloads from starving critical paths.
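
As a small illustration, here is a sketch of a thread-level bulkhead in Python: two fixed-size pools, so bursty background work can never consume the threads that serve user requests. Pool sizes and the workload functions are assumptions.

```python
# Minimal sketch of a thread-level bulkhead: separate fixed-size pools for
# user-facing requests and background jobs. Sizes and workloads are illustrative.
from concurrent.futures import ThreadPoolExecutor

api_pool = ThreadPoolExecutor(max_workers=16, thread_name_prefix="api")
jobs_pool = ThreadPoolExecutor(max_workers=4, thread_name_prefix="jobs")

def handle_api_request(request_id: str) -> None:
    ...  # latency-sensitive user-facing work

def run_background_job(job_id: str) -> None:
    ...  # slow, bursty batch work

# Even if thousands of jobs are submitted, only 4 threads ever run them;
# the 16 API threads stay available for user traffic.
for job_id in range(10_000):
    jobs_pool.submit(run_background_job, str(job_id))

api_pool.submit(handle_api_request, "req-42")
```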


3. Dependency isolation

  • Separate databases or schemas per workload
  • Read replicas for reporting
  • Independent external API clients with their own timeouts and retries

A slow dependency should not block unrelated operations.


4. Rate and quota isolation

  • Per-tenant throttling
  • Per-client limits
  • Separate API routes with different rate policies

Abuse or spikes from one consumer don’t impact others.
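
A minimal sketch of per-tenant isolation using one concurrency cap per tenant; the limit and the work being protected are illustrative.

```python
# Minimal sketch: each tenant gets its own concurrency cap, so one noisy
# tenant cannot exhaust shared capacity. The limit of 5 is illustrative.
import asyncio
from collections import defaultdict

MAX_CONCURRENT_PER_TENANT = 5
tenant_limits: dict[str, asyncio.Semaphore] = defaultdict(
    lambda: asyncio.Semaphore(MAX_CONCURRENT_PER_TENANT)
)

async def handle_request(tenant_id: str) -> None:
    async with tenant_limits[tenant_id]:
        await do_work(tenant_id)

async def do_work(tenant_id: str) -> None:
    await asyncio.sleep(0.1)  # placeholder for the real request handling
```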


Cloud-Native Bulkheads (Real-World Examples)

You may already be using the Bulkhead pattern without explicitly naming it.

  • Web APIs separated from background jobs
  • Reporting workloads isolated from transactional databases
  • Admin endpoints deployed separately from public endpoints
  • Async processing moved to queues instead of inline execution

All of these are bulkheads in practice.


Bulkhead vs Circuit Breaker (Quick Clarification)

These patterns are often mentioned together, but they solve different problems:

  • Bulkhead pattern
    Prevents failures from spreading by isolating resources
  • Circuit breaker pattern
    Stops calling a dependency that is already failing

Think of bulkheads as structural isolation and circuit breakers as runtime protection.

Used together, they significantly improve system resilience.


Why This Pattern Matters in Production

Bulkheads:

  • Reduce blast radius
  • Turn outages into degradations
  • Protect critical user paths
  • Make systems predictable under stress

Most large-scale outages aren’t caused by a single bug—they’re caused by uncontained failures.

Bulkheads give you containment.


A Practical Mental Model

A simple way to reason about the pattern:

“What happens to the rest of the system if this component misbehaves?”

If the answer is “everything slows down or crashes”, you probably need a bulkhead.


Final Thoughts

The Bulkhead pattern isn’t about adding complexity—it’s about intentional boundaries.

You don’t need microservices everywhere.
You don’t need perfect isolation.

But you do need to decide:

  • Which failures are acceptable
  • Which paths must stay alive
  • Which resources must never be shared

Applied thoughtfully, bulkheads are one of the most effective tools for building systems that survive real-world conditions.

Bulkhead Pattern in Azure (Practical Examples)

Azure makes it relatively easy to apply the Bulkhead pattern because many services naturally enforce isolation boundaries.

Here are common, production-proven ways bulkheads show up in Azure architectures:

1. Separate compute for different workloads

  • Public-facing APIs hosted in:
    • Azure App Service
    • Azure Container Apps
  • Background processing hosted in:
    • Azure Functions
    • WebJobs
    • Container Apps Jobs

Each workload:

  • Scales independently
  • Has its own CPU, memory, and execution limits

A failure or spike in background processing does not starve user-facing traffic.


2. Queue-based isolation with Azure Storage or Service Bus

Using:

  • Azure Storage Queues
  • Azure Service Bus

…creates a natural bulkhead between:

  • Request handling
  • Long-running or unreliable work

If downstream processing slows or fails:

  • Messages accumulate
  • The API remains responsive

This is one of the most effective bulkheads in cloud-native systems.


3. Database workload separation

Common Azure patterns include:

  • Primary database for transactional workloads
  • Read replicas or secondary databases for reporting
  • Separate databases or schemas for batch jobs

Heavy analytics or reporting queries can no longer block critical application paths.


4. Rate limiting and ingress isolation

Using:

  • Azure API Management
  • Azure Front Door

You can enforce:

  • Per-client or per-tenant throttling
  • Separate rate policies for public vs admin APIs

This prevents abusive or noisy consumers from impacting the entire system.


5. Subscription and resource-level boundaries

At a higher level, bulkheads can also be enforced through:

  • Separate Azure subscriptions
  • Dedicated resource groups
  • Independent scaling and budget limits

This limits the blast radius of misconfigurations, cost overruns, or runaway workloads.


Why Azure Bulkheads Matter

In Azure, failures often come from:

  • Unexpected traffic spikes
  • Misbehaving background jobs
  • Cost-driven throttling
  • Shared service limits

Bulkheads turn these into localized incidents instead of platform-wide outages.

When Azure AD Claims Aren’t Enough: Issuing Your Own JWT for Custom Authorization

Modern applications often start with Azure AD (now Microsoft Entra ID) for authentication—and for good reason. It’s secure, battle-tested, and integrates seamlessly with Azure-native services.

But as systems grow, teams frequently hit a wall:

“We can authenticate users, but we can’t express our authorization logic cleanly using Entra ID claims alone.”

At that point, one architectural pattern comes into focus:

Validating the Azure AD token, then issuing your own application-specific JWT.

This post explains when this strategy makes sense, when it doesn’t, and how to implement it responsibly.


The Problem: Identity vs Authorization Drift

Azure AD excels at answering one question:

Who is this user?

It does this using:

  • App roles
  • Group membership
  • Scopes
  • Optional claims

However, real-world authorization often depends on things Azure AD cannot evaluate:

  • Data stored in your application database
  • Tenant-specific permissions
  • Feature flags or subscription tiers
  • Time-bound or contextual access rules
  • Row-level or domain-specific authorization logic

Trying to force this logic into Entra ID frequently leads to:

  • Role explosion
  • Overloaded group membership
  • Fragile claim mappings
  • Slow iteration cycles

This is where many systems start to creak.


The Strategy: Token Exchange with an Application-Issued JWT

Instead of overloading Azure AD, you introduce a clear trust boundary.

High-level flow

  1. User authenticates with Azure AD
  2. Client receives an Azure-issued access token
  3. Your API fully validates that token
  4. Your API issues a new, short-lived JWT containing:
    • Application-specific claims
    • Computed permissions
    • Domain-level roles
  5. Downstream services trust your issuer, not Azure AD directly

This is often referred to as:

  • Token exchange
  • Backend-for-Frontend (BFF) token
  • Application-issued JWT

It’s a well-established pattern in enterprise and government systems.
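
To make the flow concrete, here is a minimal sketch of the exchange step using PyJWT. The tenant ID, audience, issuer, claim names, and signing-key source are placeholders; in a real deployment the private key lives in Key Vault and you publish a JWKS endpoint for downstream validation.

```python
# Minimal sketch of the token exchange with PyJWT. Tenant ID, audience,
# issuer, claim names, and the signing key are illustrative placeholders.
import time

import jwt  # PyJWT

TENANT_ID = "<your-tenant-id>"
ENTRA_JWKS_URL = f"https://login.microsoftonline.com/{TENANT_ID}/discovery/v2.0/keys"
ENTRA_ISSUER = f"https://login.microsoftonline.com/{TENANT_ID}/v2.0"
API_AUDIENCE = "api://my-api-client-id"

APP_ISSUER = "https://auth.my-app.example"
APP_SIGNING_KEY = "<private RSA key loaded from Key Vault>"

jwks_client = jwt.PyJWKClient(ENTRA_JWKS_URL)

def load_permissions_from_db(user_object_id: str) -> list[str]:
    ...  # query your own data store for effective permissions (assumed)

def exchange(entra_token: str) -> str:
    # 1. Fully validate the Entra ID token: signature, issuer, audience,
    #    and expiry are all checked by jwt.decode.
    signing_key = jwks_client.get_signing_key_from_jwt(entra_token)
    claims = jwt.decode(
        entra_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=API_AUDIENCE,
        issuer=ENTRA_ISSUER,
    )

    # 2. Compute application-specific authorization.
    permissions = load_permissions_from_db(claims["oid"])

    # 3. Issue a short-lived, application-signed JWT.
    now = int(time.time())
    return jwt.encode(
        {
            "iss": APP_ISSUER,
            "sub": claims["oid"],
            "aud": "internal-services",
            "iat": now,
            "exp": now + 10 * 60,  # 10 minutes
            "permissions": permissions,
        },
        APP_SIGNING_KEY,
        algorithm="RS256",
    )
```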


Why This Works Well

1. Azure AD handles authentication; your app handles authorization

This separation keeps responsibilities clean:

  • Azure AD → Identity proof
  • Your API → Business authorization

You avoid pushing business logic into your identity provider, where it doesn’t belong.


2. You can issue dynamic, computed claims

Your API can:

  • Query databases
  • Apply complex logic
  • Evaluate tenant state
  • Calculate effective permissions

Azure AD cannot do this—and shouldn’t.


3. Downstream services stay simple

Instead of every service needing to understand:

  • Azure tenants
  • Scopes vs roles
  • Group semantics

They simply trust:

  • Your issuer
  • Your claim contract

This dramatically simplifies internal service authorization.


4. Identity provider portability

If you later introduce:

  • Entra B2C
  • A second tenant
  • External identity providers

Your internal JWT remains the stable contract.
Only the validation layer changes.


When This Is Overkill

This pattern is not always the right choice.

Avoid it if:

  • App roles and groups already express your needs
  • Authorization rules are static
  • You don’t have downstream services
  • You want minimal operational overhead

A second token adds complexity—don’t add it unless it earns its keep.


Security Rules You Must Follow

If you issue your own JWT, there are no shortcuts.

1. Fully validate the Azure token

Always validate:

  • Signature
  • Issuer
  • Audience
  • Expiration
  • Required scopes or roles

If you skip any of these, the pattern collapses.


2. Keep your token short-lived

Best practice:

  • 5–15 minute lifetime
  • No refresh tokens (unless explicitly designed)

Azure AD should remain responsible for session longevity.


3. Protect and rotate signing keys

  • Use a dedicated signing key
  • Store it securely (Key Vault)
  • Rotate it regularly
  • Publish JWKS if multiple services validate your token

Your token is only as trustworthy as your key management.


4. Be disciplined with claims

Only include what downstream services actually need.

Avoid:

  • Personally identifiable information
  • Large payloads
  • Debug or “just in case” claims

JWTs are not data containers.


A Practical Mental Model

A simple way to reason about this pattern:

Azure AD authenticates people.
Your application authorizes behavior.

If that statement matches your system’s reality, this approach is not only valid—it’s often the cleanest solution.


Final Thoughts

Issuing your own JWT after validating an Azure AD token is not a workaround or an anti-pattern. It’s a deliberate architectural choice used in:

  • Regulated environments
  • Multi-tenant SaaS platforms
  • Government and municipal systems
  • Complex internal platforms

Like all powerful patterns, it should be applied intentionally, with strong security discipline and a clear boundary of responsibility.

Used correctly, it restores simplicity where identity systems alone fall short.

Architecture Diagram (Conceptual)

Actors and trust boundaries:

  1. Client (Web / SPA / Mobile)
    • Initiates sign-in using Microsoft Entra ID
    • Receives an Azure AD access token
    • Never handles application secrets or signing keys
  2. Microsoft Entra ID (Azure AD)
    • Authenticates the user
    • Issues a standards-based OAuth 2.0 / OpenID Connect token
    • Acts as the external identity authority
  3. Authorization API (Your Backend)
    • Validates the Azure AD token (issuer, audience, signature, expiry, scopes)
    • Applies application-specific authorization logic
    • Queries internal data sources (database, feature flags, tenant configuration)
    • Issues a short-lived, application-signed JWT
  4. Downstream APIs / Services
    • Trust only the application issuer
    • Validate the application JWT using published signing keys (JWKS)
    • Enforce authorization using domain-specific claims

Token flow:

  • Azure AD token → proves identity
  • Application-issued JWT → encodes authorization

This design creates a clean boundary where:

  • Identity remains centralized and externally managed
  • Authorization becomes explicit, testable, and owned by the application
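
For completeness, here is a minimal sketch of how a downstream service validates the application-issued JWT with PyJWT. The JWKS URL, issuer, audience, and permission claim are illustrative placeholders matching the exchange sketch earlier.

```python
# Minimal sketch: a downstream service trusts only the application issuer and
# validates its JWT against the published JWKS. All values are placeholders.
import jwt  # PyJWT

APP_JWKS_URL = "https://auth.my-app.example/.well-known/jwks.json"
jwks_client = jwt.PyJWKClient(APP_JWKS_URL)

def authorize(token: str, required_permission: str) -> dict:
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience="internal-services",
        issuer="https://auth.my-app.example",
    )
    if required_permission not in claims.get("permissions", []):
        raise PermissionError(f"missing permission: {required_permission}")
    return claims
```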