Most .NET Developers Allocate Memory They Don’t Need To — Span<T> Fixes That

If you work with .NET long enough, you eventually discover that performance issues rarely come from complex algorithms.

They come from small allocations happening millions of times.

And many of those allocations come from code that looks perfectly harmless.


The Hidden Problem: Unnecessary Heap Allocations

Consider common operations like:

  • .Substring()
  • .Split()
  • .ToArray()

These methods feel lightweight, but each one creates new objects on the heap.

That means:

  • More memory usage
  • More work for the garbage collector
  • More latency under load

In an API handling thousands of requests — or inside a tight parsing loop — these tiny costs accumulate quickly.
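
For example, a loop like this (the data is hypothetical) allocates a fresh array plus one string per field on every iteration:

string[] lines = { "2025-06-15,order-1,19", "2025-06-16,order-2,5" };
int total = 0;

foreach (var line in lines)
{
    var parts = line.Split(',');    // allocates a string[] plus one string per field
    total += int.Parse(parts[2]);   // even though only one field is needed
}

Each pass through the loop creates four heap objects just to read a single number.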


Enter Span<T>

Span<T> solves this by letting you work with existing memory instead of allocating new memory.

Think of it as:

A lightweight window into data that already exists.

No copying.
No allocations.
No extra GC pressure.


A Simple Example

Imagine you have a date string:

string date = "2025-06-15";

Most developers extract the year like this:

var year = date.Substring(0, 4);

This creates a brand-new string "2025" on the heap.

Now compare that with:

ReadOnlySpan<char> year = date.AsSpan(0, 4);

Same logical result — but zero allocation.

You’re simply pointing to a slice of the original string.
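
Better still, many BCL parsing overloads accept ReadOnlySpan<char> directly on modern .NET, so you can go from slice to value without ever materializing a string:

int year = int.Parse(date.AsSpan(0, 4));   // parses the slice in place, zero allocations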


The Core Mental Model

A Span<T> does not own memory.

It only references memory that already exists.

Think of it like:

Original data   ──────────────────────────────
                     [  window  ]
                        Span<T>

You move the window around instead of copying the data.


Where Span<T> Really Shines

Once you understand the concept, you’ll start seeing opportunities everywhere:

Parsing workloads

  • CSV or log file parsing without generating thousands of temporary strings.

HTTP processing

  • Parse headers without copying byte arrays.

Binary protocols

  • Slice buffers directly instead of creating intermediate arrays.

String processing

  • Replace Split() calls that create multiple arrays and strings.
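
For example, here is a minimal sketch of replacing Split(',') with span slicing (the CSV line is hypothetical):

ReadOnlySpan<char> line = "2025-06-15,order-42,19.99".AsSpan();

while (!line.IsEmpty)
{
    int comma = line.IndexOf(',');
    ReadOnlySpan<char> field = comma >= 0 ? line.Slice(0, comma) : line;

    // Process the field in place (compare, parse, copy); no substring is created.

    line = comma >= 0 ? line.Slice(comma + 1) : ReadOnlySpan<char>.Empty;
}

The original string is walked field by field, and nothing new lands on the heap.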

Real-World Impact

In production parsing-heavy services, teams commonly see:

  • 40–60% fewer allocations
  • Noticeably reduced GC pauses
  • Higher throughput under load

Less copying means more CPU time spent doing real work.


The Three Rules You Need to Remember

1️⃣ Span<T> lives on the stack

Span<T> is a ref struct: you cannot store it in a class field, box it, or keep it alive across await or yield boundaries.

2️⃣ Use ReadOnlySpan<T> for read-only data

Most string scenarios fall into this category.

3️⃣ Use Memory<T> when persistence is required

If you need to store or pass the reference beyond stack scope, use Memory<T>.
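
A minimal sketch of that pattern (this Parser class is hypothetical):

class Parser
{
    private readonly ReadOnlyMemory<char> _buffer;   // heap-storable handle, unlike Span<T>

    public Parser(string text) => _buffer = text.AsMemory();

    public void Process()
    {
        ReadOnlySpan<char> span = _buffer.Span;   // materialize the span only on the stack
        // ... slice and parse here ...
    }
}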


How to Adopt It Without a Big Rewrite

You don’t need to refactor your entire codebase.

Start small:

  1. Profile your application
  2. Identify hot paths
  3. Look for repeated Substring, Split, or ToArray calls
  4. Replace them with Span slicing
  5. Measure again

Performance improvements here are often immediate and measurable.


Final Thought

Most .NET performance problems aren’t about writing clever code.

They’re about avoiding unnecessary work.

Span<T> gives you a simple, safe way to reduce allocations and let your application scale more efficiently — without changing how your logic works.

Once you start using it in hot paths, it becomes difficult to go back.

Anthropic Just Changed the AI Pricing Game with Claude Sonnet 4.6

Anthropic has officially rolled out Claude Sonnet 4.6, its latest mid-tier model — and it’s not just an incremental upgrade. It’s a strategic shift.

In a surprising move, Sonnet 4.6 now matches or even outperforms the flagship Opus 4.6 across multiple benchmarks — at one-fifth the price and with a massive 1 million token context window.

This is not normal mid-tier behavior.


🔍 Performance Breakdown

💻 Coding (SWE-Bench Verified)

  • Sonnet 4.6: 79.6%
  • Opus 4.6: 80.8%
  • Cost: Sonnet runs at ~20% of Opus pricing

That’s near-flagship coding performance for dramatically lower cost — a serious signal for engineering teams running large volumes of inference.


📊 Financial & Office Task Benchmarks

For the first time, a mid-tier Claude model:

  • Outscored Opus 4.6 in agentic financial analysis
  • Beat Opus 4.6 in office-task evaluations

This is significant because “agentic” tasks require planning, tool use, multi-step reasoning, and domain understanding — not just raw language generation.


🧑‍💻 Claude Code Preference Testing

Early testers preferred:

  • Sonnet 4.6 over its predecessor 70% of the time
  • Sonnet 4.6 over Opus 4.5 at a 59% rate

That suggests practical usability gains — not just benchmark inflation.


🖥 Computer Use Is Accelerating Fast

Sonnet’s OSWorld score jumped from under 15% in late 2024 to 72.5%.

That’s not a small improvement. That’s an inflection point.

The implication?
Desktop automation and real-world AI agents are moving from experimental to operational viability.


🧠 Why This Matters

Anthropic appears to be executing a trickle-down strategy at warp speed:

  1. Launch a flagship (Opus 4.6).
  2. Rapidly push near-flagship capability into a lower-priced tier.
  3. Compete directly in the high-volume “agentic layer” of the AI market.

With aggressive Chinese frontier models undercutting pricing across the industry, cost-performance ratio is becoming the real battlefield.

Sonnet 4.6 looks like a direct response.


🚀 Strategic Implications

For teams building:

  • Developer copilots
  • Financial analysis tools
  • Automation agents
  • SaaS back-office systems
  • Multi-step AI workflows

The calculus changes.

If you can get ~98% of flagship capability at 20% of the cost, the default choice shifts.

This isn’t just about benchmarks.
It’s about the economics of deploying AI at scale.


Final Take

Claude Sonnet 4.6 may be the clearest signal yet that:

  • Mid-tier models are becoming the real production workhorses.
  • Price-performance efficiency is overtaking raw capability.
  • The “volume layer” of AI agents is about to scale rapidly.

Anthropic isn’t just improving models.

It’s compressing the performance gap — fast.

And that changes everything.

https://www.anthropic.com/news/claude-sonnet-4-6

How Metadata-Driven SharePoint Libraries Enable Future SaaS Automation

Most teams use SharePoint as a file storage system. Folders get created, documents get uploaded, and over time the structure becomes messy. Search becomes harder, reporting becomes manual, and automation becomes nearly impossible.

The turning point comes when you stop thinking in folders and start thinking in metadata.

A metadata-driven SharePoint library doesn’t just store files — it stores structured information about your business operations. That structure is what enables automation and future SaaS capabilities.

Here’s how.


Folders Organize Storage. Metadata Organizes Meaning.

Folders answer:

Where is the file stored?

Metadata answers:

What is this file, who owns it, and how is it used?

For example, instead of:

Projects
 └── ClientA
      └── Contract.pdf

you get:

Document       Project ID   Client    Type       Status
Contract.pdf   2026-0001    ClientA   Contract   Signed

Now SharePoint understands the document, not just its location.


Why Metadata Matters for Automation

Automation tools don’t understand folder names. They understand data.

Example automations enabled by metadata:

Automatic Document Routing

If:

Document Type = Invoice

Then:

  • Move to Finance workflow
  • Trigger billing automation
  • Notify accounting

No folder scanning required.
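
As a rough illustration (the routing table and dispatch delegate are hypothetical, not a SharePoint API), the rule is a dictionary lookup on a metadata value rather than a folder scan:

static void Route(string documentType, Action<string> dispatch)
{
    var routes = new Dictionary<string, string[]>
    {
        ["Invoice"] = new[] { "FinanceWorkflow", "BillingAutomation", "NotifyAccounting" },
    };

    if (routes.TryGetValue(documentType, out var actions))
        foreach (var action in actions)
            dispatch(action);   // hand each step to the automation engine
}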


Contract Expiration Alerts

If:

Expiration Date = 2026-03-31

Then:

  • Notify the team 30 days in advance
  • Start renewal workflow automatically

Folders alone cannot do this.
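
In code terms, the check is simple date arithmetic once the expiration date lives in a metadata field (ShouldAlert is a hypothetical helper, not a SharePoint API):

// True inside the 30-day window leading up to the expiration date.
static bool ShouldAlert(DateOnly expiration, DateOnly today) =>
    today >= expiration.AddDays(-30) && today <= expiration;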


Cross-Project Reporting

With metadata:

  • Show all Active projects with High risk
  • Show all invoices pending payment
  • Show all contracts expiring this quarter

Without metadata, reporting requires manual effort.


Metadata Enables SaaS Product Thinking

This is where SharePoint work starts looking like SaaS architecture.

Your future SaaS product will need:

  • Projects
  • Documents
  • Contracts
  • Billing
  • Compliance tracking
  • Deliverables
  • Work logs

Each of these is metadata-driven.

In other words:

SharePoint metadata model = future product data model

Your document structure becomes a prototype for your SaaS logic.
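
As a minimal C# sketch of that claim (the enums, record, and query are illustrative, not a SharePoint API), the library's columns translate directly into a product data model:

public enum DocumentType { Contract, Invoice, Report }
public enum DocumentStatus { Draft, Signed, Expired }

public record ProjectDocument(
    string FileName,            // Document
    string ProjectId,           // Project ID
    string Client,              // Client
    DocumentType Type,          // Type
    DocumentStatus Status,      // Status
    DateOnly? ExpirationDate);  // Expiration Date

public static class Reports
{
    // Cross-project reporting becomes a query instead of a folder hunt.
    public static IEnumerable<ProjectDocument> ExpiringWithin(
        IEnumerable<ProjectDocument> docs, DateOnly today, int months) =>
        docs.Where(d => d.Type == DocumentType.Contract
                     && d.ExpirationDate is { } exp
                     && exp <= today.AddMonths(months));
}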


Document Sets: Project Containers

Using Document Sets adds structure:

Project
 ├── Contracts
 ├── Finance
 ├── Delivery
 └── Admin

Project metadata lives at the container level, while documents inherit project context but keep their own lifecycle metadata.

This creates a natural separation:

Level       Owns
Project     Client, status, risk, dates
Document    Type, owner, version, expiration

This mirrors SaaS project systems.


Automation Comes Later — Structure Comes First

A common mistake is trying to automate before structure exists.

Correct sequence:

  1. Standardize folder structure
  2. Define metadata
  3. Separate project vs document data
  4. Organize views
  5. Start automation
  6. Build dashboards
  7. Integrate systems
  8. Productize workflows

Automation works only when data is structured.


Long-Term Benefits

A metadata-driven library enables:

  • Faster search
  • Clean reporting
  • Automated workflows
  • Compliance tracking
  • Financial oversight
  • Project dashboards
  • SaaS-ready data models

And most importantly:

Less manual effort as operations scale.


Final Takeaway

The moment your document system understands business context, not just file paths, automation becomes possible.

Metadata turns SharePoint from file storage into an operational platform.

And once operations are structured, productization becomes achievable.

Building a Scalable SharePoint Project Workspace — Lessons from Today’s Setup

Today I finalized a major restructuring of my SharePoint project workspace, moving from an improvised document layout to a scalable, metadata-driven structure suitable for consulting, subcontracting, and future SaaS delivery work.

The goal was simple: build a project system that will still work five years from now without constant redesign.

Here’s what happened and what I learned.


Starting Point: Folder Chaos vs Structure

Like many teams, documents were growing organically:

  • Contracts in one place
  • HR documents somewhere else
  • Weekly reports in another folder
  • Financial and timesheet data mixed with operations

This works for small teams, but quickly breaks once projects multiply.

So I standardized the structure.


Standardized Project Folder Model

Each project now follows the same lifecycle structure:

01 — Contract & Governance

Everything that legally establishes and governs the project.

Examples:

  • Prime contracts
  • Subcontracts
  • Amendments
  • NDAs
  • Compliance documents

02 — Planning & Design

Pre-execution project preparation.

Examples:

  • Proposals
  • Staffing plans
  • Architecture/design documents
  • Project plans

03 — Execution & Delivery

Core delivery and operational work.

Examples:

  • Technical work
  • Weekly reports
  • Deliverables
  • Work logs

04 — Financials

Billing and financial tracking.

Examples:

  • Invoices
  • Timesheets
  • Banking records
  • Expenses
  • Tax documentation

05 — Admin & Closeout

Administrative and HR matters.

Examples:

  • Training certificates
  • Onboarding docs
  • Compliance forms
  • Remote work agreements
  • Closeout documentation

The Big Lesson: Metadata Beats Folders

The real breakthrough today wasn’t just folder structure.

It was realizing:

Folders organize storage. Metadata organizes understanding.

By using SharePoint metadata:

  • Project-level data lives on the Document Set
  • Document-level data stays on each document
  • Views show combined data cleanly
  • Documents remain individually searchable
  • Automation becomes possible later

So now:

  • Project metadata appears at project level
  • Document metadata remains editable per document
  • Views can filter, group, and report without moving files

Folders give structure; metadata gives intelligence.


Key Fix That Unblocked Everything

At one point, Document Set configuration kept failing.

The solution:

  • Delete and recreate the document library cleanly.
  • Re-add content types and metadata correctly.
  • Configure Document Sets before heavy customization.

Sometimes resetting is faster than debugging corruption.


Templates and Proposals Standardization

I also organized:

Templates Library

Contains reusable assets:

  • Capability statement
  • Invoice templates
  • NDA/MSA templates
  • Proposal templates
  • Standard project structure guide

Proposals Library

Organized by lifecycle stage:

  • Active
  • Submitted
  • Won
  • Lost

Metadata will later allow reporting without relying on folders alone.


Why This Matters Long-Term

This structure now supports:

  • Consulting projects
  • Government subcontracting
  • Multi-client work
  • Future SaaS delivery operations
  • Automation workflows
  • Reporting dashboards

Most importantly, it removes daily friction.


Final Takeaway

The biggest realization:

Good document structure isn’t about today’s convenience — it’s about future scalability.

A clean SharePoint structure saves time, reduces confusion, and supports automation later.

And today, the foundation is finally in place.

Pentagon Nears ‘Supply Chain Risk’ Designation for Anthropic in AI Use Clash

The U.S. Department of Defense is reportedly close to formally cutting business ties with Anthropic, the AI company behind the Claude language model, and may designate it as a “supply chain risk” — a severe classification usually reserved for foreign adversaries — amid a deepening dispute over how AI can be used by the U.S. military.

What’s Happening

According to Axios, senior Pentagon officials say Defense Secretary Pete Hegseth is nearing a decision to label Anthropic a supply chain risk, a move that would effectively force all U.S. defense contractors to sever ties with the company if they wish to continue working with the military.

This escalation stems from a standoff over usage restrictions that Anthropic has placed on Claude. While the Pentagon wants the flexibility to employ AI for “all lawful purposes,” including in classified military operations and battlefield decision-making, Anthropic has resisted broad use authorizations that could see its technology tied to mass surveillance of Americans or autonomous weapon systems.

Why It Matters

A supply chain risk designation is more than symbolic. It would legally require companies that do business with the Defense Department to certify they are not using Anthropic’s technology — meaning the Pentagon’s vast pool of contractors could be forced to drop Claude from their systems. That outcome could reverberate far beyond military procurement: Anthropic has said Claude is in use at eight of the ten largest U.S. companies.

Importantly, Claude remains the only AI model currently cleared for use on some of the Pentagon’s classified networks, where it has been integrated as part of broader systems via contractors such as Palantir. The model was also reportedly used in a classified U.S. military operation earlier this year, though details remain limited and have recently been disputed in public statements.

Anthropic’s Stance

Anthropic has publicly emphasized its commitment to ethical guardrails — opposing uses of AI for mass civilian surveillance or for developing weapons that operate without human oversight. The company has indicated a willingness to negotiate on terms, but only where it can maintain safeguards aligned with its responsible-use principles.

Despite the friction, negotiations between the company and the Pentagon are reported to be ongoing, even as defense officials press for broader permissions.

Broader Implications

This dispute crystallizes a broader tension at the intersection of national security and AI ethics: military agencies seek expansive access to powerful AI tools in pursuit of operational advantage, while leading AI developers insist on guardrails to mitigate risks related to civil liberties, autonomous weapons, and unchecked surveillance.

Experts have long warned that the integration of AI into warfare and intelligence systems carries profound strategic, ethical, and legal consequences — spanning everything from command decision-making to civilian harm prevention. This standoff may mark a watershed moment in who ultimately shapes the rules governing AI’s role in national defense: tech companies, defense institutions, or lawmakers and regulators yet to act.

What Comes Next

At present the Pentagon has not publicly confirmed a final decision, and discussions continue behind closed doors. However, if a supply chain risk designation is finalized, it could dramatically reshape the landscape for AI companies and defense partnerships — with ripple effects across industry and government alike.

https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro