How Metadata-Driven SharePoint Libraries Enable Future SaaS Automation

Most teams use SharePoint as a file storage system. Folders get created, documents get uploaded, and over time the structure turns messy. Search gets harder, reporting stays manual, and automation becomes nearly impossible.

The turning point comes when you stop thinking in folders and start thinking in metadata.

A metadata-driven SharePoint library doesn’t just store files — it stores structured information about your business operations. That structure is what enables automation and future SaaS capabilities.

Here’s how.


Folders Organize Storage. Metadata Organizes Meaning.

Folders answer:

Where is the file stored?

Metadata answers:

What is this file, who owns it, and how is it used?

For example, instead of:

Projects
 └── ClientA
      └── Contract.pdf

you get:

Document     | Project ID | Client  | Type     | Status
Contract.pdf | 2026-0001  | ClientA | Contract | Signed

Now SharePoint understands the document, not just its location.


Why Metadata Matters for Automation

Automation tools don’t understand folder names. They understand data.

Example automations enabled by metadata:

Automatic Document Routing

If:

Document Type = Invoice

Then:

  • Move to Finance workflow
  • Trigger billing automation
  • Notify accounting

No folder scanning required.
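
The routing rule above amounts to a simple dispatch on a metadata field. As an illustrative sketch (plain Python rather than an actual Power Automate flow, with invented field and action names):

```python
# Illustrative only: the field and action names are hypothetical, not real
# SharePoint internal names. In practice this logic lives in Power Automate.

def route_document(metadata: dict) -> list[str]:
    """Return the actions to run for a document, based on its metadata."""
    actions = []
    if metadata.get("DocumentType") == "Invoice":
        actions += ["move_to_finance_workflow", "trigger_billing", "notify_accounting"]
    return actions

doc = {"Name": "INV-0042.pdf", "DocumentType": "Invoice", "Client": "ClientA"}
print(route_document(doc))
```

The point is that the rule reads a field, not a folder path.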


Contract Expiration Alerts

If:

Expiration Date = 2026-03-31

Then:

  • Notify the team 30 days in advance
  • Start renewal workflow automatically

Folders alone cannot do this.
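
The alert logic is just date arithmetic on a metadata field. A minimal sketch (Python for illustration; in practice this would be a scheduled flow reading the Expiration Date column):

```python
from datetime import date, timedelta

def notify_date(expiration: date, lead_days: int = 30) -> date:
    """Date on which the renewal reminder should fire."""
    return expiration - timedelta(days=lead_days)

print(notify_date(date(2026, 3, 31)))  # 2026-03-01
```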


Cross-Project Reporting

With metadata:

Show all Active projects with High risk
Show all invoices pending payment
Show all contracts expiring this quarter

Without metadata, reporting requires manual effort.
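
Each of these reports is just a filter over metadata fields. A sketch with invented sample records:

```python
# Hypothetical sample data; the field names mirror the metadata columns
# discussed in this article, not actual SharePoint internal names.
projects = [
    {"ProjectID": "2026-0001", "Status": "Active", "Risk": "High"},
    {"ProjectID": "2026-0002", "Status": "Closed", "Risk": "Low"},
    {"ProjectID": "2026-0003", "Status": "Active", "Risk": "Low"},
]

# "Show all Active projects with High risk" becomes a one-line filter.
active_high_risk = [p for p in projects
                    if p["Status"] == "Active" and p["Risk"] == "High"]
print([p["ProjectID"] for p in active_high_risk])  # ['2026-0001']
```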


Metadata Enables SaaS Product Thinking

This is where SharePoint work starts looking like SaaS architecture.

Your future SaaS product will need:

  • Projects
  • Documents
  • Contracts
  • Billing
  • Compliance tracking
  • Deliverables
  • Work logs

Each of these is metadata-driven.

In other words:

SharePoint metadata model = future product data model

Your document structure becomes a prototype for your SaaS logic.
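
To make the "metadata model = data model" point concrete, the project/document split translates directly into application types. A sketch with hypothetical field names chosen to mirror the metadata discussed here:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Project:            # mirrors Document Set (container) metadata
    project_id: str
    client: str
    status: str
    risk: str

@dataclass
class Document:           # mirrors per-document metadata
    name: str
    doc_type: str
    owner: str
    expiration: Optional[date]
    project: Project      # documents carry their project context
```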


Document Sets: Project Containers

Using Document Sets adds structure:

Project
 ├── Contracts
 ├── Finance
 ├── Delivery
 └── Admin

Project metadata lives at the container level, while documents inherit project context but keep their own lifecycle metadata.

This creates a natural separation:

Level    | Owns
Project  | Client, status, risk, dates
Document | Type, owner, version, expiration

This mirrors SaaS project systems.


Automation Comes Later — Structure Comes First

A common mistake is trying to automate before structure exists.

Correct sequence:

  1. Standardize folder structure
  2. Define metadata
  3. Separate project vs document data
  4. Organize views
  5. Start automation
  6. Build dashboards
  7. Integrate systems
  8. Productize workflows

Automation works only when data is structured.


Long-Term Benefits

A metadata-driven library enables:

  • Faster search
  • Clean reporting
  • Automated workflows
  • Compliance tracking
  • Financial oversight
  • Project dashboards
  • SaaS-ready data models

And most importantly:

Less manual effort as operations scale.


Final Takeaway

The moment your document system understands business context, not just file paths, automation becomes possible.

Metadata turns SharePoint from file storage into an operational platform.

And once operations are structured, productization becomes achievable.

Building a Scalable SharePoint Project Workspace — Lessons from Today’s Setup

Today I finalized a major restructuring of my SharePoint project workspace, moving from an improvised document layout to a scalable, metadata-driven structure suitable for consulting, subcontracting, and future SaaS delivery work.

The goal was simple: build a project system that will still work five years from now without constant redesign.

Here’s what happened and what I learned.


Starting Point: Folder Chaos vs Structure

As in many teams, documents had been growing organically:

  • Contracts in one place
  • HR documents somewhere else
  • Weekly reports in another folder
  • Financial and timesheet data mixed with operations

This works for small teams, but quickly breaks once projects multiply.

So I standardized the structure.


Standardized Project Folder Model

Each project now follows the same lifecycle structure:

01 — Contract & Governance

Everything that legally establishes and governs the project.

Examples:

  • Prime contracts
  • Subcontracts
  • Amendments
  • NDAs
  • Compliance documents

02 — Planning & Design

Pre-execution project preparation.

Examples:

  • Proposals
  • Staffing plans
  • Architecture/design documents
  • Project plans

03 — Execution & Delivery

Core delivery and operational work.

Examples:

  • Technical work
  • Weekly reports
  • Deliverables
  • Work logs

04 — Financials

Billing and financial tracking.

Examples:

  • Invoices
  • Timesheets
  • Banking records
  • Expenses
  • Tax documentation

05 — Admin & Closeout

Administrative and HR matters.

Examples:

  • Training certificates
  • Onboarding docs
  • Compliance forms
  • Remote work agreements
  • Closeout documentation

The Big Lesson: Metadata Beats Folders

The real breakthrough today wasn’t just folder structure.

It was realizing:

Folders organize storage. Metadata organizes understanding.

By using SharePoint metadata:

  • Project-level data lives on the Document Set
  • Document-level data stays on each document
  • Views show combined data cleanly
  • Documents remain individually searchable
  • Automation becomes possible later

So now:

  • Project metadata appears at project level
  • Document metadata remains editable per document
  • Views can filter, group, and report without moving files

Folders give structure; metadata gives intelligence.


Key Fix That Unblocked Everything

At one point, Document Set configuration kept failing.

The solution:

  • Delete and recreate the document library cleanly.
  • Re-add content types and metadata correctly.
  • Configure Document Sets before heavy customization.

Sometimes resetting is faster than debugging corruption.


Templates and Proposals Standardization

I also organized:

Templates Library

Contains reusable assets:

  • Capability statement
  • Invoice templates
  • NDA/MSA templates
  • Proposal templates
  • Standard project structure guide

Proposals Library

Organized by lifecycle stage:

  • Active
  • Submitted
  • Won
  • Lost

Metadata will later allow reporting without relying on folders alone.


Why This Matters Long-Term

This structure now supports:

  • Consulting projects
  • Government subcontracting
  • Multi-client work
  • Future SaaS delivery operations
  • Automation workflows
  • Reporting dashboards

Most importantly, it removes daily friction.


Final Takeaway

The biggest realization:

Good document structure isn’t about today’s convenience — it’s about future scalability.

A clean SharePoint structure saves time, reduces confusion, and supports automation later.

And today, the foundation is finally in place.

Pentagon Nears ‘Supply Chain Risk’ Designation for Anthropic in AI Use Clash

The U.S. Department of Defense is reportedly close to formally cutting business ties with Anthropic, the AI company behind the Claude language model, and may designate it as a “supply chain risk” — a severe classification usually reserved for foreign adversaries — amid a deepening dispute over how AI can be used by the U.S. military.

What’s Happening

According to Axios, senior Pentagon officials say Defense Secretary Pete Hegseth is nearing a decision to label Anthropic a supply chain risk, a move that would effectively force all U.S. defense contractors to sever ties with the company if they wish to continue working with the military.

This escalation stems from a standoff over usage restrictions that Anthropic has placed on Claude. While the Pentagon wants the flexibility to employ AI for “all lawful purposes,” including in classified military operations and battlefield decision-making, Anthropic has resisted broad use authorizations that could see its technology tied to mass surveillance of Americans or autonomous weapon systems.

Why It Matters

A supply chain risk designation is more than symbolic. It would legally require companies that do business with the Defense Department to certify they are not using Anthropic’s technology — meaning much of the Pentagon’s contractor base could drop Claude from their systems. That outcome could reverberate far beyond military procurement: Anthropic has said Claude is in use at eight of the ten largest U.S. companies.

Importantly, Claude remains the only AI model currently cleared for use on some of the Pentagon’s classified networks, where it has been integrated as part of broader systems via contractors such as Palantir. The model was also reportedly used in a classified U.S. military operation earlier this year, though details remain limited and have recently been disputed in public statements.

Anthropic’s Stance

Anthropic has publicly emphasized its commitment to ethical guardrails — opposing uses of AI for mass civilian surveillance or for developing weapons that operate without human oversight. The company has indicated a willingness to negotiate on terms, but only where it can maintain safeguards aligned with its responsible-use principles.

Despite the friction, negotiations between the company and the Pentagon are reported to be ongoing, even as defense officials press for broader permissions.

Broader Implications

This dispute crystallizes a broader tension at the intersection of national security and AI ethics: military agencies seek expansive access to powerful AI tools in pursuit of operational advantage, while leading AI developers insist on guardrails to mitigate risks related to civil liberties, autonomous weapons, and unchecked surveillance.

Experts have long warned that the integration of AI into warfare and intelligence systems carries profound strategic, ethical, and legal consequences — spanning everything from command decision-making to civilian harm prevention. This standoff may mark a watershed moment in who ultimately shapes the rules governing AI’s role in national defense: tech companies, defense institutions, or lawmakers and regulators yet to act.

What Comes Next

At present the Pentagon has not publicly confirmed a final decision, and discussions continue behind closed doors. However, if a supply chain risk designation is finalized, it could dramatically reshape the landscape for AI companies and defense partnerships — with ripple effects across industry and government alike.

https://www.axios.com/2026/02/15/claude-pentagon-anthropic-contract-maduro

Designing a Metadata-Driven SharePoint Project Library with Automated Document Set Structure


1. Why Start with Metadata (Not Folders)

Most SharePoint project libraries fail for one reason:

They start with folders instead of metadata.

Folders solve navigation.
Metadata solves governance.

For a scalable project portfolio library, the structure must be driven by:

  • Project ID
  • Project Year
  • Portfolio Category
  • Project Type
  • Project Status
  • Client
  • Risk Level
  • Stage Gate

This allows:

  • View filtering (Active, Closed, Government, By Year)
  • Reporting
  • Automation
  • Lifecycle management
  • Future Power BI integration

Folders alone cannot do that.


2. Core Architecture

Site

Tanolis Projects

Library

Projects

Content Type

Document Set (for each project container)


3. Core Metadata Design

Create these site columns:

Column             | Type
Project ID         | Single line of text
Project Year       | Choice
Portfolio Category | Choice
Project Type       | Choice
Project Status     | Choice
Client             | Single line of text
Risk Level         | Choice
Stage Gate         | Choice
Start Date         | Date
Target End Date    | Date

Attach these columns to the Document Set content type, not individual files.

This ensures:

  • Each project container carries structured metadata
  • Views operate on project-level attributes
  • Documents inherit metadata if configured

4. Why Document Set (Not Just Folder)

A Document Set is:

  • A special content type
  • A container with metadata
  • A logical project object

It behaves like a folder but supports:

  • Custom metadata
  • Welcome page
  • Shared columns
  • Governance workflows

A normal folder cannot do that.


5. Required SharePoint Configuration

Enable Document Sets

Site Settings → Site Collection Features
Activate Document Sets

Then:

Library Settings → Advanced Settings
✔ Allow management of content types

Add Document Set to the library.


6. The Problem: Auto-Creating Subfolders Inside Each New Project

Goal:

When I create:

2026-003_ClientZ_NewApp

Power Automate should automatically create:

01-Contract Governance
02-Planning Design
03-Execution Delivery
04-Financials
05-Admin Closeout

Inside that Document Set.

No duplicate containers.
No root-level folder creation.
No accidental “Shared Documents” folder nesting.


7. The Correct Trigger (Important)

Use:

When a file is created (properties only)

Why?

Because:

  • A Document Set inside a document library is treated as a file/folder object.
  • “When an item is created” is for SharePoint lists.
  • Using the wrong trigger causes null content type errors and template failures.

Lock this in.


8. The Final Working Flow Design

Step 1 – Trigger

When a file is created (properties only)
Library: Projects


Step 2 – Condition

Check:

Content Type
is equal to
Document Set

No expressions.
No startsWith.
No ContentTypeId hacks.

Keep it clean.


Step 3 – Initialize Folder Array

Initialize variable:

Type: Array

[
"01-Contract Governance",
"02-Planning Design",
"03-Execution Delivery",
"04-Financials",
"05-Admin Closeout"
]

Step 4 – Apply to Each

Loop through the folder array.

Inside the loop:

Create new folder.


9. The Critical Folder Path Expression

This is where most implementations fail.

Correct expression:

concat(triggerOutputs()?['body/{Path}'], triggerOutputs()?['body/{FilenameWithExtension}'], '/', item())

Why this works:

From trigger output:

{Path} = Shared Documents/
{FilenameWithExtension} = foo set

Final path becomes:

Shared Documents/foo set/01-Contract Governance

Which means:

Folders are created inside the Document Set container — not in the root.
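
The same concatenation can be checked outside Power Automate. Rendering the expression in Python with the trigger values quoted above:

```python
# The three pieces the expression concatenates, using the article's example values.
path = "Shared Documents/"             # triggerOutputs()?['body/{Path}']
name = "foo set"                       # triggerOutputs()?['body/{FilenameWithExtension}']
subfolder = "01-Contract Governance"   # item() from the folder array

target = path + name + "/" + subfolder
print(target)  # Shared Documents/foo set/01-Contract Governance
```

Note that {Path} already ends with a slash, which is why the expression inserts only one separator, between the Document Set name and the subfolder.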


10. Common Mistakes (And Why They Fail)

❌ Using only Filename

concat(triggerOutputs()?['body/{FilenameWithExtension}'],'/',item())

Result:
Creates duplicate root folder or wrong nesting.


❌ Using ContentTypeId startsWith

Leads to:

startsWith expects string but got null

Because the expression evaluates against the wrong trigger context, the value it reads is null.


❌ Using “When an item is created”

Causes:

  • Null content type
  • Condition failures
  • Inconsistent behavior

11. Handling Race Conditions

Sometimes folder creation hangs because:

The Document Set is not fully provisioned when the flow runs.

Solution:

Add a small Delay (5 seconds minimum on consumption plan).

Or use retry policy.
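
The delay-or-retry advice generalizes to any provisioning step that can race. A generic sketch of the pattern (not Power Automate's built-in retry policy, just the shape of it):

```python
import time

def with_retry(action, attempts: int = 3, delay_seconds: float = 5.0):
    """Run a provisioning step, waiting between attempts until it succeeds."""
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except Exception:
            if attempt == attempts:
                raise          # give up after the last attempt
            time.sleep(delay_seconds)
```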


12. Optional Enhancements

You can extend this design to:

  • Auto-assign permissions based on Portfolio Category
  • Notify PM when project is created
  • Trigger approval workflow at Stage Gate change
  • Auto-create Teams channel per project
  • Sync metadata to Dataverse

13. Architectural Pattern Summary

What you built is:

✔ Metadata-first design
✔ Document Set container model
✔ Automated structural provisioning
✔ Governance-ready foundation

This scales to:

  • 10 projects
  • 100 projects
  • 1,000 projects

Without structural drift.


14. Final Design Philosophy

Folders are operational.
Metadata is strategic.

Document Sets give you both.

Power Automate enforces consistency.

Designing a Secure Home Lab with VLAN Segmentation and TLS Subdomain Separation Using Traefik

Modern home labs and small hosting environments often grow organically. New services are added over time, ports multiply, and TLS certificates become difficult to manage. Eventually, what started as a simple setup becomes hard to secure and maintain.

Over the last few years, I gradually evolved my lab environment into a structure that separates workloads, automates TLS, and simplifies routing using Traefik as a reverse proxy.

This article summarizes the architecture and lessons learned from running multiple Traefik instances across segmented networks with automated TLS certificates.


The Initial Problem

Typical home lab setups look like this:

service1 → host:9000
service2 → host:9443
service3 → host:8123
service4 → host:8080

Problems quickly appear:

  • Too many ports exposed
  • TLS certificates become manual work
  • Hard to secure services individually
  • Debugging routing becomes messy
  • Services mix across trust levels

As services increase, maintenance becomes harder.


Design Goals

The environment was redesigned around a few simple goals:

  1. One secure entry point for services
  2. Automatic TLS certificate management
  3. Network segmentation between service types
  4. Clean domain naming
  5. Failure isolation between environments
  6. Minimal ongoing maintenance

High-Level Architecture

The resulting architecture separates services using VLANs and domain zones.

Internet
    ↓
DNS
    ↓
Traefik Reverse Proxy Instances
    ↓
Segmented Service Networks

Workloads are separated by purpose and risk profile.

Example:

Secure VLAN → internal services
IoT VLAN → containers and test services
Application VLAN → development workloads

Each network segment runs its own services and routing.


Role of Traefik

Traefik serves as the gateway for services by handling:

  • HTTPS certificates (Let’s Encrypt)
  • Reverse proxy routing
  • Automatic service discovery
  • HTTPS redirects
  • Security headers

Instead of accessing services by ports, everything is exposed through HTTPS:

https://sonarqube.example.com
https://portainer.example.com
https://grafana.example.com

Traefik routes traffic internally to the correct service.


TLS Strategy: Subdomain Separation

Instead of creating individual certificates per service, services are grouped by domain zones.

Example zones:

*.dk.example.com
*.pbi.example.com
*.ad.example.com

Each zone receives a wildcard certificate.

Example services:

sonarqube.dk.example.com
traefik.dk.example.com
grafana.dk.example.com

Benefits:

  • One certificate covers many services
  • Renewal complexity drops
  • Let’s Encrypt rate limits avoided
  • Services can be added freely
  • Routing stays simple

Each Traefik instance manages certificates for its own domain zone.
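
Deciding which wildcard zone covers a given hostname is a small, checkable rule: a *.zone certificate covers exactly one label to the left of the zone. A hypothetical helper (zone names taken from the examples above):

```python
from typing import Optional

ZONES = ["dk.example.com", "pbi.example.com", "ad.example.com"]

def matching_zone(host: str) -> Optional[str]:
    """Return the wildcard zone whose *.zone certificate covers this host."""
    for zone in ZONES:
        prefix = host.removesuffix("." + zone)
        # A wildcard covers exactly one label: the suffix must have matched
        # and the remaining prefix must be a single label (no dots).
        if prefix != host and "." not in prefix:
            return zone
    return None

print(matching_zone("sonarqube.dk.example.com"))  # dk.example.com
print(matching_zone("a.b.dk.example.com"))        # None: wildcard covers one label only
```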


Why Multiple Traefik Instances?

Rather than centralizing everything, multiple Traefik gateways are used.

Example:

  • Unraid services handled by one proxy
  • Docker services handled by another
  • Podman workloads handled separately

Benefits:

  • Failure isolation
  • Independent upgrades
  • Easier experimentation
  • Reduced blast radius during misconfiguration

If one gateway fails, others continue operating.


Operational Benefits Observed

After stabilizing this architecture:

Certificate renewal became automatic

No manual certificate maintenance required.

Service expansion became simple

New services only need routing rules.

Network isolation improved safety

IoT workloads cannot easily reach secure services.

Troubleshooting became easier

Common issues reduce to:

404 → router mismatch
502 → backend unreachable
TLS error → DNS or certificate issue

Lessons Learned

Several practical lessons emerged.

Use container names instead of IPs

Docker DNS is more stable than static IP references.

Keep services on shared networks

Ensures routing remains predictable.

Remove unnecessary exposed ports

Let Traefik handle public access.

Back up certificate storage

Losing certificate storage can trigger renewal rate limits.

Avoid unnecessary upgrades

Infrastructure components should change slowly.


Is This Overkill for a Home Lab?

Not necessarily.

As soon as you host multiple services, segmentation and automated TLS reduce maintenance effort and improve reliability.

Even small environments benefit from:

  • consistent routing
  • secure entry points
  • simplified service management

Final Thoughts

Traefik combined with VLAN segmentation and TLS subdomain zoning has provided a stable and low-maintenance solution for managing multiple services.

The environment now:

  • renews certificates automatically
  • isolates workloads
  • simplifies routing
  • scales easily
  • requires minimal manual intervention

What started as experimentation evolved into a practical architecture pattern that now runs quietly in the background.

And in infrastructure, quiet is success.