Rebuilding My Personal Blog on Azure: Lessons From the Trenches

In January, I decided to rebuild my personal WordPress blog on Azure.

Not as a demo.
Not as a “hello world.”
But as a long-running, low-cost, production-grade personal workload—something I could realistically live with for years.

What followed was a reminder of why real cloud engineering is never about just clicking “Create”.


Why I Didn’t Use App Service (Again)

I initially explored managed options like Azure App Service and Azure Container Apps. On paper, they’re perfect. In practice, for a personal blog:

  • Storage behavior mattered more than storage size
  • Hidden costs surfaced through SMB operations and snapshots
  • PHP versioning and runtime controls were more rigid than expected

Nothing was “wrong,” but it wasn’t predictable enough for a small, fixed-budget site.

So I stepped back and asked a simpler question:

What is the most boring, controllable architecture that will still work five years from now?


The Architecture I Settled On

I landed on a single Ubuntu VM, intentionally small:

  • Azure VM: B1ms (1 vCPU, 2 GB RAM)
  • OS: Ubuntu 22.04 LTS
  • Stack: Docker + Nginx + WordPress (PHP-FPM) + MariaDB
  • Disk: 30 GB managed disk
  • Access: SSH with key-based auth
  • Networking: Basic NSG, public IP

No autoscaling. No magic. No illusions.

Just something I fully understand.


Azure Policy: A Reality Check

The first thing that blocked me wasn’t Linux or Docker — it was Azure Policy.

Every resource creation failed until I added mandatory tags:

  • env
  • costCenter
  • owner

Not just on the VM — but on:

  • Network interfaces
  • Public IPs
  • NSGs
  • Disks
  • VNets

Annoying? Slightly.
Realistic? Absolutely.

This is what production Azure environments actually look like.


The “Small” Issues That Matter

A few things that sound trivial — until you hit them at 2 AM:

  • SSH keys rejected due to incorrect file permissions on Windows/WSL
  • PHP upload limits silently capped at 2 MB (upload_max_filesize and post_max_size)
  • Nginx, PHP-FPM, and Docker each enforcing their own limits (client_max_body_size on the Nginx side)
  • A 129 MB WordPress backup restore failing until every layer agreed
  • Choosing between Premium and Standard disks for a low-IO workload

None of these are headline features.
All of them determine whether the site actually works.


Cost Reality

My target budget: under $150/month total, including:

  • A static site (tanolis.us)
  • This WordPress blog

The VM-based approach keeps costs:

  • Predictable
  • Transparent
  • Easy to tune (disk tier, VM size, shutdown schedules)

No surprises. No runaway meters.


Why This Experience Matters

This wasn’t about WordPress.

It was about:

  • Designing for longevity, not demos
  • Understanding cost behavior, not just pricing
  • Respecting platform guardrails instead of fighting them
  • Choosing simplicity over abstraction when it makes sense

The cloud is easy when everything works.
Engineering starts when it doesn’t.


What’s Next

For now, the site is up.
Backups are restored.
Costs are under control.

Next steps — when I feel like it:

  • TLS with Let’s Encrypt
  • Snapshot or off-VM backups
  • Minor hardening

But nothing urgent. And that’s the point.

Sometimes the best architecture is the one that lets you stop thinking about it.

IDesign Method: An Overview

Software projects often start small and cute, but can quickly become unmanageable as requirements change. This transformation is usually due to the lack of an appropriate architecture, or an architecture that is not designed for future change.

The IDesign Method: An Overview
The IDesign method, developed by Juval Löwy, provides a systematic approach to creating a software architecture that will stand the test of time. Let’s explore its key principles.

Avoid functional decomposition
The first principle of IDesign is to avoid functional decomposition – the practice of translating requirements directly into services. For example, if you’re building an e-commerce platform, don’t create separate services for “user management”, “product catalogue” and “order processing” just because those are your main requirements. Instead, IDesign advocates a more thoughtful approach based on volatility.

Volatility-based decomposition
IDesign focuses on identifying areas of volatility – aspects of the system that are likely to change over time. For example, in our e-commerce example, payment methods might be an area of volatility, as you may need to add new payment options in the future.

The three-step process:

1) Identify 3-5 core use cases
What your system does at its most basic level. For our e-commerce platform, these might be:

  • Browse and search for products
  • Manage the shopping cart
  • Complete a purchase

2) Identify areas of volatility
Pinpoint the aspects of the system that are likely to change. In our e-commerce example:

  • Payment methods
  • Shipping options
  • Product recommendation algorithms

3) Define services
IDesign defines five types of services:

  • Client: Handles user interaction (e.g. web interface)
  • Manager: Orchestrates business use cases
  • Engine: Executes specific business logic
  • Resource Access: Handles data storage and retrieval
  • Utility: Provides cross-cutting functionality

For our e-commerce platform example, we might have (a minimal sketch follows this list):

  • A ShoppingManager – to orchestrate the shopping process
  • A PaymentEngine – to handle different payment methods
  • A ProductCatalogAccess – to manage product data
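To make the layering concrete, here is a minimal Python sketch of those three services. The class names follow the article’s example; the products, prices, and payment methods are invented purely for illustration, not taken from a real system.

```python
# Illustrative only: class names follow the article's e-commerce example;
# products, prices, and payment methods are made up.

class ProductCatalogAccess:
    """Resource Access: hides where and how product data is stored."""
    def get_price(self, product_id: str) -> float:
        return {"book-1": 12.50, "mug-7": 8.00}.get(product_id, 0.0)


class PaymentEngine:
    """Engine: encapsulates the volatile part, i.e. how payments are taken."""
    def charge(self, amount: float, method: str) -> bool:
        # New payment methods get added here without touching the Manager.
        return method in {"card", "paypal"} and amount > 0


class ShoppingManager:
    """Manager: orchestrates the 'complete a purchase' use case."""
    def __init__(self) -> None:
        self.catalog = ProductCatalogAccess()
        self.payments = PaymentEngine()

    def checkout(self, product_id: str, method: str) -> bool:
        amount = self.catalog.get_price(product_id)
        return self.payments.charge(amount, method)


if __name__ == "__main__":
    print(ShoppingManager().checkout("book-1", "card"))  # True
```

Note how the volatility (payment methods) is fenced inside the PaymentEngine, so adding a new payment option never ripples into the Manager or the data access code.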

Design Principles and Patterns

Great software is not written.
It’s designed.

Most systems don’t fail because of bad developers.
They fail because of bad design decisions made early — and scaled blindly.

This is the foundation every serious engineer and tech leader must master 👇

Design Principles & Patterns

🔹 SOLID

SRP – One class, one reason to change
OCP – Extend, don’t modify
LSP – Substitutions must be safe
ISP – Small, focused interfaces
DIP – Depend on abstractions, not concretes

SOLID isn’t theory. It’s how you avoid rewriting your system every 6 months.
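To make DIP concrete, here is a small Python sketch; the names (MessageSender, EmailSender, OrderService) are invented for this example. The business logic depends only on an abstraction, so swapping email for SMS never forces a change to OrderService.

```python
from abc import ABC, abstractmethod

# Illustrative DIP sketch; class names are made up for this example.

class MessageSender(ABC):
    @abstractmethod
    def send(self, to: str, body: str) -> None: ...


class EmailSender(MessageSender):
    def send(self, to: str, body: str) -> None:
        print(f"email to {to}: {body}")


class OrderService:
    # Depends on the abstraction, never on a concrete sender (DIP).
    def __init__(self, sender: MessageSender) -> None:
        self.sender = sender

    def confirm(self, customer: str) -> None:
        self.sender.send(customer, "Your order is confirmed.")


OrderService(EmailSender()).confirm("sam@example.com")
```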

🔹 GoF Design Patterns

1) Creational → Control how objects are created (Factory, Builder, Singleton)
2) Structural → Control how objects are composed (Adapter, Facade, Proxy)
3) Behavioral → Control how objects communicate (Strategy, Observer, Command)

Patterns are not “fancy code.”
They are battle-tested solutions to recurring problems.
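For instance, here is a minimal Strategy sketch in Python, with plain functions as the interchangeable algorithms; the shipping rates and names are made up for illustration.

```python
from typing import Callable

# Strategy pattern, lightweight form: the algorithm is passed in,
# so quote() never changes when a new rate is added. Rates are invented.

def standard_shipping(weight_kg: float) -> float:
    return 5.0 + 1.2 * weight_kg


def express_shipping(weight_kg: float) -> float:
    return 12.0 + 2.5 * weight_kg


def quote(weight_kg: float, strategy: Callable[[float], float]) -> float:
    return strategy(weight_kg)


print(quote(2.0, standard_shipping))  # 7.4
print(quote(2.0, express_shipping))   # 17.0
```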

🔹 DRY – Don’t Repeat Yourself
Duplication is a silent killer.
It multiplies bugs and slows teams.

🔹 KISS – Keep It Simple
Complexity is not intelligence.
Simplicity is.

🔹 MVC + Repository + Unit of Work
Clean separation of concerns.
Predictable codebases.
Scalable teams.
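A toy Python sketch of Repository + Unit of Work over an in-memory store (all names are invented; a real implementation would wrap an actual database session and transaction):

```python
# Illustrative Repository + Unit of Work; swap the dict for a real data store.

class UserRepository:
    def __init__(self) -> None:
        self._users: dict[int, str] = {}

    def add(self, user_id: int, name: str) -> None:
        self._users[user_id] = name

    def get(self, user_id: int) -> str | None:
        return self._users.get(user_id)


class UnitOfWork:
    """Groups repository changes into a single commit/rollback boundary."""
    def __init__(self) -> None:
        self.users = UserRepository()

    def __enter__(self) -> "UnitOfWork":
        return self

    def __exit__(self, exc_type, exc, tb) -> None:
        # A real implementation commits the transaction here,
        # or rolls back if an exception escaped the block.
        print("rollback" if exc_type else "commit")


with UnitOfWork() as uow:
    uow.users.add(1, "Ada")
    print(uow.users.get(1))  # Ada
```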

Reality check:

Frameworks change.
Languages change.
Trends change.

Principles don’t.

If you want to build:

Systems that scale
Teams that move fast
Products that survive years

Master the fundamentals.

Everything else is noise.

AI and Developer Productivity

The world of software development is in constant flux, but few forces have driven as profound a shift as artificial intelligence. What once seemed like science fiction is now an everyday reality, with AI tools seamlessly integrating into developer workflows, promising not just incremental gains but a fundamental redefinition of productivity. In 2025, developers are finding themselves empowered by intelligent assistants, automated guardians of code quality, and even AI “colleagues” capable of tackling complex engineering tasks.

I have written a bunch of articles in CODE Magazine about AI. All of them have focused on learning AI, such as image generation, creating a local chat bot, and more. But what if you’re not an AI developer? Maybe you’re a ReactJS developer writing front-end code all day long. Or maybe you write REST APIs using Python all day long.

Let’s be honest: AI is exciting, but many of us are still working day in, day out delivering business functionality, the things your employer needs today. Should you ignore AI? Far from it. This article explores how AI is boosting developer productivity across the software development lifecycle, complete with practical examples to illustrate its transformative power.

The AI-Powered Developer: A New Paradigm

As of today, at its core, AI for developers isn’t about replacing human creativity; it’s about augmenting it. By offloading repetitive, time-consuming, and error-prone tasks, AI frees developers to focus on higher-level problem-solving, architectural design, and innovative solutions. This shift fosters a more engaging and less frustrating development experience.

Let’s dive into the key areas where AI is making a tangible difference.

There are many ways I see AI helping you as a developer. This is by no means an exhaustive list; if you have ideas, do share.

AI as Your Pair Programmer

The most visible and widely adopted application of AI in development is in code generation and intelligent completion. There are many competing tools you can use: GitHub Copilot, Tabnine, and Amazon CodeWhisperer all act as highly intelligent pair programmers, anticipating your next move and suggesting relevant code snippets, entire functions, or even boilerplate structures. In fact, there are VSCode extensions that let you plug into any AI model to get specific help for your scenario. You can even use Ollama to run things locally if you’re in an air-gapped secure environment. Of course, the capabilities of cloud-based models are far ahead of what Ollama on your local machine can do, but Ollama with a local model still gives you superpowers you didn’t know you had.
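As a rough sketch of that local route, here is a call to Ollama’s documented REST endpoint using only the Python standard library. It assumes Ollama is running on its default port (11434) and that you have already pulled a model; the model name below is just an example, so substitute whatever you actually pulled.

```python
import json
import urllib.request

# Ask a locally running Ollama instance for a completion.
# Assumes `ollama serve` is running on the default port and the model
# named below has been pulled; adjust to your setup.
payload = json.dumps({
    "model": "llama3",  # example model name
    "prompt": "Write a Python function that reverses a string.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```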

There are many benefits of incorporating AI as your pair programmer.

The first is, of course, speed. Using AI drastically reduces the time spent writing repetitive code or searching for syntax. How often do you find yourself struggling to recall the right syntax for a particular thing you’re trying to do? Or writing repetitive code that you know you can write, but would rather have a helper write for you, and maybe even write better than you? Like pulling the username out of a JWT. I know how to do this; I just wish I didn’t have to do it in every project I land in. Yes, you decode the token, which means converting Base64 to JSON; oh wait, first you separate the three parts of the token and validate the signature, blah blah! Dear AI: just do this for me, please?
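For the curious, here is a sketch of that chore in Python using only the standard library. It decodes the payload only; real code must verify the signature first (for example with a library such as PyJWT) before trusting any claim, and the claim that holds the username (“sub” here) varies by identity provider.

```python
import base64
import json

def username_from_jwt(token: str, claim: str = "sub") -> str | None:
    # A JWT is three base64url-encoded parts: header.payload.signature
    header_b64, payload_b64, signature_b64 = token.split(".")
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore padding
    payload = json.loads(base64.urlsafe_b64decode(padded))
    return payload.get(claim)  # never trust this without verifying the signature

# Toy token built on the spot so the example runs without a real login:
claims = base64.urlsafe_b64encode(json.dumps({"sub": "rick"}).encode()).decode().rstrip("=")
print(username_from_jwt(f"eyJhbGciOiJIUzI1NiJ9.{claims}.sig"))  # rick
```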

The other obvious advantage is accuracy. Using AI minimizes typos and common syntax errors, leading to fewer debugging cycles. When I was a programmer in my teens, I took great pride in my accuracy and typing capabilities. I could type at over 140 WPM without errors. Alas, as time has passed, my fingers, too, have grown older. I do make mistakes now. Unfortunate mistakes whose errors take forever to track down, all because of a stupid typo. If Microsoft Word can correct my spelling mistakes, wouldn’t it be nice if AI could fix the errors my IDE cannot catch?

And finally, like any good pair programmer, I learn from my AI buddy. See, I’ve never been a fan of pair programming. I know, I know, you can put those daggers back in their sheaths. But I learn differently from others. When I’m deep into programming, I don’t want another person breaking my thought process or constantly asking questions. Pair programming may be great for the new person on the team, but as an experienced programmer (sorry for putting myself on a pedestal), I found it was a lot of giveth and not enough taketh. I want to pair program with someone better than me, and those people can be hard to find.

…this article is continued online.

Building the Bridge Between AI and the Real World

AI is like working with an inept wizard. (Yes, I have a lot of metaphors for this.) When you ask the wizard a question, he responds with the intellect and rapidity of someone who has access to the knowledge of the cosmos. He’s read everything, but he’s a bit dotty. He’s lived his entire life in his lair, consuming his tomes. Despite his vast knowledge, he has no idea what happened in the world yesterday. He doesn’t know what’s in your inbox. Moreover, he knows nothing about your contact list, your company’s proprietary data, or the fact that your cousin’s birthday party got bumped to next Friday. The wizard is a genius. He’s also an idiot savant.

Therein lies the paradox. We have designed amazing tools, but they require a lot of handholding. Context has to be spoon-fed. You can paste an entire mountain of reference documents and a virtual novel of a prompt. That amount of work can often eliminate any benefit you get from using an LLM at all. When it does work, it’s a victory but it feels like you’ve wrestled the LLM into submission instead of working with it.

Users have been cobbling together ad hoc solutions for this problem. Plug-ins. Vector databases. Retrieval systems. These Band-Aids are clever, but fragile. They don’t cooperate with each other. They break when you switch providers. It’s less “responsible plumbing” and more “duct tape and prayer.”

This is where Model Context Protocol (MCP) comes in. It establishes a foundational infrastructure rather than creating one more marketplace for custom connectors. MCP sets up standardized rails for integrating context. This shared framework enables models to request context, retrieve it from authorized sources, and securely use it. It replaces the current kluge of vendor-specific solutions with a unified protocol designed to connect AI to real-world systems and data.
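To give a feel for what those standardized rails look like from the developer’s side, here is a minimal server sketch assuming the official MCP Python SDK and its FastMCP helper. The import path and decorator follow the SDK’s published quickstart, but verify them against the version you install; the calendar tool itself is a stub invented for this example.

```python
# Minimal MCP server sketch (assumes the official SDK: pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("personal-context")

@mcp.tool()
def upcoming_events(days: int = 7) -> list[str]:
    """Return the user's calendar events for the next `days` days (stubbed)."""
    return ["Next Friday: cousin's birthday party (rescheduled)"]

if __name__ == "__main__":
    mcp.run()  # any MCP-capable client can now discover and call the tool
```

The point is the shape, not the stub: the model never scrapes your calendar directly; it asks an MCP server, which controls what it is allowed to see.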

As AI transitions from an experimental novelty to practical infrastructure, this utility becomes crucial. For the wizard to be effective, he needs to be able to do more than solve one-off code hiccups or create content for your blog. For true usefulness at scale in a professional environment, you need a standardized way to integrate context. That context has to respect permissions, meet security standards, and be up to date.

The Problem of Context in AI

Models tend to make things up and they do it with confidence. Sometimes they cite fictional academic papers. Sometimes they invent dates, statistics, or even people. These hallucinations are a huge problem, of course, but they’re a symptom of a much larger issue: a lack of context.

The Context Window Problem

Developers have been building workarounds by providing relevant data as needed: pasting in documents, supplying chunks of a database, and formulating absurdly robust prompts. These fixes help, but every LLM has what we call a context window. The window determines how many tokens a model can remember at any given time. Some of the bigger LLMs have windows that can accommodate hundreds of thousands of tokens, but users still quickly find ways to hit that wall.

Bigger context windows should be the answer, right? But there’s our Catch-22: the more data you provide within that window, the more fragile the entire setup becomes. If there’s not enough context, the model may very well just make stuff up. If you provide too much, the model bogs down or becomes too pricey to run.

The Patchwork Fixes

The AI community wasn’t content to wait for one of the big players to provide a solution. Everyone rushed to be first-to-market with an assortment of potential fixes.

Custom plug-ins let the models access external tools and databases, extending their abilities beyond the frozen training data. You can see the issue here. Plug-ins designed for one platform won’t work with another. Your workspace becomes siloed and fragmented, forcing you to rework your integrations if you try to switch AI providers.

Retrieval-Augmented Generation (RAG) converts documents into embeddings stored in a vector database so that you can pull in only the most relevant chunks during a query. This method is pretty effective, but it requires significant technical skill and ongoing tuning based on your organization’s specific requirements.
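Here is a toy sketch of that flow in Python: a bag-of-words “embedding” and a plain list stand in for a real embedding model and vector database, purely to show the retrieve-then-prepend shape of RAG.

```python
import math
from collections import Counter

# Toy stand-ins: real RAG uses an embedding model and a vector database.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
]
index = [(doc, embed(doc)) for doc in docs]

query = "How long do refunds take?"
top_doc = max(index, key=lambda item: cosine(embed(query), item[1]))[0]

# The retrieved chunk is injected ahead of the user's question.
prompt = f"Context:\n{top_doc}\n\nQuestion: {query}"
print(prompt)
```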

… this article is continued online.