Debugging an IIS-Hosted ASP.NET Core API on Azure VM: A Real-World Walkthrough

Overview

This article walks through a real-world debugging scenario involving an ASP.NET Core API deployed on an Azure VM behind IIS. The issue initially appeared to be a connectivity or deployment problem but ultimately turned out to be related to IIS hostname bindings and SNI (Server Name Indication).

The goal was to validate API availability directly on the VM and isolate issues between Azure routing, IIS configuration, and application behavior.


Step 1: Initial Problem

The API endpoint:

https://foo-vm.example.com/service/ProcessRequest

was returning:

404 Not Found

This raised several possible concerns:

  • Deployment failure
  • IIS misconfiguration
  • Routing issues
  • Network or SSL problems

Step 2: SSL / Certificate Validation

While testing direct HTTPS calls, the following error appeared:

Could not establish trust relationship for the SSL/TLS secure channel

Action Taken

  • Exported the server certificate (.cer)
  • Installed it on the local machine (Trusted Root / Intermediate store)
  • Alternatively, used curl with -k to bypass SSL validation:
    curl -k https://foo-vm.example.com

Outcome

  • SSL issues were eliminated as a blocker
  • Able to reach the server over HTTPS

Step 3: Direct API Testing with curl

Multiple endpoints were tested:

curl -k https://foo-vm.example.com/
curl -k https://foo-vm.example.com/service/health
curl -k https://foo-vm.example.com/api/health

Result

All returned:

404 Not Found (Microsoft-IIS/10.0)

Insight

  • Requests were reaching IIS
  • But no matching route/application was found

Step 4: Validate Application Deployment

A simple health check endpoint was introduced:

/service/health

Expected response:

Healthy

However, even this endpoint returned 404 when accessed via the VM hostname.


Step 5: IIS Investigation

Upon inspecting IIS:

  • The API was not hosted under Default Web Site
  • Instead, it was hosted under a separate site:
Foo.ApiSvc

Key Finding

Requests to:

https://foo-vm.example.com

were hitting:

Default Web Site ❌

—not the actual API site.


Step 6: Binding and SNI Discovery (Root Cause)

IIS bindings for the API site showed:

Host Name: foo.example.com  
Port: 443
SNI: Enabled

Critical Insight

With SNI enabled, IIS selects the site for each request based on the hostname the client presents. The Host header, not the IP address, determines where a request lands.

So:

https://foo-vm.example.com  → Default Web Site → 404  
https://foo.example.com → Foo.ApiSvc → API

Step 7: Validate Using Host Header Override

Since DNS for foo.example.com was not directly usable from the VM, the Host header was manually injected:

curl -k https://foo-vm.example.com/service/health \
  -H "Host: foo.example.com"

Result

Healthy

Conclusion

  • API was functioning correctly
  • IIS routing was working as designed
  • Issue was purely hostname-based routing
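When the VM's IP address is known, curl's --resolve option is an alternative to injecting the Host header by hand: it pins the hostname to a given address, so both the TLS SNI value and the Host header carry the real site name. A sketch, where the IP below is a placeholder for the VM's address:

```shell
# Resolve foo.example.com to the VM's IP (203.0.113.10 is a placeholder),
# so SNI and the Host header both match the Foo.ApiSvc binding.
curl -k --resolve foo.example.com:443:203.0.113.10 \
  https://foo.example.com/service/health
```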

Step 8: Azure Layer Insight

The /service/... route seen earlier was part of the Azure routing layer, not IIS.

Architecture:

Azure Front Door / Gateway
        ↓
foo.example.com
        ↓
Azure VM (IIS with SNI)
        ↓
ASP.NET Core API

Key Takeaway:

When bypassing Azure and hitting the VM directly, you must either:

  • Use the correct hostname, or
  • Override the Host header

Application Pool Configuration Update

The IIS application pool was updated from .NET CLR v4.0 to No Managed Code to align with ASP.NET Core hosting best practices.

ASP.NET Core applications run on the CoreCLR in a separate process and do not depend on the IIS-managed CLR. While the previous setting did not prevent the application from running, updating it improves clarity and avoids confusion in future maintenance.
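The change can also be scripted with IIS's appcmd tool. Setting managedRuntimeVersion to an empty string is the command-line equivalent of selecting No Managed Code in the UI; the app pool name below matches this article's example and is otherwise illustrative. Run it in an elevated prompt on the VM:

```bat
%windir%\system32\inetsrv\appcmd.exe set apppool /apppool.name:"Foo.ApiSvc" /managedRuntimeVersion:""
```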


Step 9: Final Resolution

✅ Correct endpoint:

https://foo.example.com/service/health

✅ Or direct VM access with Host override:

curl -k https://foo-vm.example.com/service/health \
  -H "Host: foo.example.com"

Key Learnings

1. IIS with SNI routes based on hostname, not IP

Requesting with the wrong hostname routes to the wrong site, and IIS returns 404.


2. Default Web Site is not always your application

Always verify IIS site bindings and application mapping.


3. Azure routing can mask backend behavior

The /service path was part of the Azure layer, not IIS configuration.


4. curl is a powerful debugging tool

  • -k skips TLS certificate validation
  • -v shows the full request/response exchange
  • -H injects or overrides request headers

5. ASP.NET Core hosting configuration

  • App pool should be set to No Managed Code
  • Runtime and hosting bundle were already functioning correctly

Final Summary

What initially appeared to be a deployment or API issue was ultimately traced to a hostname binding mismatch caused by IIS SNI configuration.

Once the correct hostname was used—or injected via the Host header—the API routed correctly and responded as expected.

Root cause: Incorrect Host header when bypassing Azure routing
Resolution: Use the correct hostname or override the Host header

Designing Safer Production Releases: A Practical Journey with Azure DevOps

Production systems don’t usually fail because of missing tools.
They fail because too much happens implicitly.

A merge triggers a deploy.
A fix goes live unintentionally.
Weeks later, no one is entirely sure what version is actually running.

This article documents a deliberate shift I made in how production releases are handled—moving from implicit deployment behavior to explicit, intentional releases using Git tags and infrastructure templates in Azure DevOps.

This wasn’t about adding complexity.
It was about removing ambiguity.


The Problem I Wanted to Solve

Before the change, the release model had familiar weaknesses:

  • Merges to main were tightly coupled to deployment
  • Production changes could happen without a conscious “release decision”
  • Version visibility in production was inconsistent
  • Pipelines mixed application logic and platform concerns

None of this caused daily failures—but it created latent risk.

The question I asked was simple:

How do I make production boring, predictable, and explainable?


The Guiding Principles

Instead of starting with tooling, I started with principles:

  1. Production changes must be intentional
  2. Releases must be immutable and auditable
  3. Application code and platform logic should not live together
  4. Developers should not need to understand deployment internals
  5. The system should scale from solo to enterprise without redesign

Everything else followed from these.


The Core Decision: Tag-Based Releases

The single most important change was this:

Production deployments are triggered only by Git tags.

Not by merges.
Not by branch updates.
Not by UI clicks.

A release now requires an explicit action:

git tag vX.Y.Z
git push origin vX.Y.Z

That’s the moment a human says: “This is production.”
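In Azure DevOps, that intent can be encoded directly in the pipeline trigger: branch pushes are excluded entirely, and only tags matching the release pattern start a run. A minimal sketch (the v* pattern is an assumption to match vX.Y.Z tags):

```yaml
# azure-pipelines.yml (excerpt): production runs start only from v* tags
trigger:
  branches:
    exclude:
      - '*'
  tags:
    include:
      - 'v*'
```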


Separating Responsibilities with Repositories

To support this model cleanly, responsibilities were split across two repositories:

Application Repository

  • Contains UI, APIs, and business logic
  • Has a single, thin pipeline entry file
  • Decides when to release (via tags)

Infrastructure Repository

  • Contains pipeline templates and deployment logic
  • Builds and deploys applications
  • Defines how releases happen

This separation ensures:

  • Platform evolution doesn’t pollute application repos
  • Multiple applications can share the same release model
  • Infrastructure changes are treated as infrastructure—not features
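Under this split, the application repo's entry file stays thin: it declares the infrastructure repo as a resource and extends a template from it. A sketch, where the repository name and template path are hypothetical:

```yaml
# Application repo: azure-pipelines.yml stays a thin entry point
trigger:
  tags:
    include:
      - 'v*'

resources:
  repositories:
    - repository: infra                  # alias used in the reference below
      type: git
      name: MyProject/infrastructure     # hypothetical infra repo

extends:
  template: pipelines/app-deploy.yml@infra   # hypothetical template path
```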

Pipelines as Infrastructure, Not Code

A key mindset shift was treating pipelines as platform infrastructure.

That meant:

  • Pipeline entry files are locked behind PRs
  • Changes are rare and intentional
  • Developers generally don’t touch them
  • Deployment logic lives outside the app repo

This immediately reduced accidental breakage and cognitive load.


Versioning: Moving from Build-Time to Runtime

Once releases were driven by tags, traditional assembly-based versioning stopped being useful—especially for static web applications.

Instead, version information is now injected at build time into a runtime artifact:

/version.json

Example:

{ "version": "v2.0.5" }

The application reads this file at runtime to display its version.

This approach:

  • Works cleanly with static hosting
  • Reflects exactly what was released
  • Is easy to extend with commit hashes or timestamps
  • Decouples versioning from build tooling
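One way to produce that artifact is a small build step that derives the version from the tag ref. A sketch, assuming the Azure DevOps convention that Build.SourceBranch holds refs/tags/v2.0.5 for a tag build (a sample value is inlined so the snippet runs stand-alone):

```shell
# Derive the version from the tag ref and write it to dist/version.json.
# BUILD_SOURCEBRANCH is what Azure DevOps exposes; a sample value is used here.
BUILD_SOURCEBRANCH="${BUILD_SOURCEBRANCH:-refs/tags/v2.0.5}"
VERSION="${BUILD_SOURCEBRANCH#refs/tags/}"   # refs/tags/v2.0.5 -> v2.0.5
mkdir -p dist
printf '{ "version": "%s" }\n' "$VERSION" > dist/version.json
cat dist/version.json
```

This is easy to extend later by adding a commit hash or build timestamp to the same JSON object.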

The Day-to-Day Experience

After the setup, daily work became simpler—not more complex.

  • Developers work in feature branches
  • Code is merged into main without fear
  • Nothing deploys automatically
  • Production changes require an explicit tag

Releases are boring.
And that’s exactly the goal.


Rollbacks and Auditability

Because releases are immutable:

  • Redeploying a version is trivial
  • Rollbacks are predictable
  • There’s always a clear answer to: “What code is running in production?”

This is especially valuable in regulated or client-facing environments.
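The mechanics of a rollback follow directly from the tag model: point a fresh release tag at a known-good commit and push it, and the same release path re-runs. A sketch in a throwaway repository (tag names are illustrative; in a real repo you would finish with git push origin v2.0.6):

```shell
# Demonstrate a rollback tag in a throwaway repo.
set -e
REPO=$(mktemp -d)
git -C "$REPO" init -q
git -C "$REPO" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "good release"       # the version to return to
GOOD=$(git -C "$REPO" rev-parse HEAD)
git -C "$REPO" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "bad release"        # the one to roll back
# A new, immutable release tag pointing at the known-good commit:
git -C "$REPO" tag v2.0.6 "$GOOD"
git -C "$REPO" rev-parse "v2.0.6^{commit}"
```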


Tradeoffs and Honest Costs

This approach isn’t free.

Costs:

  • Initial setup takes time
  • Azure DevOps YAML has sharp edges
  • Pipelines must exist before tags will trigger
  • Early experimentation may require tag resets

Benefits:

  • Zero accidental prod deploys
  • Clear ownership and accountability
  • Clean separation of concerns
  • Reusable platform foundation
  • Long-term operational confidence

For long-lived systems, the tradeoff is worth it.


When This Pattern Makes Sense

This model works best when:

  • Production stability matters
  • Systems are long-lived
  • Auditability or compliance is a concern
  • Teams want clarity over convenience

It’s less suitable for:

  • Hackathons
  • Throwaway prototypes
  • “Merge = deploy” cultures

The Leadership Lesson

The most important takeaway wasn’t technical.

Good systems make intent explicit.
Great systems remove ambiguity from critical outcomes.

Production safety doesn’t come from moving slower.
It comes from designing systems where important changes happen on purpose.


Final Thoughts

This wasn’t about Azure DevOps specifically.
The same principles apply anywhere.

If you can answer these questions clearly, you’re on the right path:

  • Who decided this went to production?
  • When did that decision happen?
  • What exactly was released?

If those answers are obvious, production becomes boring.

And boring production is a feature.

Git Branching Strategies

In essence, a Git branch is a movable pointer to a specific commit in the repository’s history. When you create a new branch, you’re creating a new line of development that diverges from the main line. This allows you to make changes without directly affecting the stable codebase.

Let’s look at how this works. I assume you have Git installed and a basic working knowledge of it.
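A quick demonstration in a scratch repository, showing a branch diverging from main (the repo location and branch name are just for illustration):

```shell
# Create a scratch repo, then branch off main and commit on the branch.
set -e
REPO=$(mktemp -d)
git -C "$REPO" init -q -b main
git -C "$REPO" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "initial commit on main"
git -C "$REPO" switch -q -c feature/login    # new pointer, diverging from main
git -C "$REPO" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "work in progress on the feature"
git -C "$REPO" branch --show-current
```

Commits made on feature/login move only that branch's pointer; main stays where it was until you merge.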


Adding Images in Azure DevOps Wiki

The wiki pages are created in their own Git repository, and the image files are added there as well. You can browse the repo from within DevOps at: https://dev.azure.com/MyOrganisation/MyProject/_git/MyProject.wiki

The markdown path for the image should begin with /.attachments/. If you specify a width and omit the height, make sure you use =500x rather than =500, e.g.

![MyImage.png](/.attachments/MyImage-98765432-abcd-1234-abcd-a1234567890b.png =500x)
