Azure App Service debugging and monitoring

It is a familiar story: “XXX works fine locally but does not work at all in production.”

I am going to assume that the environment variable ASPNETCORE_ENVIRONMENT is used to determine whether the application is running in Development or Production mode. Locally, it is set to “Development”.

Next, let us move to Azure App Service.

Within the Azure Portal, we will add ASPNETCORE_ENVIRONMENT to the Application settings, but instead of “Development”, we will use “Production” here. Navigate to the App Service for the application, then go to Configuration and add the setting.

Turn on App Service logs to the filesystem

Within the Azure Portal, navigate to App Service logs and turn on Web server logging. This creates the folder “D:\home\LogFiles\http\RawLogs” and starts writing the web server log files there.

There are a few benefits to writing the Serilog rolling files to the same location:

1. One single spot to access all the logs.

2. When Application Logging (Filesystem) is turned on, you can also view the Serilog logs in the Log Stream under the App Service Monitoring section. Awesome!

Test and view the log files

Assuming your App Service URL is https://YourWebAppName.azurewebsites.net, visit https://YourWebAppName.SCM.azurewebsites.net (the Kudu site) while logged into the Azure Portal. You should be able to locate the log files by going to Debug console/CMD and navigating to “D:\home\LogFiles\http\RawLogs”.

It is easy to develop and test locally but when you deploy the application to Azure App Service, a few extra steps are required so that logs are properly populated and accessible within the Azure App Service.

Resources

https://shawn-shi.medium.com/proper-use-of-serilog-for-log-stream-and-filesystem-on-azure-app-service-a69e17e54b7b

Best practices for securing a web API endpoint

Say I am a developer working on a mobile app for a bank. The app obtains an OAuth access token and accesses a web API hosted by the bank. The app has been released.

A regular user who has a valid account with the bank installs the mobile app. Through the mobile app, the user can view balances, transfer money, etc.

This user has development skills and notices that he can obtain the access token after he signs in to the bank with his username and password.

Because this user has a valid account with the bank, the access token is valid for calling the API endpoints. The user does trial and error in Visual Studio to figure out what requests need to be sent to get a valid response from the API. He can manually refresh the access token as many times as needed with the official mobile app, and eventually finds a way to make valid calls against the API from his own dev tool.

The question is: are there any mechanisms that can be used to prevent the user from calling the endpoints without going through the official mobile app? The web API can be marked with the [RequiredScope] attribute, for example, but if he was able to sign in, wouldn’t he have all the permissions that normal users have, such as transferring money?

I have searched the web on this topic, as it seems to be a common question, but have not found good references yet.

Read answers here.

Resources

https://learn.microsoft.com/en-us/aspnet/core/security/authorization/secure-data?view=aspnetcore-6.0

Ajax Request: Response to preflight request doesn’t pass access control check

Today, I started getting this error after making a few Ajax calls to a remote server:

Access to XMLHttpRequest at 'https://foo.com' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

This problem relates to CORS. Here is some explanation and a workaround:

https://stackoverflow.com/questions/35588699/response-to-preflight-request-doesnt-pass-access-control-check

Here is some more useful info:

https://www.edureka.co/community/82342/how-to-add-custom-http-header-to-ajax-request-with-javascript
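
To make the failure mode concrete, here is a minimal sketch of the kind of cross-origin call that triggers a preflight (the /api/data path and the X-Custom-Header name are made up for illustration). The key point is that the fix is server-side: the server at https://foo.com has to answer the OPTIONS preflight with the appropriate Access-Control-Allow-* headers.

// Hypothetical call from a page served at http://localhost:3000.
// A custom header (or a non-simple content type such as application/json)
// forces the browser to send an OPTIONS preflight request first.
fetch('https://foo.com/api/data', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'X-Custom-Header': 'some-value'
    },
    body: JSON.stringify({ id: 1 })
})
    .then((response) => response.json())
    .then((data) => console.log(data))
    .catch((error) => console.error('CORS or network failure:', error));

// The call only succeeds if the preflight response includes headers such as:
//   Access-Control-Allow-Origin: http://localhost:3000
//   Access-Control-Allow-Methods: POST
//   Access-Control-Allow-Headers: Content-Type, X-Custom-Header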

FIDO2 and WebAuthn

Authentication has been an essential part of applications for some time now, because applications need to know something about the user who’s using them. For the longest time, the solution has been usernames and passwords. Usernames and passwords are popular because they’re convenient to implement, but they aren’t secure. There are many issues with passwords.

First, there’s the problem of transmitting this password securely. If you send the password over the wire, a man-in-the-middle could sniff it. That pretty much necessitated SSL over such communication or the equivalent of creating a hash of the password that’s sent over the wire instead of the actual password. But even those techniques didn’t solve the problem of the server securing the password, or a secure hash of the password. Or, for that matter, keeping you safe from replay attacks. Increasingly complex versions of this protocol were created, to the point where you could, with some degree of confidence, say that you were safe from man-in-the-middle attacks or replay attacks.

Users created simple, easy-to-remember passwords, and brute-force techniques guessed those passwords. So we came up with complex requirements, such as requiring an uppercase letter, a lowercase letter, a special character, and a minimum length – and yet people still picked poor passwords. When they didn’t pick poor passwords that were easy to remember, they reused passwords across different systems. Or they used password managers to store their passwords, until the password manager itself got compromised.

But even then, you’re not safe from passwords being leaked. Worse, leaked passwords are not detected – you don’t know if your password has been leaked until the leak is discovered. And these leaks could occur on a poorly implemented service. This means, no matter what you do, you’re still insecure.

Don’t Despair

There are solutions. There are concepts like MFA or one-time passwords that can be used in addition to your usual password. This is what you’ve experienced when you enter a credential, but in addition, you have to enter a code sent to you via SMS or from an authenticator app on your phone.

MFA and one-time passwords are great. In fact, I’d go to the extent of saying that if a service you’re using relies only on a username and password, just assume it’s insecure and don’t use it for anything critical. Additionally, pair it with common-sense practices: own your domain name, use a separate email address from your everyday one for account recovery, and choose secret questions and answers that aren’t easy to guess and don’t make sense to anyone else.

As great as MFA and one-time passwords are, they’re still not a perfect picture. There are a few big issues with this approach.

First, they are cumbersome for the end user to manage. I work with this stuff on a daily basis, and I find it frustrating to manage hundreds of accounts and multiple authenticator apps, and I worry that if I ever broke my phone accidentally, I’d be transported to Neanderthal times immediately. I can’t imagine how a common, non-technology-friendly person deals with all this.

Second, MFA and one-time passwords are both cumbersome and expensive for the service provider. All those SMS messages and push notifications cost money. This creates a barrier to entry for someone trying to get a service off the ground. Then there’s the question of which authenticator app to use and whether that app can be trusted. Is SMS good enough?

Third, there’s the issue of phishing. As great as MFA is, someone can set up a service that looks identical to a legit service, and unless you have very keen eyes watching every step, you may fall for it. Unfortunately, even the best of us are tired and stressed at times, and that’s when you fall for this. In fact, the unscrupulous service that pretends to be a legit service could simply forward your requests to the legit service after authentication while stealing your session. You may think everything is hunky dory, but your session has effectively been stolen.

Finally, there is authentication fatigue. Hey, I just want to log in and use a system. Zero trust dictates that you assume a breach, so it’s common for services to over-authenticate. This creates authentication fatigue, and an already fatigued user can blindly approve an MFA request, especially if it’s cleverly disguised. It only takes one mistake for a hacker to get in the house, and then they can do plenty of damage, potentially remaining undetected for a long time.

What am I Trying to Solve?

I’m not trying to secure passwords or make a better MFA solution here. The fundamental problem I wish to solve here is how an application can securely trust a user’s identity, such that the identity is not cumbersome to manage, is secure, convenient, and…

… this article is continued online.

JavaScript Callback function

In JavaScript, we can pass a function to another function as an argument. By definition, a callback is a function that we pass into another function as an argument to be executed later.
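
For example, a minimal sketch (the greet() function and the messages are made up):

function greet(name, callback) {
    console.log(`Hello, ${name}`);
    callback(); // the passed-in function is executed later, inside greet()
}

greet('Alice', function () {
    console.log('This runs after greet() prints its greeting.');
});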

We are going to focus on asynchronous callbacks. An asynchronous callback is executed after the execution of the higher-order function that uses the callback.

Asynchronicity means that if JavaScript has to wait for an operation to complete, it will execute the rest of the code while waiting.
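
A quick sketch of that ordering (the messages are arbitrary):

console.log('first');

setTimeout(() => {
    console.log('third'); // the timer callback runs after the wait
}, 1000);

console.log('second'); // runs immediately, while the timer is still pending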

Here is an example of an asynchronous callback.

Suppose that you need to develop a script that downloads a picture from a remote server and processes it after the download completes:

function download(url) {
    // ...
}

function process(picture) {
    // ...
}

download(url);
process(picture);

However, downloading a picture from a remote server takes time depending on the network speed and the size of the picture.

The following download() function uses the setTimeout() function to simulate the network request:

function download(url) {
    setTimeout(() => {
        // script to download the picture here
        console.log(`Downloading ${url} ...`);
    }, 1000);
}

And this code emulates the process() function:

function process(picture) {
    console.log(`Processing ${picture}`);
}

When we execute the following code:

let url = 'https://www.foo.net/pic.jpg';

download(url);
process(url);

we will get the following output:

Processing https://www.foo.net/pic.jpg
Downloading https://www.foo.net/pic.jpg ...

This is not what we expected, because the process() function executes before the download() function has finished. The correct sequence should be:

  • Download the picture and wait for the download to complete.
  • Process the picture.

To resolve this issue, we can pass the process() function to the download() function and execute the process() function inside the download() function once the download completes, like this:

function download(url, callback) {
    setTimeout(() => {
        // script to download the picture here
        console.log(`Downloading ${url} ...`);
        
        // process the picture once it is completed
        callback(url);
    }, 1000);
}

function process(picture) {
    console.log(`Processing ${picture}`);
}

let url = 'https://www.foo.net/pic.jpg';
download(url, process);

Output:

Downloading https://www.foo.net/pic.jpg ...
Processing https://www.foo.net/pic.jpg

Now, it works as expected.

In this example, process() is a callback passed into an asynchronous function.

When we use a callback to continue code execution after an asynchronous operation, the callback is called an asynchronous callback.

To make the code more concise, we can define the process() function as an anonymous function:

function download(url, callback) {
    setTimeout(() => {
        // script to download the picture here
        console.log(`Downloading ${url} ...`);
        // process the picture once it is completed
        callback(url);

    }, 1000);
}

let url = 'https://www.foo.net/pic.jpg';
download(url, function(picture) {
    console.log(`Processing ${picture}`);
}); 

Handling errors

The download() function assumes that everything works fine and does not consider any exceptions. The following code introduces two callbacks, success and failure, to handle the success and failure cases respectively:

function download(url, success, failure) {
  setTimeout(() => {
    console.log(`Downloading the picture from ${url} ...`);
    !url ? failure(url) : success(url);
  }, 1000);
}

download(
  '',
  (url) => console.log(`Processing the picture ${url}`),
  (url) => console.log(`The '${url}' is not valid`)
);
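
Because the url argument passed in is an empty string (which is falsy), the failure callback runs. Output:

Downloading the picture from  ...
The '' is not valid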

Nesting callbacks and the Pyramid of Doom

How do we download three pictures and process them sequentially? A typical approach is to call the download() function inside the callback function, like this:

function download(url, callback) {
  setTimeout(() => {
    console.log(`Downloading ${url} ...`);
    callback(url);
  }, 1000);
}

const url1 = 'https://www.foo.net/pic1.jpg';
const url2 = 'https://www.foo.net/pic2.jpg';
const url3 = 'https://www.foo.net/pic3.jpg';

download(url1, function (url) {
  console.log(`Processing ${url}`);
  download(url2, function (url) {
    console.log(`Processing ${url}`);
    download(url3, function (url) {
      console.log(`Processing ${url}`);
    });
  });
});

Output:

Downloading https://www.foo.net/pic1.jpg ...
Processing https://www.foo.net/pic1.jpg
Downloading https://www.foo.net/pic2.jpg ...
Processing https://www.foo.net/pic2.jpg
Downloading https://www.foo.net/pic3.jpg ...
Processing https://www.foo.net/pic3.jpg

The script works perfectly fine.

However, this callback strategy does not scale well when the complexity grows significantly.

Nesting many asynchronous functions inside callbacks is known as the pyramid of doom, or callback hell:

asyncFunction(function(){
    asyncFunction(function(){
        asyncFunction(function(){
            asyncFunction(function(){
                asyncFunction(function(){
                    ....
                });
            });
        });
    });
});

To avoid the pyramid of doom, we use promises or async/await functions.
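
As a rough sketch of what that looks like, here is the same three-picture sequence rewritten with a Promise-returning download() and async/await (the downloadPictures() wrapper is just an illustrative name):

function download(url) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log(`Downloading ${url} ...`);
      resolve(url); // fulfill the promise once the download completes
    }, 1000);
  });
}

function process(picture) {
  console.log(`Processing ${picture}`);
}

async function downloadPictures() {
  const url1 = 'https://www.foo.net/pic1.jpg';
  const url2 = 'https://www.foo.net/pic2.jpg';
  const url3 = 'https://www.foo.net/pic3.jpg';

  // Each await pauses until the previous download resolves, so the pictures
  // are downloaded and processed sequentially without any nesting.
  process(await download(url1));
  process(await download(url2));
  process(await download(url3));
}

downloadPictures();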