FIDO2 and WebAuthn

Authentication has been an essential part of applications for a long time, because an application needs to know something about the user who's using it. For the longest time, the solution has been the username and password. Username/password is popular because it's convenient to implement, but it isn't secure. There are many issues with passwords.

First, there's the problem of transmitting the password securely. If you send the password over the wire, a man-in-the-middle could sniff it. That pretty much necessitated SSL for such communication, or the equivalent of sending a hash of the password over the wire instead of the actual password. But even those techniques didn't solve the problem of the server securing the password, or a secure hash of it, or, for that matter, keeping you safe from replay attacks. Increasingly complex versions of this protocol were created, to the point where you could say, with some degree of confidence, that you were safe from man-in-the-middle and replay attacks.

Users also created simple, easy-to-remember passwords, and brute-force techniques guessed them. So we came up with complexity requirements: your password must contain an uppercase letter, a lowercase letter, a special character, and meet a minimum length. And yet people still picked poor passwords. When they didn't pick poor passwords that were easy to remember, they reused passwords across different systems. Or they used password managers to store their passwords, until the password manager itself got compromised.

But even then, you're not safe from passwords being leaked. Worse, leaked passwords often go undetected: you don't know your password has been leaked until the breach is discovered. And these leaks can occur on any poorly implemented service you use. This means that no matter what you do, you're still insecure.

Don’t Despair

There are solutions. Concepts like multi-factor authentication (MFA) and one-time passwords can be used in addition to your usual password. This is what you've experienced when you enter a credential and then also have to enter a code sent to you via SMS or generated by an authenticator app on your phone.

MFA and one-time passwords are great. In fact, I'd go to the extent of saying that if a service you use relies only on username and password, just assume it's insecure and don't use it for anything critical. Additionally, pair MFA with common-sense practices: own your domain name, use a separate email address from your everyday one for account recovery, and choose secret questions whose answers aren't easy to guess and make no sense to anyone but you.

As great as MFA and one-time passwords are, they're still not a complete solution. There are a few big issues with this approach.

First, they are cumbersome for the end user to manage. I work with this stuff on a daily basis, and I find it frustrating to manage hundreds of accounts and multiple authenticator apps; I worry that if I ever broke my phone accidentally, I'd be transported to Neanderthal times immediately. I can't imagine how a non-technical person deals with all this.

Second, MFA and one-time passwords are both cumbersome and expensive for the service provider. All those SMS messages and push notifications cost money, which creates a barrier to entry for someone trying to get a service off the ground. Then there's the question of which authenticator app to support and whether that app can be trusted. Is SMS even good enough?

Third, there's the issue of phishing. As great as MFA is, someone can set up a service that looks identical to a legitimate one, and unless you have very keen eyes watching every step, you may fall for it. Unfortunately, even the best of us are tired and stressed at times, and that's when we fall for this. In fact, the unscrupulous service that pretends to be a legitimate one could simply forward your requests to the real service after authentication while stealing your session. You may think everything is hunky-dory, but your session has effectively been stolen.

Finally, there is authentication fatigue. Hey, I just want to log in and use the system. Zero trust dictates that you assume a breach, so it's common for services to over-authenticate. This creates authentication fatigue, and an already fatigued user may blindly approve an MFA request, especially a cleverly disguised one. It takes only one mistake for a hacker to get in the house, and then they can do plenty of damage, potentially remaining undetected for a long time.

What am I Trying to Solve?

I'm not trying to secure passwords or build a better MFA solution. The fundamental problem I wish to solve is how an application can securely trust a user's identity, such that the identity is not cumbersome to manage, is secure, convenient, and…

… this article is continued online.

JavaScript Callback Functions

In JavaScript, we can pass a function to another function as an argument. By definition, a callback is a function that we pass into another function as an argument, to be executed later.
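Here is a minimal sketch of the idea (the greet function and its messages are illustrative): the receiving function decides when to invoke the function passed to it.

function greet(name, callback) {
    console.log(`Hello, ${name}`);
    callback(name);            // invoke the function that was passed in
}

greet('Ada', function (name) {
    console.log(`Finished greeting ${name}`);
});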

We are going to focus on asynchronous callbacks. An asynchronous callback is executed after the execution of the higher-order function that uses it.

Asynchronicity means that if JavaScript has to wait for an operation to complete, it will execute the rest of the code while waiting.

Here is an example of an asynchronous callback.

Suppose that you need to develop a script that downloads a picture from a remote server and processes it after the download completes:

function download(url) {
    // ...
}

function process(picture) {
    // ...
}

download(url);
process(picture);

However, downloading a picture from a remote server takes time depending on the network speed and the size of the picture.

The following download() function uses the setTimeout() function to simulate the network request:

function download(url) {
    setTimeout(() => {
        // script to download the picture here
        console.log(`Downloading ${url} ...`);
    }, 1000);
}

And this code emulates the process() function:

function process(picture) {
    console.log(`Processing ${picture}`);
}

When we execute the following code:

let url = 'https://www.foo.net/pic.jpg';

download(url);
process(url);

we will get the following output:

Processing https://www.foo.net/pic.jpg
Downloading https://www.foo.net/pic.jpg ...

This is not what we expected because the process() function executes before the download() function. The correct sequence should be:

  • Download the picture and wait until the download completes.
  • Process the picture.

To resolve this issue, we can pass the process() function to the download() function and execute the process() function inside the download() function once the download completes, like this:

function download(url, callback) {
    setTimeout(() => {
        // script to download the picture here
        console.log(`Downloading ${url} ...`);
        
        // process the picture once it is completed
        callback(url);
    }, 1000);
}

function process(picture) {
    console.log(`Processing ${picture}`);
}

let url = 'https://www.foo.net/pic.jpg';
download(url, process);

Output:

Downloading https://www.foo.net/pic.jpg ...
Processing https://www.foo.net/pic.jpg

Now, it works as expected.

In this example, the process() function is a callback passed into an asynchronous function.

When we use a callback to continue code execution after an asynchronous operation, the callback is called an asynchronous callback.

To make the code more concise, we can define the process() function as an anonymous function:

function download(url, callback) {
    setTimeout(() => {
        // script to download the picture here
        console.log(`Downloading ${url} ...`);
        // process the picture once it is completed
        callback(url);
    }, 1000);
}

let url = 'https://www.foo.net/pic.jpg';
download(url, function(picture) {
    console.log(`Processing ${picture}`);
}); 

Handling errors

The download() function above assumes that everything works fine and does not consider any exceptions. The following code introduces two callbacks, success and failure, to handle the success and failure cases respectively:

function download(url, success, failure) {
  setTimeout(() => {
    console.log(`Downloading the picture from ${url} ...`);
    !url ? failure(url) : success(url);
  }, 1000);
}

download(
  '',
  (url) => console.log(`Processing the picture ${url}`),
  (url) => console.log(`The '${url}' is not valid`)
);
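Because the URL passed here is empty, the failure callback runs, producing:

Downloading the picture from  ...
The '' is not valid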

Nesting callbacks and the Pyramid of Doom

How do we download three pictures and process them sequentially? A typical approach is to call the download() function inside the callback function, like this:

function download(url, callback) {
  setTimeout(() => {
    console.log(`Downloading ${url} ...`);
    callback(url);
  }, 1000);
}

const url1 = 'https://www.foo.net/pic1.jpg';
const url2 = 'https://www.foo.net/pic2.jpg';
const url3 = 'https://www.foo.net/pic3.jpg';

download(url1, function (url) {
  console.log(`Processing ${url}`);
  download(url2, function (url) {
    console.log(`Processing ${url}`);
    download(url3, function (url) {
      console.log(`Processing ${url}`);
    });
  });
});

Output:

Downloading https://www.foo.net/pic1.jpg ...
Processing https://www.foo.net/pic1.jpg
Downloading https://www.foo.net/pic2.jpg ...
Processing https://www.foo.net/pic2.jpg
Downloading https://www.foo.net/pic3.jpg ...
Processing https://www.foo.net/pic3.jpg

The script works perfectly fine.

However, this callback strategy does not scale well when the complexity grows significantly.

Nesting many asynchronous functions inside callbacks is known as the pyramid of doom or callback hell:

asyncFunction(function(){
    asyncFunction(function(){
        asyncFunction(function(){
            asyncFunction(function(){
                asyncFunction(function(){
                    ....
                });
            });
        });
    });
});

To avoid the pyramid of doom, we use promises or async/await.
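As a sketch of that fix, the same download() can be wrapped in a Promise, and the three sequential downloads flatten into an ordinary loop with async/await (the URLs are the same illustrative ones as above):

function download(url) {
    return new Promise((resolve) => {
        setTimeout(() => {
            console.log(`Downloading ${url} ...`);
            resolve(url);       // fulfill the promise once the download completes
        }, 1000);
    });
}

async function downloadAll() {
    for (const url of [
        'https://www.foo.net/pic1.jpg',
        'https://www.foo.net/pic2.jpg',
        'https://www.foo.net/pic3.jpg',
    ]) {
        const picture = await download(url);    // wait for each download in turn
        console.log(`Processing ${picture}`);
    }
}

downloadAll();

The output is the same as the nested version, but the code reads top to bottom instead of marching to the right.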

How single page applications work

Single page applications use hash-based or pushState navigation to support back/forward gestures and bookmarking. The best-known example is probably Gmail, but there are now many others. If you're not familiar with how the technique works, here is a brief explanation.

For hash-based navigation, the visitor’s position in a virtual navigation space is stored in the URL hash, which is the part of the URL after a ‘hash’ symbol (e.g., /my/app/#category=shoes&page=4). Whenever the URL hash changes, the browser doesn’t issue an HTTP request to fetch a new page; instead it merely adds the new URL to its back/forward history list and exposes the updated URL hash to scripts running in the page. The script notices the new URL hash and dynamically updates the UI to display the corresponding item (e.g., page 4 of the “shoes” category).

This makes it possible to support back/forward button navigation in a single page application (e.g., pressing ‘back’ moves to the previous URL hash), and effectively makes virtual locations bookmarkable and shareable.
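A minimal sketch of the hash-based pattern, where renderPage is a hypothetical UI-update function:

// React to hash changes instead of full page loads.
window.addEventListener('hashchange', () => {
    // location.hash includes the leading '#', so strip it before parsing.
    const params = new URLSearchParams(location.hash.slice(1));
    renderPage(params.get('category'), params.get('page'));  // hypothetical renderer
});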

pushState is an HTML5 API that offers a different way to change the current URL, and thereby insert new back/forward history entries, without triggering a page load. This differs from hash-based navigation in that you’re not limited to updating the hash fragment — you can update the entire URL.
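A corresponding sketch with the History API (again, renderPage is a hypothetical renderer, and the path is illustrative):

// Push a new history entry without triggering a page load.
history.pushState({ category: 'shoes', page: 4 }, '', '/my/app/shoes/4');
renderPage({ category: 'shoes', page: 4 });

// The 'popstate' event fires when the user navigates back/forward.
window.addEventListener('popstate', (event) => {
    renderPage(event.state);   // restore the UI for the stored state
});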

Debounce and Throttle in JavaScript

Debounce and throttle are two techniques we can use in JavaScript to gain more control over how our functions execute; they are especially useful in event handlers.

Both techniques answer the same question, “How often can a certain function be called over time?”, in different ways:

  • Debounce: Think of it as “grouping multiple events into one”. Imagine that you go home, step into the elevator, and the doors are closing… when suddenly your neighbor appears in the hall and tries to catch the elevator. Be polite and hold the doors for him: you are debouncing the elevator's departure. The same situation can happen again with a third person, and so on, possibly delaying the departure by several minutes.
  • Throttle: Think of it as a valve; it regulates the flow of executions. We can determine the maximum number of times a function can be called in a certain time. In the elevator analogy: you are polite enough to let people in for 10 seconds, but once that interval passes, you must go!

Event handlers without debouncing or throttling are like one-person-capacity elevators: not that efficient.

Use cases for debounce

Use it to discard a number of fast-paced events until the flow slows down. Examples (a sketch follows the list):

  • When typing fast in a textarea whose contents will be processed: you don't want to start processing the text until the user stops typing.
  • When saving data to the server via AJAX: you don't want to spam your server with dozens of calls per second.
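A minimal debounce sketch (the 300 ms delay is illustrative, and the usage assumes a textarea exists on the page):

function debounce(fn, delayMs) {
    let timer;
    return function (...args) {
        clearTimeout(timer);    // drop the previously scheduled call
        timer = setTimeout(() => fn.apply(this, args), delayMs);
    };
}

// Hypothetical usage: process the text only after the user pauses typing.
const textarea = document.querySelector('textarea');
textarea.addEventListener('input', debounce((event) => {
    console.log(`Processing: ${event.target.value}`);
}, 300));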

Use cases for throttle

The same use cases as debounce, but you want to guarantee that the callback executes at least once in a given interval (a sketch follows the list):

  • If the user types really fast for 30 seconds, maybe you want to process the input every 5 seconds.
  • It makes a huge performance difference to throttle the handling of scroll events. A simple mouse-wheel movement can trigger dozens of events per second. Even Twitter once had problems with scroll events, so learn from others' mistakes and avoid this easy pitfall.
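A minimal throttle sketch (the 200 ms interval is illustrative):

function throttle(fn, intervalMs) {
    let last = 0;
    return function (...args) {
        const now = Date.now();
        if (now - last >= intervalMs) {
            last = now;
            fn.apply(this, args);   // allow at most one call per interval
        }
    };
}

// Hypothetical usage: handle scroll at most once every 200 ms.
window.addEventListener('scroll', throttle(() => {
    console.log(`Scrolled to ${window.scrollY}`);
}, 200));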


Resource

https://redd.one/blog/thinking-in-functions-higher-order-functions

SQL Cursor – Forward Only vs Fast Forward

Let's take a look at two queries using cursors: the first will use the FORWARD_ONLY cursor type, and the second will use the FAST_FORWARD cursor type. These two types sound very similar but perform quite differently.

DECLARE @firstName as NVARCHAR(50);
DECLARE @middleName as NVARCHAR(50);
DECLARE @lastName as NVARCHAR(50);
DECLARE @phone as NVARCHAR(50);
DECLARE @PeoplePhoneCursor as CURSOR;
 
SET @PeoplePhoneCursor = CURSOR FORWARD_ONLY FOR
SELECT TOP 10 FirstName, MiddleName, LastName, PhoneNumber
  FROM person.Person p
 INNER JOIN person.personphone pp on p.BusinessEntityID = pp.BusinessEntityID;
 
OPEN @PeoplePhoneCursor;
FETCH NEXT FROM @PeoplePhoneCursor INTO @firstName, @middleName, @lastName, @phone;
WHILE @@FETCH_STATUS = 0
BEGIN
     PRINT ISNULL(@firstName, '') + ' ' +
           ISNULL(@middleName, '') + ' ' +
           ISNULL(@lastName, '')  +
           ' Phone: ' + ISNULL(@phone, '') ;
     FETCH NEXT FROM @PeoplePhoneCursor INTO @firstName, @middleName, @lastName, @phone;
END
CLOSE @PeoplePhoneCursor;
DEALLOCATE @PeoplePhoneCursor;

Now for the FAST_FORWARD cursor example. Notice that only one line has changed: the line that now reads “SET @PeoplePhoneCursor = CURSOR FAST_FORWARD FOR”.

DECLARE @firstName as NVARCHAR(50);
DECLARE @middleName as NVARCHAR(50);
DECLARE @lastName as NVARCHAR(50);
DECLARE @phone as NVARCHAR(50);
DECLARE @PeoplePhoneCursor as CURSOR;
 
-- HERE IS WHERE WE SET THE CURSOR TO BE FAST_FORWARD
SET @PeoplePhoneCursor = CURSOR FAST_FORWARD FOR
SELECT TOP 10 FirstName, MiddleName, LastName, PhoneNumber
  FROM person.Person p
 INNER JOIN person.personphone pp on p.BusinessEntityID = pp.BusinessEntityID;
 
OPEN @PeoplePhoneCursor;
FETCH NEXT FROM @PeoplePhoneCursor INTO @firstName, @middleName, @lastName, @phone;
WHILE @@FETCH_STATUS = 0
BEGIN
     PRINT ISNULL(@firstName, '') + ' ' +
           ISNULL(@middleName, '') + ' ' +
           ISNULL(@lastName, '')  +
           ' Phone: ' + ISNULL(@phone, '') ;
     FETCH NEXT FROM @PeoplePhoneCursor INTO @firstName, @middleName, @lastName, @phone;
END
CLOSE @PeoplePhoneCursor;
DEALLOCATE @PeoplePhoneCursor;

The FORWARD_ONLY cursor takes about four times as long as the FAST_FORWARD cursor, and the gap continues to widen as the number of loop iterations grows.

FAST_FORWARD cursors are usually the fastest cursor option in SQL Server. There may be cases where another option works better, but a FAST_FORWARD cursor is a good place to start if you must use a cursor.

The best practice is to avoid cursors altogether, but sometimes they are inevitable. The alternatives include temp tables, WHILE loops, and so on.