.NET Code Analysis with Roslyn Analyzers

.NET Compiler Platform (Roslyn) analyzers inspect your C# or Visual Basic code for style, quality, maintainability, design, and other issues. This inspection, or analysis, happens at design time in all open files.

Here are the key takeaways:

Maintainability index range and meaning

For the thresholds, Microsoft decided to break the 0-100 range down 80-20 to keep the noise level low and flag only code that is genuinely suspicious. The following thresholds are used:

Index value    Color     Meaning
0-9            Red       Low maintainability of code
10-19          Yellow    Moderate maintainability of code
20-100         Green     Good maintainability of code
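
For reference, the formula documented for Visual Studio's code metrics rescales the classic 0-171 maintainability index to this 0-100 range. A minimal sketch in C#; the input values in Main are made up for illustration:

```csharp
using System;

class MaintainabilityIndexSketch
{
    // Visual Studio's documented formula: the classic index,
    // rescaled to 0-100 and clamped at 0.
    static double Compute(double halsteadVolume, int cyclomaticComplexity, int linesOfCode)
    {
        double classic = 171
                         - 5.2 * Math.Log(halsteadVolume)
                         - 0.23 * cyclomaticComplexity
                         - 16.2 * Math.Log(linesOfCode);
        return Math.Max(0, classic * 100 / 171);
    }

    static void Main()
    {
        // Hypothetical method: Halstead volume 1000, complexity 10, 100 lines.
        Console.WriteLine(Compute(1000, 10, 100)); // ~34 -> green (20-100)
    }
}
```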

Code metrics – Class coupling

“Module cohesion was introduced by Yourdon and Constantine as ‘how tightly bound or related the internal elements of a module are to one another’ YC79. A module has a strong cohesion if it represents exactly one task […], and all its elements contribute to this single task. They describe cohesion as an attribute of design, rather than code, and an attribute that can be used to predict reusability, maintainability, and changeability.”

The Magic Number
As with cyclomatic complexity, there is no limit that fits all organizations. However, S2010 does indicate that a limit of 9 is optimal:

“Therefore, we consider the threshold values […] as the most effective. These threshold values (for a single member) are CBO = 9[…].” (emphasis added)
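To make the metric concrete, here is a hypothetical example (all type names are made up). Class coupling counts the distinct types a class or member depends on, so every extra collaborator pushes you toward that threshold of 9:

```csharp
// Stub collaborators so the sketch compiles.
class Customer { }
class Invoice { }
class OrderRepository { public void Save(Invoice invoice) { } }
class EmailService { public void SendReceipt(Customer customer, Invoice invoice) { } }

class OrderProcessor
{
    private readonly OrderRepository _orders; // coupled type 1
    private readonly EmailService _email;     // coupled type 2

    public OrderProcessor(OrderRepository orders, EmailService email)
    {
        _orders = orders;
        _email = email;
    }

    // Coupled types 3 and 4 appear in the signature; a class that keeps
    // accumulating collaborators like this is a candidate for splitting.
    public void Process(Customer customer, Invoice invoice)
    {
        _orders.Save(invoice);
        _email.SendReceipt(customer, invoice);
    }
}
```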

Code metrics – Cyclomatic complexity

https://learn.microsoft.com/en-us/visualstudio/code-quality/code-metrics-cyclomatic-complexity?view=vs-2022

Cyclomatic complexity is defined as measuring “the amount of decision logic in a source code function” NIST235. Simply put, the more decisions that have to be made in code, the more complex it is.
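For example, in this hypothetical method each decision point adds one more path through the code, so the metric is the number of decisions plus one:

```csharp
static class ComplexityDemo
{
    // Cyclomatic complexity = 1 (the straight-line path) + 1 per decision point.
    static string Classify(int score)
    {
        if (score < 0) return "invalid"; // +1
        if (score >= 90) return "A";     // +1
        if (score >= 80) return "B";     // +1
        return "C";
    }
    // Cyclomatic complexity of Classify = 4
}
```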

The Magic Number
As with many metrics in this industry, there’s no exact cyclomatic complexity limit that fits all organizations. However, NIST235 does indicate that a limit of 10 is a good starting point:

“The precise number to use as a limit, however, remains somewhat controversial. The original limit of 10 as proposed by McCabe has significant supporting evidence, but limits as high as 15 have been used successfully as well. Limits over 10 should be reserved for projects that have several operational advantages over typical projects, for example experienced staff, formal design, a modern programming language, structured programming, code walkthroughs, and a comprehensive test plan. In other words, an organization can pick a complexity limit greater than 10, but only if it’s sure it knows what it’s doing and is willing to devote the additional testing effort required by more complex modules.” NIST235

As described by the Software Assurance Technology Center (SATC) at NASA:

“The SATC has found the most effective evaluation is a combination of size and (Cyclomatic) complexity. The modules with both a high complexity and a large size tend to have the lowest reliability. Modules with low size and high complexity are also a reliability risk because they tend to be very terse code, which is difficult to change or modify.” SATC

Putting It All Together
The bottom line is that a high complexity number means greater probability of errors with increased time to maintain and troubleshoot. Take a closer look at any functions that have a high complexity and decide whether they should be refactored to make them less complex.
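One common way to bring the number down, sketched here with made-up names, is to replace a chain of decisions with a data lookup, so the branching moves out of the control flow:

```csharp
using System.Collections.Generic;

static class DiscountPolicy
{
    // Before: if (tier == "gold") ... else if (tier == "silver") ... else if ...
    // After: the decisions live in data, and the method's complexity drops to 2.
    static readonly Dictionary<string, decimal> DiscountByTier = new()
    {
        ["gold"] = 0.20m,
        ["silver"] = 0.10m,
        ["bronze"] = 0.05m,
    };

    public static decimal GetDiscount(string tier) =>
        DiscountByTier.TryGetValue(tier, out var discount) ? discount : 0m;
}
```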

Code metrics – Depth of inheritance (DIT)

Depth of inheritance, also called depth of inheritance tree (DIT), is defined as “the maximum length from the node to the root of the tree”.

High DIT values mean the potential for errors is also high; low values reduce that potential. At the same time, high DIT values indicate greater potential for code reuse through inheritance, while low values suggest less code reuse through inheritance. Due to a lack of sufficient data, there is no currently accepted standard for DIT values.
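A quick illustration with made-up types; in .NET, every class ultimately derives from System.Object, and each level of the hierarchy adds one to the metric:

```csharp
class Vehicle { }          // DIT 1: Object -> Vehicle
class Car : Vehicle { }    // DIT 2: Object -> Vehicle -> Car
class SportsCar : Car { }  // DIT 3: deeper reuse, but also more places
                           // for a base-class change to break things
```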

You can read the full article here:

https://learn.microsoft.com/en-us/visualstudio/code-quality/code-metrics-values?view=vs-2022

Add the following NuGet package to your project:

Microsoft.CodeAnalysis.NetAnalyzers
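
A minimal sketch of what that looks like in the .csproj. The version number is illustrative (pick the latest stable release), and on .NET 5+ the EnableNETAnalyzers/AnalysisLevel properties control the built-in analyzers:

```xml
<PropertyGroup>
  <EnableNETAnalyzers>true</EnableNETAnalyzers>
  <AnalysisLevel>latest</AnalysisLevel>
</PropertyGroup>

<ItemGroup>
  <PackageReference Include="Microsoft.CodeAnalysis.NetAnalyzers" Version="8.0.0">
    <PrivateAssets>all</PrivateAssets>
  </PackageReference>
</ItemGroup>
```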

Integration in Azure DevOps

To integrate with Azure DevOps, follow this article:

https://secdevtools.azurewebsites.net/helpRoslynAnalyzers.html
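
Independent of that article's approach, a simple way to enforce analyzer findings in any pipeline is to fail the build on warnings. A minimal azure-pipelines.yml sketch; the solution name and SDK version are illustrative:

```yaml
steps:
  - task: UseDotNet@2
    inputs:
      version: '8.x'   # illustrative SDK version

  # -warnaserror turns analyzer warnings into build errors,
  # so violations fail the pipeline instead of scrolling past.
  - script: dotnet build MySolution.sln -warnaserror
    displayName: Build with analyzers enforced
```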

Web Application Authentication

Launch browser. Enter username password. Log in. Done. How hard could it be? This is a challenge I run into when I talk about security and identity. My demos are so boring, because after all that jazz, you see “Hello username” written in Times New Roman. It really punctures the demo, doesn’t it?

As it turns out, authentication can be quite involved. And it’s still evolving every day because this boundary to your application is under duress, constantly, and our enemies are getting smarter. That’s what this article is about. And although I’ll talk generically and stick to general concepts, I’ll use Azure AD as an example.

A Long Time Ago

A long time ago when the internet was nascent, we used a text-based browser called Lynx. There were other mechanisms to access information on the internet too. There was this protocol called SMTP, which is still widely used today. Did you know that SMTP existed basically as an unauthenticated protocol for almost three decades? All it took was understanding the protocol and Telnetting to the SMTP server, and you could send email as anyone. Boy, did I have some fun with that.

I used to wonder, how could this be so simple? I could set up three email addresses: A, B, and C. A forwards to B and C, B forwards to A and C, and C is the target inbox. Now I could just initiate a single unauthenticated email to A; A and B would keep forwarding to each other forever, and every cycle would deliver another copy to C, overwhelming it with messages and causing an old-style DDoS (distributed denial of service). And you know what? It worked. This is how insecure the internet used to be not too long ago. Frankly, with a slight modification, you could defeat inboxes even today.

This is what keeps me up at night. The internet, still, feels very flimsy, and thanks to my unimpressive demos, almost nobody cares about security in a project. It’s all about deadlines and features, and security is just an inconvenience to get past.

A very long time ago, we invented an authentication protocol called basic authentication. This was just sending the username and password over the wire in a header. We tried to secure it using HTTPS, but plenty of governments and organizations can sniff HTTPS traffic as a man-in-the-middle. So we came up with NT LAN Manager version 1 (NTLMv1), which was easily defeated. It was followed by NTLMv2, which was better and is still in wide use today, but has plenty of shortcomings. And perhaps the most common legacy protocol in widespread use today is Kerberos. Kerberos relies on a central authority and plenty of network chit-chat, so it didn’t scale to the internet.
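To see why basic authentication is so fragile, here is a minimal sketch: the credentials are merely base64-encoded (not encrypted) into a header, so anyone who can read the traffic can read the password:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

class BasicAuthSketch
{
    static void Main()
    {
        // Base64 is an encoding, not encryption.
        var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("alice:s3cret"));

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Basic", credentials);

        // Every request now carries:
        //   Authorization: Basic YWxpY2U6czNjcmV0
        // which decodes straight back to "alice:s3cret".
    }
}
```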

As these dinosaur protocols roamed the earth, a meteorite called the internet came by to spoil their party. Organizations tried to solve the puzzle by using VPNs and trust relationships between Active Directory domains, but you can’t survive a direct hit from a meteorite like the internet. Well, to be precise, some dinosaurs survived. I mean, we still have crocs and gators around us, right? Kerberos still has a place on your intranet. But we needed something different, something that focused on the internet.

WS-Fed and SAML

In the early 2000s, the WS-* standards started taking shape. One of these was WS-Fed, and with it came SAML, the Security Assertion Markup Language. SAML is an XML packet format. In the early 2000s, accessing the internet was either through a website, i.e., a browser on your desktop, or via protocols such as FTP, Telnet, etc. It became clear that websites could offer value if we found a secure way for users to log in.

A post- and redirect-based standard emerged, called WS-Fed. At a very high level, it separated the responsibilities of the IdP (or identity provider), the entity that performs authentication, from the RP (or relying party), also known as the service provider, which is the application you’re trying to access. The RP trusted the IdP, and this trust was established using certificates. The idea was that the user lands on the RP, and the RP says, “hey, you aren’t authenticated, so please go here to prove who you are.” The user could optionally be given a choice of more than one IdP. The user goes to the IdP, proves who they are (via credentials such as username and password, or more), and the IdP sends back a SAML packet with enough information about the user for the RP to establish identity and proceed.

This “enough information” is the attributes about the user, also known as claims.
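A hedged sketch of that round trip: the wa, wtrealm, and wctx query parameters are part of the WS-Fed protocol, while the URLs here are made up. The IdP later POSTs the SAML token back in a wresult form field for the RP to validate against the IdP’s certificate:

```csharp
using System;

class WsFedRedirectSketch
{
    static void Main()
    {
        // The RP sends the unauthenticated user to the IdP's sign-in endpoint.
        var signInUrl = "https://idp.example.com/wsfed"
            + "?wa=wsignin1.0"                                               // action: sign in
            + "&wtrealm=" + Uri.EscapeDataString("https://rp.example.com/")  // which RP is asking
            + "&wctx=" + Uri.EscapeDataString("returnUrl=/dashboard");       // state to round-trip

        Console.WriteLine(signInUrl);
    }
}
```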

WS-Fed served us well for many years. One of the products that used it was SharePoint. As time progressed, the demands of applications increased. For instance, they expanded who could initiate an authentication. If the RP initiates an authentication…

… this article is continued online. Click here to continue.

Open Source Licenses

Apache License 2.0: The Basics

Open source licenses come in two flavors: permissive and copyleft. The Apache License 2.0 is in the permissive category, meaning that users can do (nearly) anything they want with the code, with very few exceptions.

However, unlike that of the MIT license, the text of the Apache License 2.0 is quite dense and difficult to read. You can read it here, but be warned — it’s fairly heavy on the legal terminology. Below, we’ll break it down into more digestible language.

Read more here

Best practices for securing web API endpoints

Say I’m working for a bank that lets users use a mobile app, and I’m a developer on that mobile app. The app gets an OAuth access token and accesses a web API hosted by the bank. The app has been released.

A regular user who has a valid account in the bank installs the mobile app. Through the mobile app, the user can view the balance, transfer money, etc.

This user has dev skills and notices that he can get the access token after he signs in to the bank with his username and password.

Because this user has a valid account with the bank, the access token is valid for calling the API endpoints. The user does trial and error in Visual Studio to figure out what requests need to be sent to get a valid response from the API. He can manually refresh the access token as many times as needed with the official mobile app, and he eventually finds a way to make valid calls against the API from his dev tool.

The question is: are there any mechanisms that can be used to prevent the user from calling the endpoint without going through the official mobile app? The web API can be marked with the [RequiredScope] attribute, for example, but if he was able to sign in, wouldn’t he have all the permissions that normal users have, such as transferring money?

I have done searches on this topic on the web as it seems to be a common topic, but have not found references yet.

Read answers here.
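
For context on the [RequiredScope] attribute mentioned in the question, here is a minimal sketch assuming ASP.NET Core with Microsoft.Identity.Web (the scope name and controller are made up). Note what it does and does not check: it verifies that the token carries a given scope, not which client crafted the request, which is exactly the gap the question is about:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Identity.Web.Resource;

[Authorize]
[ApiController]
[Route("api/[controller]")]
public class BalanceController : ControllerBase
{
    [HttpGet]
    [RequiredScope("Balance.Read")] // passes for ANY valid token with this scope,
                                    // whether it came from the official app or not
    public IActionResult Get()
    {
        return Ok(new { balance = 100.00m });
    }
}
```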

Resources

https://learn.microsoft.com/en-us/aspnet/core/security/authorization/secure-data?view=aspnetcore-6.0