Cognitive complexity theory with example

Cognitive complexity describes how capably people perceive things in their world. It also refers to the number of cognitive processes required to complete a task: tasks that are complicated to perform involve more processes than simple ones.

Making a sandwich, for example, is a simpler task than writing a term paper. Many more cognitive processes are involved in writing the paper, such as using online resources, doing effective research, and writing within a specific style and tone. Cognitive complexity can also help individuals analyze situations more accurately by allowing them to see shades of nuance and meaning more clearly. The core determining factor in the complexity of an individual’s cognition is their life experience and education. Exposure to complex situations, either through life experience or education and training, allows individuals to form mental constructs.

Building on this definition of cognitive complexity is personal construct theory, which states that individuals interpret the world through mental constructs. These constructs serve as shortcuts, aiding individuals in quickly analyzing situations and tasks. For example, someone could color-code their notebooks to make it easy to identify which notebook is for which subject, rather than sifting through multiple notebooks of the same color. Mental constructs make it easier to solve complex problems by breaking parts of the problem-solving process down into automatic processes.

Cognitive complexity is also used in computer programming to help software engineers describe units of code. Cognitive complexity, in this sense, is how difficult it is to understand a unit of code in a program. Some functions and classes are straightforward and easy to understand, while others have multiple parts, several logic breaks, and references to other files that make them difficult to follow. Complex code is generally more prone to bugs because there are more places for things to go wrong when the program executes. It also makes it more challenging to write new code that interacts with the old, complex code, making rework necessary to simplify it.
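As a loose illustration (a hypothetical C# sketch — the Customer type and eligibility rule are invented for this example), compare two methods that enforce the same rule; the nested version demands more mental bookkeeping from the reader:

public class Customer
{
    public int Age { get; set; }
    public bool IsActive { get; set; }
}

public static class Eligibility
{
    // low cognitive complexity: one linear condition
    public static bool IsEligible(Customer c)
    {
        return c != null && c.Age >= 18 && c.IsActive;
    }

    // higher cognitive complexity: nesting forces the reader
    // to track several paths to understand the same rule
    public static bool IsEligibleNested(Customer c)
    {
        if (c != null)
        {
            if (c.Age >= 18)
            {
                if (c.IsActive)
                {
                    return true;
                }
            }
        }
        return false;
    }
}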

Cognitive Complexity Example

Imagine an individual who is learning how to cook and opens their new cookbook to a recipe they’ve been considering. The recipe calls for several ingredients they’ve never heard of before, so they must take time to research these ingredients, figure out what they are, and search for them at the store. They also discover that they do not have a pan big enough to cook the meal, so they must purchase a new pan as well. Meanwhile, an accomplished chef is making the same recipe. Because they have more experience, they already know what the recipe calls for and have the ingredients in stock. They also already have the correct pan. The first individual consults the recipe for every step, lengthening the prep and cooking process. The chef knows the recipe by heart, so they get the meal prepped and cooked more quickly. In this cognitive complexity example, the chef has more mental constructs available from experience with the dish, resulting in less time spent prepping and cooking. They also know what each ingredient does, so if the flavor is off, they know what to add, while the inexperienced cook might not.

Cognitive Complexity Communication

Cognitive complexity in communication refers to the number of psychological constructs an individual might use to describe someone. These psychological constructs generally refer to personality traits, like “energetic” or “caring.” Those who are more perceptive of others tend to use more psychological constructs to describe people. These people have higher interpersonal cognitive complexity, allowing them to notice more details about a person than somebody with less skill. An average individual might describe someone as “friendly,” while the person with higher interpersonal cognitive complexity will notice that they are also giving and self-confident.

Uses of Cognitive Complexity Theory

As mentioned previously, cognitive complexity is used in software development to describe how complex a given piece of code is. However, it is also used to design computers that think more like humans do. Computers process information through binary code: ones and zeroes. They receive a program to execute, and they execute it; they do not pause for consideration or come up with creative solutions. So, software engineers and scientists are trying to develop ways in which a computer can think more like a human to solve more complex problems. These systems are called artificial intelligence. Artificial intelligence aims to develop computers capable of complex cognitive functions rather than simple ones like memory and perception. Teaching computers to see the nuances in different problems allows them to handle more complex situations in better ways, rather than approaching every problem head-on along the most direct path.

In organizations, cognitive complexity refers to the ability of people to notice patterns in their organization so that they can optimize processes for efficiency. This capability requires seeing the organization both broadly and in detail. Optimizing one part of an organization is a much simpler feat because there are fewer variables to consider. Optimizing an entire organization requires the ability to spot potential bottlenecks, understand supply and demand, know market trends, and much more. If a company is underperforming, leaders need to be able to recognize why and come up with solutions to the problem. Cognitive complexity allows people to think outside the box and develop creative solutions.

Using TransactionScope multiple times

Ignore anything about object structure or responsibilities for persistence; this is an example to help me understand how I should be doing things. I ask partly because it seems not to work when I try to replace Oracle with SQLite as the DB provider factory, and I’m wondering where I should spend time investigating.

Let’s begin with an example;

public class ThingPart
{
    private DbProviderFactory connectionFactory;

    public void SavePart()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // save the bits I want to be done in a single transaction
            SavePartA();
            SavePartB();
            ts.Complete();
        }
    }

    private void SavePartA()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();
        }
    }

    private void SavePartB()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();
        }
    }
}

And something which represents the Thing:

public class Thing
{
    private DbProviderFactory connectionFactory;
    private List<ThingPart> parts;

    public void SaveThing()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            // save the bits I want to be done in a single transaction
            SaveHeader();
            foreach (ThingPart part in parts)
            {
                part.SavePart();
            }
            ts.Complete();
        }
    }

    private void SaveHeader()
    {
        using (DbConnection con = connectionFactory.CreateConnection())
        {
            con.Open();
            DbCommand command = con.CreateCommand();
            ...
            command.ExecuteNonQuery();
        }
    }
}

I also have something which manages many things.

public class ThingManager
{
    private List<Thing> things;

    public void SaveThings()
    {
        using (TransactionScope ts = new TransactionScope())
        {
            foreach (Thing thing in things)
            {
                thing.SaveThing();
            }
            ts.Complete();
        }
    }
}

It’s my understanding that:

The connections will not be new and will be reused from the pool each time (assuming the DbProviderFactory supports connection pooling and it is enabled).

This depends – e.g. the SAME connection will be reused for successive steps in your aggregate transaction if all connections are to the same DB, with the same credentials, and if SQL Server is able to use the Lightweight Transaction Manager (SQL Server 2005 and later). (But SQL connection pooling still works, if that was what you were asking.)


The transactions will be such that if I just called ThingPart.SavePart (from outside the context of any other class), then parts A and B would either both be saved or neither would be.

Atomic SavePart – yes, this will work ACID as expected.


If I call Thing.SaveThing (from outside the context of any other class), then the header and all the parts will all be saved or none will be, i.e. everything will happen in the same transaction.

Yes, nested TransactionScopes that enlist in the same ambient transaction will also be atomic. The transaction only commits when the outermost TransactionScope is completed.


If I call ThingManager.SaveThings, then all my things will be saved or none will be, i.e. everything will happen in the same transaction.

Yes, also atomic, but note that you will be escalating SQL locks. If it makes sense to commit each Thing (and its ThingParts) individually, this would be preferable from a SQL concurrency point of view, as sketched below.
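If per-Thing commits fit the domain, a sketch of that ThingManager variant (my illustration, not from the original question) simply drops the outer scope so each SaveThing call commits its own transaction:

public void SaveThingsIndividually()
{
    foreach (Thing thing in things)
    {
        // each Thing (and its ThingParts) commits in its own transaction,
        // so locks are held briefly instead of across the whole batch
        thing.SaveThing();
    }
}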


If I change the DbProviderFactory implementation that is used, it shouldn’t make a difference.

The provider will need to be compatible as a TransactionScope resource manager (and probably DTC-compliant as well) – e.g. don’t move your database to Rocket U2 and expect TransactionScopes to work.

Just one gotcha – new TransactionScope() defaults to the Serializable isolation level, which is often overly pessimistic for most scenarios; ReadCommitted is usually more applicable.
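A minimal sketch of relaxing that default with the standard System.Transactions types:

var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.DefaultTimeout
};

using (var ts = new TransactionScope(TransactionScopeOption.Required, options))
{
    // work done here runs under ReadCommitted instead of Serializable
    ts.Complete();
}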

Reference

https://stackoverflow.com/questions/7553971/using-transactionscope-multiple-times

Base class implementing interface

Let’s think of it this way;

public interface IBreathing
{
    void Breathe();
}

//because every human breathes
public abstract class Human : IBreathing
{
    public abstract void Breathe();
}

public interface IVillain
{
    void FightHumanity();
}

public interface IHero
{
    void SaveHumanity();
}

//not every human is a villain
public class HumanVillain : Human, IVillain
{
    public override void Breathe() { }
    public void FightHumanity() { }
}

//but not every human is a hero either
public class HumanHero : Human, IHero
{
    public override void Breathe() { }
    public void SaveHumanity() { }
}

The point is that a base class should implement an interface (or inherit it but only expose its definition as abstract) only if every class that derives from it should also implement that interface. So, with the basic example provided above, you’d make Human implement IBreathing only if every Human breathes (which is correct here).

But you can’t make Human implement both IVillain and IHero, because then we would be unable to distinguish later on whether a given Human is one or the other. In fact, such an implementation would imply that every Human is both a villain and a hero at once.
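A quick usage sketch (hypothetical driver code) shows the payoff — every Human can be treated as IBreathing, while hero-specific behavior stays behind its own interface:

IBreathing[] people = { new HumanHero(), new HumanVillain() };
foreach (IBreathing person in people)
{
    person.Breathe(); // works for every Human
}

IHero hero = new HumanHero();
hero.SaveHumanity(); // only reachable through IHero (or HumanHero)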

Conclusion

  1. There is no risk in a base class implementing an interface if every class deriving from it should implement that interface too.
  2. In that case, implementing the interface on the base class (or exposing it there as abstract) is not just safe – it’s practically a must.
  3. If only some deriving classes should implement the interface, implement it on those concrete classes instead of the base class.

Model binding for HTTP GET requests

We are going to learn how to pass data in ASP.NET Core using model binding for an HTTP GET request. The GET method is generally used for less complex and non-sensitive data transfer.

Model binding refers to converting HTTP request data (from the query string or form collection) to action method parameters. These parameters can be of primitive or complex types.

We will be using Visual Studio 2022 for this test. We are going to use the following model in this exercise;

public class Project
{
    public int Id { get; set; }

    public ProjectIdentity? ProjectIdentity { get; set; }
}

public class ProjectIdentity
{
    public int Id { get; set; }

    public int ModificationId { get; set; }
}

First test case – Primitive Type

An HTTP GET request embeds data into the query string. The MVC framework automatically converts the query string to action method parameters, provided their names match.

For example;

Create a new project using the MVC template in Visual Studio. Don’t make any changes to the defaults, and make sure jQuery works. Add the following to the index.cshtml page;

@section scripts {
    <script type="text/javascript">
    var sample;

    $(function () {
        console.log("ready!");
    });
    </script>
}

Run the project. You should see “ready!” in the browser console window. We are now ready to do our GET tests using jQuery Ajax.

Add an HTML button with an onclick event;

<div><button type="button" class="btn btn-primary" onclick="sample.OnButtonClick();">Get Request</button></div>

Create a Sample class with the click handler, and create an instance of it in the jQuery ready function;

class Sample
{
    constructor() { }

    OnButtonClick()
    {
        $.ajax({
            type: 'GET',
            url: "@Url.Action("GetInfoPrimitive", "Home")",
            dataType: 'json',
            contentType: 'application/json',
            data: { "ProjectId": "1" },

            success: function (result) {
                alert('Data submitted.');
                console.log(result);
            },
            error: function (result) {
                alert("Something went wrong. Please contact the System Administrator.");
                console.log(result);
            }
        });
    }
}

Our complete jQuery function would look like this;

@section scripts {
    <script type="text/javascript">
    var sample;

    $(function () {
        console.log("ready!");
        sample = new Sample();
    });

    class Sample
    {
        constructor() { }

        OnButtonClick()
        {
            $.ajax({
                type: 'GET',
                url: "@Url.Action("GetInfoPrimitive", "Home")",
                dataType: 'json',
                contentType: 'application/json',
                data: { "ProjectId": "1" },

                success: function (result) {
                    alert('Data submitted.');
                    console.log(result);
                },
                error: function (result) {
                    alert("Something went wrong. Please contact the System Administrator.");
                    console.log(result);
                }
            });
        }
    }
    </script>
}

We need to create a method on the server side that will be called by the Ajax request;

[HttpGet]
public int GetInfoPrimitive(int projectId)
{
    Console.WriteLine($"Incoming parameter value: {JsonSerializer.Serialize(projectId)}");
    return projectId;
}

Put a breakpoint on the “return projectId” line. Run the app and click the Get Request button.

The debugger should stop on the “return projectId” statement, and the console shows the incoming parameter value.

Click continue to resume the application. There is no problem with this method.
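For reference, jQuery serializes the data object into the query string, so the request issued above looks roughly like this (assuming the default route):

GET /Home/GetInfoPrimitive?ProjectId=1

Model binding then matches ProjectId to the projectId parameter; the match is case-insensitive.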

Second test case – Binding to Complex Type

Model binding also works with complex types. In an HTTP GET request, it automatically converts input field data on the view to the properties of a complex type parameter of an action method, provided the property names match the fields on the view.

For example;

The only change from the earlier example code is that we are going to pass an object to the view and use that object to submit the GET request;

public IActionResult Index()
{
    var request = new ProjectIdentity();
    return View(request);
}

[HttpGet]
public ProjectIdentity GetInfoComplex(ProjectIdentity request)
{
    Console.WriteLine($"Incoming parameter value: {JsonSerializer.Serialize(request)}");
    return request;
}

This is the change on the JavaScript side;

OnButtonClick()
{
    $.ajax({
        type: 'GET',
        url: "@Url.Action("GetInfoComplex", "Home")",
        dataType: 'json',
        contentType: 'application/json',
        data: { "Id": "1", "ModificationId": "100" },

        success: function (result) {
            alert('Data submitted.');
            console.log(result);
        },
        error: function (result) {
            alert("Something went wrong. Please contact the System Administrator.");
            console.log(result);
        }
    });
}

Put a breakpoint on “return request;” and run the app. The request parameter arrives populated with Id = 1 and ModificationId = 100.
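Again, the data object is serialized into the query string, so the request looks roughly like:

GET /Home/GetInfoComplex?Id=1&ModificationId=100

The keys match the property names of ProjectIdentity, which is why default binding succeeds here.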

Third test case – Binding to nested Complex Type

Let’s bind to the parent class, which contains the nested type, using a similar method;

//here is the problem. GET request will ignore nested object mapping
[HttpGet]
public Project GetInfo(Project request)
{
    Console.WriteLine($"Incoming parameter value: {JsonSerializer.Serialize(request)}");
    return request;
}

Change on client side;

OnButtonClick()
{
    $.ajax({
        type: 'GET',
        url: "@Url.Action("GetInfo", "Home")",
        dataType: 'json',
        contentType: 'application/json',
        data: { "Id": "1", "ProjectIdentity": { "Id": "11", "ModificationId": "100" } },

        success: function (result) {
            alert('Data submitted.');
            console.log(result);
        },
        error: function (result) {
            alert("Something went wrong. Please contact the System Administrator.");
            console.log(result);
        }
    });
}

The output shows the problem: Id is bound, but the nested ProjectIdentity object is ignored and stays null.

So how can we fix this problem in a GET request? Automatic mapping will not work; in this case we need to map the properties explicitly. Here is how;

//this is how to solve it for a GET request
[HttpGet]
public Project GetInfoFixed(
    [FromQuery(Name = "Id")] int ProjectId,
    [FromQuery(Name = "ProjectIdentity[Id]")] int ProjectIdentityId,
    [FromQuery(Name = "ProjectIdentity[ModificationId]")] int? ProjectModificationId)
{
    var response = new Project()
    {
        Id = ProjectId,
        ProjectIdentity = new ProjectIdentity()
        {
            Id = ProjectIdentityId,
            // GetValueOrDefault avoids a runtime exception if the value is missing
            ModificationId = ProjectModificationId.GetValueOrDefault()
        }
    };
    Console.WriteLine($"Incoming parameter value: {JsonSerializer.Serialize(response)}");
    return response;
}

Client side change;

OnButtonClick()
{
    $.ajax({
        type: 'GET',
        url: "@Url.Action("GetInfoFixed", "Home")",
        dataType: 'json',
        contentType: 'application/json',
        data: { "Id": "1", "ProjectIdentity": { "Id": "11", "ModificationId": "100" } },

        success: function (result) {
            alert('Data submitted.');
            console.log(result);
        },
        error: function (result) {
            alert("Something went wrong. Please contact the System Administrator.");
            console.log(result);
        }
    });
}

This time the full object graph is created: Id, ProjectIdentity.Id, and ProjectIdentity.ModificationId are all populated.
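As a side note, the explicit mapping may be avoidable by flattening the keys on the client instead — a sketch that relies on ASP.NET Core’s dot-notation prefix binding for nested properties (worth verifying against your framework version), with GetInfo(Project request) left unchanged:

data: { "Id": "1", "ProjectIdentity.Id": "11", "ProjectIdentity.ModificationId": "100" },

This produces a query string like Id=1&ProjectIdentity.Id=11&ProjectIdentity.ModificationId=100, a shape the default complex type binder understands.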

Conclusion

As you have seen, the ASP.NET Core MVC framework automatically converts request values into primitive or complex type objects. Model binding is a two-step process: first, it collects values from the incoming HTTP request; second, it populates a primitive or complex type with those values.

Value providers are responsible for collecting values from requests, and Model Binders are responsible for populating values.

By default, the HTTP methods GET, HEAD, OPTIONS, and DELETE do not bind data from the request body. To bind data explicitly from the request body (treated as JSON) for these methods, we can use the [FromBody] attribute. We can also use the [FromQuery] attribute to bind from the query string, as sketched below.
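For illustration, here are hypothetical action methods (not part of the sample project) showing both attributes:

// bind the JSON request body explicitly
[HttpPost]
public IActionResult SaveProject([FromBody] Project project)
{
    // ... persist the project ...
    return Ok(project);
}

// bind a single value from the query string explicitly
[HttpGet]
public IActionResult FindProject([FromQuery] int id)
{
    // ... look up the project ...
    return Ok(id);
}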

Use POST for destructive actions such as creation (I’m aware of the irony), editing, and deletion, because you can’t hit a POST action in the address bar of your browser. Use GET when it’s safe to allow a person to call an action. So a URL like:

http://myblog.org/admin/posts/delete/357

Should bring you to a confirmation page, rather than simply deleting the item. It’s far easier to avoid accidents this way.

Resources

https://learn.microsoft.com/en-us/aspnet/core/mvc/models/model-binding?view=aspnetcore-7.0#simp7

Web Application Authentication

Launch a browser. Enter a username and password. Log in. Done. How hard could it be? This is a challenge I run into when I talk about security and identity. My demos are so boring, because after all that jazz, you see “Hello username” written in Times New Roman. It really punctures the demo, doesn’t it?

As it turns out, authentication can be quite involved. And it’s still evolving every day because this boundary to your application is under duress, constantly, and our enemies are getting smarter. That’s what this article is about. And although I’ll talk generically and stick to general concepts, I’ll use Azure AD as an example.

A Long Time Ago

A long time ago when the internet was nascent, we used a text-based browser called Lynx. There were other mechanisms to access information on the internet too. There was this protocol called SMTP, which is still widely used today. Did you know that SMTP existed basically as an unauthenticated protocol for almost three decades? All it took was understanding the protocol and Telnetting to the SMTP server, and you could send email as anyone. Boy, did I have some fun with that.

I used to wonder, how could this be so simple? I could set up three email addresses: A, B, and C. A forwards to B and C; B forwards to A and C; and C is the target mailbox. Now I could just send a single unauthenticated email to A, and C would be overwhelmed by messages, causing an old-style DDoS (distributed denial of service). And you know what? It worked. That is how insecure the internet used to be, not too long ago. Frankly, with a slight modification, you could defeat inboxes even today.

This is what keeps me up at night. The internet, still, feels very flimsy, and thanks to my unimpressive demos, almost nobody cares about security in a project. It’s all about deadlines and features, and security is just an inconvenience to get past.

A very long time ago, we invented an authentication protocol called basic authentication. This was just sending the username and password over the wire in a header. We tried to secure it using HTTPS, but plenty of governments and organizations can sniff HTTPS traffic as a man-in-the-middle. So we came up with NT LAN Manager version 1 (NTLMv1), which was easily defeated, followed by NTLMv2, which was better and is still in wide use today but has plenty of shortcomings. And perhaps the most common legacy protocol in widespread use today is Kerberos. Kerberos relies on a central authority and plenty of network chatter, so it didn’t scale to the internet.

As these dinosaur protocols roamed the earth, a meteorite called the internet came by to spoil their party. Organizations tried to solve the puzzle by using VPNs and trust relationships between active directories, but you can’t survive a direct hit from a meteorite like the internet. Well, to be precise, some dinosaurs survived. I mean we still have crocs and gators around us, right? Kerberos still has a place on your intranet. But we needed something different, something that focused on the internet.

WS-Fed and SAML

In the early 2000s, the WS-* standards started taking shape. One of these was WS-Fed, and with it came SAML, the Security Assertion Markup Language. SAML is an XML packet format. In the early 2000s, accessing the internet meant either a website, i.e., a browser on your desktop, or protocols such as FTP, Telnet, etc. It became clear that websites could offer real value if we found a secure way for users to log in.

A POST- and redirect-based standard emerged, called WS-Fed. At a very high level, it separated the responsibilities of the IdP (identity provider), the entity that performs authentication, and the RP (relying party), also known as the service provider, which is the application you’re trying to access. The RP trusted the IdP, and this trust was established using certificates. The idea was that the user lands on the RP, and the RP says, “hey, you aren’t authenticated, so please go here to prove who you are.” The user could optionally be given more than one choice of IdP. The user goes to the IdP, proves who they are (via credentials such as a username and password, or more), and the IdP sends back a SAML packet with enough information about the user for the RP to establish identity and proceed.

This “enough information” is the attributes about the user, also known as claims.
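In outline, the flow just described looks like this (a simplified sketch):

  1. The user requests a page on the RP without a session.
  2. The RP redirects the browser to the IdP with a sign-in request.
  3. The user authenticates at the IdP, for example with a username and password.
  4. The IdP sends back a response that posts a signed SAML token to the RP.
  5. The RP validates the token against the IdP certificate it trusts, reads the claims, and signs the user in.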

WS-Fed served us well for many years. One of the products that used it was SharePoint. As time progressed, the demands of applications increased. For instance, they expanded who could initiate an authentication. If the RP initiates an authentication…

… this article is continued online.