I think most programmers agree that simpler solutions (generally matching "lower cognitive load") are preferred, but the disagreements start over which ones are simpler: often a lower cognitive load comes with approaches one is more used to or familiar with, when the mental models one already has match those in the code.
For instance, the article itself suggests to use early/premature returns, while they are sometimes compared to "goto", making the control flow less obvious/predictable (as paxcoder mentioned here). Intermediate variables, just as small functions, can easily complicate reading of the code (in the example from the article, one would have to look up what "isSecure" means, while "(condition4 && !condition5)" would have shown it at once, and an "is secure" comment could be used to assist skimming). As for HTTP codes, those are standardized and not dependent on the content, unlike custom JSON codes: most developers working with HTTP would recognize those without additional documentation. And it goes on and on: people view different things as good practices and being simpler, depending (at least in part) on their backgrounds. If one considers simplicity, perhaps it is best to also consider it as subjective, taking into account to whom it is supposed to look simple. I think sometimes we try to view "simple" as something more objective than "easy", but unless it is actually measured with something like Kolmogorov complexity, the objectivity does not seem to be there.
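To make the early-return disagreement concrete, here is a minimal sketch of both shapes (the function and field names are invented for illustration); which one carries less cognitive load arguably depends on which pattern the reader is already used to:

```python
# Early-return style: disqualifiers read top-to-bottom, the happy path stays flat.
def access_check_early(user, resource):
    if user is None:
        return "anonymous"
    if not user.is_active:
        return "account disabled"
    if not resource.readable_by(user):
        return "forbidden"
    return "ok"

# Single-exit style: one place where the function returns, at the cost of nesting.
def access_check_nested(user, resource):
    result = "anonymous"
    if user is not None:
        if not user.is_active:
            result = "account disabled"
        elif not resource.readable_by(user):
            result = "forbidden"
        else:
            result = "ok"
    return result
```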
This was my main takeaway from A Philosophy Of Software Design by John Ousterhout. It is the best book on this subject and I recommend it to every software developer.
Basically, you should aim to minimise complexity in software design, but importantly, complexity is defined as "how difficult is it to make changes to it". "How difficult" is largely determined by the amount of cognitive load necessary to understand it.
I'm probably one of the "smart developers" with quirks. I try to build abstractions.
I'm both bothered and intrigued by the industry returning to, what I call, "pile-of-if-statements architecture". It's really easy to think it's simple, and it's really easy to think you understand, and it's really easy to close your assigned Jira tickets; so I understand why people like it.
People get assigned a task, they look around and find a few places they think are related, then add some if-statements to the pile. Then they test; if the tests fail they add a few more if-statements. Eventually they send it to QA; if QA finds a problem, another quick if-statement will solve the problem. It's released to production, and it works for a high enough percentage of cases that the failure cases don't come to your attention. There's approximately 0% chance the code is actually correct. You just add if-statements until you asymptotically approach correctness. If you accidentally leak the personal data of millions of people, you won't be held responsible, and the cognitive load is always low.
But the thing is... I'm not sure there's a better alternative.
You can create a fancy abstraction and use a fancy architecture, but I'm not sure this actually increases the odds of the code being correct.
Especially in corporate environments--you cannot build a beautiful abstraction in most corporate environments because the owners of the business logic do not treat the business logic with enough care.
"A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?
You can't build careful bug-free abstractions in corporate environments.
So, is pile-of-if-statements the best we can do for business software?
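For concreteness, a hedged sketch of what that accretion tends to look like, reusing the multi-address example above (all names, fields, and rules are invented):

```python
def shipping_addresses(order):
    # Original rule: a single order ships to a single address.
    addresses = [order["address"]]

    # Added after the salesman's promise: some orders carry extra addresses.
    addresses += order.get("extra_addresses", [])

    # Added after QA found duplicate shipments: de-duplicate, preserving order.
    addresses = list(dict.fromkeys(addresses))

    # Added after a production incident: drop empty/None addresses.
    return [a for a in addresses if a]
```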
The ability to create code that imposes low cognitive load on others is not only a rare and difficult skill to cultivate; it takes active effort and persistence to apply, even for someone who already has the ability and motivation. I think fundamentally the developer is computing a mental compression of the core ideas, distilling them to their essence, and then making sure that the code exposes only the minimum essential complexity of those ideas. Not easy, and rare to see in practice.
This article reminds me of my early days at Microsoft. I spent 8 years in the Developer Division (DevDiv).
Microsoft had three personas for software engineers that were eventually retired for a much more complex persona framework called people in context (the irony in relation to this article isn’t lost on me).
But those original personas still stick with me and have been incredibly valuable in my career to understand and work effectively with other engineers.
Mort - the pragmatic engineer who cares most about the business outcome. If a “pile of if statements” gets the job done quickly and meets the requirements, that's what Mort ships. Mort became a pejorative term at Microsoft, unfortunately; VB developers were often Morts, Access developers were often Morts.
Elvis - the rockstar engineer who cares most about doing something new and exciting. Being the first to use the latest framework or technology. Getting visibility and accolades for innovation. The code might be a little unstable - but move fast and break things right? Elvis also cares a lot about the perceived brilliance of their code - 4 layers of abstraction? That must take a genius to understand and Elvis understands it because they wrote it, now everyone will know they are a genius.
For many engineers at Microsoft (especially early in career) the assumption was (and still is largely) that Elvis gets promoted because Elvis gets visibility and is always innovating.
Einstein - the engineer who cares about the algorithm. Einstein wants to write the most performant, the most elegant, the most technically correct code possible. Einstein cares more if they are writing “pythonic” code than if the output actually solves the business problem. Einstein will refactor 200 lines of code to add a single new conditional to keep the codebase consistent. Einsteins love love love functional languages.
None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives - but I can usually pin one of these 3 as the primary within a few days of PRs and a single design review.
A lot of comments mention John Ousterhout's book A Philosophy of Software Design and its definition of the complexity of a system as cognitive load (i.e. the number of disparate things one has to keep in mind when making a change). However, IIRC from the book, complexity of a system = cognitive load * frequency of change.
The second component, frequency of change is equally important as when faced with tradeoffs, we can push high cognitive load to components edited less frequently (eg: lower down the stack) in exchange for lower cognitive load in the most frequently edited components.
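A toy illustration of that trade-off under the recalled formula (component names, loads, and change frequencies are all invented):

```python
components = {
    # component: (cognitive load per change, changes per year)
    "checkout_flow":   (3, 50),   # touched constantly, so keep it simple
    "tax_engine_core": (8, 2),    # rarely touched, so it can absorb the hard parts
}

total = sum(load * freq for load, freq in components.values())
print(total)  # 3*50 + 8*2 = 166 -- the hot path dominates, so simplify there first
```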
This is one of the reasons I fear AI will harm the software engineering industry. AI doesn't have any of these limitations, so it can write extremely complex and unreadable code that works... until it doesn't. And then no one can fix it.
It's also why I urge junior engineers to not rely on AI so much because even though it makes writing code so much faster, it prevents them from learning the quirks of the codebase and eventually they'll lose the ability to write code on their own.
Cognitive load is an important concept in aviation. It is linked to the number of tasks to run and the number of parameters to monitor, but it can be greatly reduced by training. Things you know inside and out don't seem to consume as much working memory.
So in software development there may be an argument to always structure projects the same way. Standards are good, even when they're bad, because one of their main benefits is familiarity.
The bit about "smart developer quirks" looks suspiciously like the author only understands code that they have written, or is in a specific style that they recognize. That's not the biggest driver behind cognitive load.
Reducing cognitive load comes from the code that you don't have to read.
Boundaries between components with strong guarantees let you reason about a large amount of code without ever reading it. Making a change (which the article uses as a benchmark) is done in terms of these clear APIs instead of with all the degrees of freedom available in the codebase.
If you are using small crisp API boundaries to break up the system, "smart developer quirks" don't really matter very much. They are visible in the volume, but not in the surface area.
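A minimal sketch of what "reasoning at the surface area" might look like (the gateway interface and names are hypothetical, not from the article):

```python
from typing import Protocol

class PaymentGateway(Protocol):
    def charge(self, account_id: str, cents: int) -> bool:
        """Returns True iff the charge settled."""
        ...

def collect_payment(gateway: PaymentGateway, account_id: str, cents: int) -> str:
    # Whatever quirks live inside a concrete gateway, this code is read and
    # changed against the small contract above, not the volume behind it.
    return "paid" if gateway.charge(account_id, cents) else "declined"
```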
I have a hard time separating the why and the what so I document both.
The biggest offender of "documenting the what" is:
x = 4 // assign 4 to x
Yeah, don't do that. Don't mix a lot of comments into the code. It makes it ugly to read, and the context switching between code and comments is hard.
Instead do something like:
// I'm going to do
// a thing. The code
// does the thing.
// We need to do the
// thing, because the
// business needs a
// widget and stuff.
setup();
t = setupThing();
t.useThing(42);
t.theWidget(need=true);
t.alsoOtherStuff();
etc();
etc();
Keep the code and comments separate, but stating the what is better than no comments at all, and it does help reduce cognitive load.
"A single page on Doordash can make upward of 1000 gRPC calls (see the interview). For many engineers, upward of a thousand network calls nicely illustrate the chaos and inefficiency unleashed by microservices. Engineers implicitly diff 1000+ gRPC calls with the orders of magnitude fewer calls made by a system designed by an architect looking at the problem afresh today. A 1000+ gRPC calls also seem like a perfect recipe for blowing up latency. There are more items in the debit column. Microservices can also increase the costs of monitoring, debugging, and deployment (and hence cause greater downtime and worse performance)."
Love it. Make code accessibility a first-class citizen. Turn the rule books and their principles into guidelines. A smart coder knows to follow rules. A master knows code is meant to be read and develops contextual awareness for when and why to break a rule, or augment it, as the case may be. So, reintroduce judgment and critical thinking in your coding practice. Develop an intuitive feel for the cognitive costs and trade-offs of your decisions. Whether you choose to duplicate or abstract, think of the next person (who sometimes is you in six months).
For those asking why the author doesn't come up with their own new rules that can then be followed: this would just be trading a problem for the same problem, absentmindedly following rules. Writing accessible code, past a few basic guidelines, becomes tacit knowledge. If you write and read code, you'll learn to love some and hate some. You'll also develop a feel for heavy-handedness. The author said it best:
> It's not imagined, it's there and we can feel it.
We can feel it. Yes, having to make decisions while coding is an uncomfortable freedom. It requires you to be present. But you can get used to it if you try.
This is why I make lists. Of everything. Checklists for technical processes (work and personal). Checklists for travel. Little "how to" docs on pretty much everything I do that I'm sure I won't remember past a week.
It completely removes the stress of doing things repeatedly. I recently had to do something I hadn't done in 2 years. Yep, the checklist/doc on it was 95% correct, but it was no problem fixing the 5%.
I think it's pretty tiresome that "smart authors" are blamed for writing complex code. Smart authors generally write simpler code. It's much harder to write simple code than complex for reasons that boil down to entropy -- there are simply many more ways to write complex code than simple code, and finding one of the simple expressions of program logic requires both smarts and a modicum of experience.
If you try to do it algorithmically, you arguably won't find a simple expression. It's often glossed over how readability along one axis can drive complexity along another: especially when composing code into bite-size readable chunks, the actual logic easily gets smeared across many (sometimes dozens of) different functions, making it very hard to figure out what it actually does, even though all the functions check all the boxes for readability, having a single responsibility, etc.
E.g. userAuthorized(request) is true, but why is it true? Well, because usernamePresent(request) is true and passwordCorrect(user) is true, both of which also decompose into multiple functions and conditions. It's often a smaller cognitive load to just have all that logic in one place; even if it's not the local optimum of readability, it may be the global one, because needing to constantly skip between methods or modules to figure out what is happening is also incredibly taxing.
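A sketch of the same check in both shapes, following the names in this comment (the request is treated as a plain dict, and the helpers take the request directly, purely to keep the sketch self-contained):

```python
# Decomposed: each helper reads well, but answering "why is it True?" means
# chasing through every one of them.
def username_present(request) -> bool:
    return bool(request.get("username"))

def password_correct(request) -> bool:
    return request.get("password") == request.get("expected_password")

def user_authorized(request) -> bool:
    return username_present(request) and password_correct(request)

# Inlined: one place to look when debugging, at the cost of a denser function.
def user_authorized_inline(request) -> bool:
    return (bool(request.get("username"))
            and request.get("password") == request.get("expected_password"))
```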
Unit and integration testing are great for decreasing cognitive load too. When you are staring at an error stack trace in a complex code base, mentally going through what could have played out to cause it, it's great to have confidence in components due to testing. Hypothesis/QuickCheck allows dropping entire classes of worries.
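A tiny Hypothesis property test of the kind referred to here: instead of hand-picking cases, you state an invariant over generated input and let the library hunt for counterexamples (the function under test is a made-up stand-in):

```python
from hypothesis import given, strategies as st

def normalize_whitespace(s: str) -> str:
    # Collapse runs of whitespace to single spaces and trim the ends.
    return " ".join(s.split())

@given(st.text())
def test_normalize_is_idempotent(s):
    once = normalize_whitespace(s)
    # Applying it twice changes nothing -- a whole class of bugs ruled out at once.
    assert normalize_whitespace(once) == once
```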
Boy, if I had a dollar for every “we’ve been doing it wrong” post.
The issue with this stance is, it’s not a zero-sum game. There’s no arriving at a point where there isn’t a cognitive load on the task you’re doing. There will always be some sort of load. Pushing things off so that you reduce your load is how social security databases end up on S3.
Confusion comes from complexity, not a high cognitive load. You can have a high load and still know how it all works. I would word this better as: cognitive load increases stress, as you have more things to wrestle with in your head. It doesn’t add or remove confusion (unless that’s the kind of person you are), it just adds or removes complexity.
An example of a highly complex thing with little to no cognitive load, thanks to conditioning: driving an automobile. A not-complex thing that imparts a huge cognitive load: golf.
This is actually some wonderful work that succinctly explains a lot of my experience. Much of how I was formally taught to program is counterproductive to the big picture the second someone else has to understand the code. It's part of the reason that I hate dealing with Rust and C++, and breathe a sigh of relief when the codebase I need to suck into my head is good old C. C offers fewer ways to hide all the working code in six layers of templates.
It's always interesting that many people who push the cognitive load argument also push for simpler languages. To me, once I have learned a language well, its features don't add to the cognitive load; they become basically second nature. There's even a great benefit: many things that are explicit in simple languages, because there is no language support, fall away in more complex languages. So more complex languages reduce cognitive load, at least for me.
The most important user of my temporary variables, à la "isValid" or "isSecure", is older/later me.
I could be adding a new feature six months later, or debugging a customer-reported issue a week later. Especially in the latter case, where the pressure is greater and available time more constrained, I love that earlier/younger me was thoughtful enough to take the extra time to make things clear. That this might help others is lagniappe.
> There is no “simplifying force” acting on the code base other than deliberate choices that you make. Simplifying takes effort, and people are too often in a hurry.
There is a simplifying force: the engineers on the project who care about long-term productivity. Work to simplify the code is rarely tracked or rewarded, which is a problem across our industry. Most codebases I've worked in had some large low-hanging-fruit for increasing team productivity, but it's hard to show the impact of that work so it never gets done.
We need an objective metric of codebase cognitive complexity. Then folks can take credit for making the number go down.
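Such metrics exist in rough form; a sketch in the spirit of "cognitive complexity" counters (+1 per branch, an extra penalty for nesting, +1 per extra boolean operator) might look like this. It is only an illustration, not a calibrated measure:

```python
import ast

def cognitive_complexity(source: str) -> int:
    score = 0

    def walk(node, nesting):
        nonlocal score
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.If, ast.For, ast.While, ast.Try, ast.ExceptHandler)):
                score += 1 + nesting              # branches cost more when nested
                walk(child, nesting + 1)
            elif isinstance(child, ast.BoolOp):
                score += len(child.values) - 1    # each extra and/or adds load
                walk(child, nesting)
            else:
                walk(child, nesting)

    walk(ast.parse(source), 0)
    return score

print(cognitive_complexity("if a and b:\n    if c:\n        pass\n"))  # -> 4
```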
These are good tips. A lot of it boils down to writing well-organized code geared for human consumption.
Junior programmers too often make the mistake of thinking the code they write is intended for consumption by the machine.
Coding is an exercise in communication. Either to your future self, or some other schmuck down the line who inherits your work.
When I practice the craft, I want to make sure years down the line when I inevitably need to crack the code back open again, I'll understand what's going on. When you solve a problem, create a framework, build a system... there's context you construct in your head as you work the project or needle out the shape of the solution. Strive to clearly convey intent (with a minimum of cognitive load), and where things get more complicated, make it as painless as possible for the next person to take the context that was in your head and reconstruct it in their own. Taking the patterns in your brain and recreating them in someone else's brain is in fact the essence of communication. In practice, this could mean including meaningful inline comments or accompanying documentation (eg. approach summary, drawings, flowcharts, state change diagrams, etc). Whatever means you have to efficiently achieve that aim. If it helps, think of yourself as a teacher trying to teach a student how your invention works.
I spent a few decades in the industry, and in even more teams. I think the quality of code strongly correlates with the team's ability to articulate its members' cognitive load and skills. In some projects it is just not opportune to point out a need to skill up, so everybody just accepts whatever in PRs and quality never gets any better.
On the other end of the spectrum, you hear sentences starting with: "It would help me to understand this more easily, if ...". Guess what happens over time in these teams?
> Then QA engineers come into play: "Hey, I got 403 status, is that expired token or not enough access?"
To be fair, the HTTP status line allows for arbitrary informational text, so something like “HTTP/1.1 401 JWT token expired” would be perfectly allowable.
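For what it's worth, Python's standard library exposes exactly this: send_response() takes an explicit reason phrase, so the status line itself can carry the detail (though HTTP/2 drops reason phrases, so clients shouldn't be expected to parse them):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class AuthDemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Status line becomes e.g. "HTTP/1.0 401 JWT token expired".
        self.send_response(401, "JWT token expired")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AuthDemoHandler).serve_forever()
```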
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" -- Brian Kernighan
I think these books and resources should not be viewed as hard rules, but as sets of examples explaining guiding principles, and the internet is full of discussions that turn into religious wars over it.
It is always worth it for a programmer to dwell on what complexity is according to Ousterhout; it is worth it to reason over what Uncle Bob thinks is "clean" code, etc. I'm not benefiting from either by applying what they say dogmatically, but I improve my taste in what is good software to me by discovering and trying many approaches. Without reading them I might never even have thought of a particular solution, or a particular frame of mind.
Lowering the cognitive load by assigning temporary variables requires more thought and skill than credited here.
In particular these variables need to be extremely well named, otherwise people reading the code will still need to remember what exactly is abstracted if the wording doesn't exactly fit their vision.
E.g.
> isSecure = condition4 && !condition5
More often than not the real proper name would be "shouldBeSecureBecauseWeAlsoCheckedCondition3Before"
To a point, avoiding the abstraction and putting a comment instead can have better readability. The author's "smart" code could as well be
```
if (val > someConstant // is valid
&& (condition2 || condition3) // is allowed
&& (condition4 && !condition5) // is secure
) {
...
}
```
Children were always told to cram as much as possible into their memory. It was even claimed that the more you put into memory, the better your mind works.
Not quite. The human mind has evolved to interpret the sensory data collected by the senses and trigger the necessary action. Some of that interpretation uses memory to correlate the perceived data with stored data. That's pretty much it.
Overloading human memory with tons of data unrelated to the context in which the person lives can cause negative effects. I suspect it can also cause faster aging. New experiences and new information are like scales on a tree trunk: as you accumulate more of them, you age more.
Cognitive load is super important and should be optimised for. We all should have as our primary objective the taming of complexity.
I was surprised to find an anti-framework, anti-layering perspective here. The author makes good points: it’s costly to learn a framework, costly to break out of its established patterns, and costly when we tightly couple to a framework’s internals.
But the opposite is also true. Learning a framework may help speed up development overall, with developers leaning on previous work. Well designed frameworks make things easy to migrate, if they are expressive enough and well abstracted. Frameworks prevent bad and non-idiomatic design choices and make things clear to any new coder who is familiar with the framework. They prevent a lot of glue, bad abstractions, cleverness, and non-performant code.
Layering has an indirection cost which did not appeal to me at all as a less experienced developer, but I’ve learnt to appreciate a little layering because it helps make predictable where to look to find the source of a bug. I find it saves time because the system has predictable places for business logic, serialisation, data models, etc.
I completely agree with everything in this article, but it seems like it's just a different way of looking at the well-known software-engineering concept of "complexity." Yeah, the main difference is that cognitive load considers the complication of the system from how it affects the developer, while complexity focuses on the amount of complication in the system itself.
Yeah, if you go through this article and replace most of the places where it mentions "cognitive load" with "complexity," it still makes sense.
Yeah, this isn't a criticism of the article. In fact, there are important differences, like having more of a focus on what the dev experiences handling the complications of the system. But those really interested in this concept may want to learn about complexity too, as there is a lot of great info on it.
I think the viewpoint articulated in this post fits quite well with the one expressed in the often-shared "Programming as Theory-building" article (I think it was shared here just a few days ago).
Scientists, mathematicians, and software engineers are all really doing similar things: they want to understand something, be it a physical system, an abstract mathematical object, or a computer program. Then, they use some sort of language to describe that understanding, be it casual speech, formal mathematical rigor, scientific jargon -- or even code.
In fact, thinking about it, the code specifying a program is just a human-readable description (or "theory", perhaps) of the behavior of that program, precise and rigorous enough that a computer can convert the understanding embodied in that code into that actual behavior. But, crucially, it's human readable: the reason we don't program in machine code is to maximize our and other people's understanding of what exactly the program (or system) does.
From this perspective, when we write code, articles, etc., we should be highly focused on whether our intended audience would even understand what we are writing (at least, in the way that we, the writer, seem to). Thinking about cognitive load seems to be good, because it recognizes this ultimate objective. On the other hand, principles like DRY -- at least when divorced from their original context -- don't seem to implicitly recognize this goal, which is why they can seem unsatisfactory (to me at least). Why shouldn't I repeat myself? Sometimes it is better to repeat myself!? When should I repeat myself??
If you want to see an example of a fabulous mathematician expressing the same ideas in his field (with much better understanding and clarity than I could ever hope to achieve), I highly recommend Bill Thurston's article "On proof and progress in mathematics" <https://arxiv.org/abs/math/9404236>.
I don't know, I'm seduced by the elitist approach: code with a high cognitive load keeps mediocre developers away.
Case in point: Forth. It generally has a heavy cognitive load. However, Forth also enables a radical kind of simplicity. You need to be able to handle the load to access it.
The mind can train to a high cognitive load. It's a nice "muscle" to train.
Should we care about cognitive load? Absolutely. It's a finite budget. But I also think that there are legitimate reasons to accept a high cognitive load for a piece of code.
One might ask "what if you need to onboard mediocre developers into your project?". Hum, yeah, sure. In that case, this article is correct. But being forced to onboard mediocre developers highlights an organizational problem.
Is it possible to have a system prompt for an LLM to follow some of these best practices? Has anyone made a reduced system prompt with these principles?
People can argue about this all day, but one thing is always crystal clear.
Simplicity comes from practice writing and refactoring large code thousands of times. People with limited or shallow experience may think they are good at this but only when they isolate themselves to known patterns of comfort or some giant framework. There is a lot of insecurity there.
Super experienced people, that is, people with lots of practice writing large original applications, don’t think like the pretenders. Simplicity is built in, like muscle memory. They just solve the fucking problem and go drink a beer. There is no memorized pattern nonsense.
The super experienced developers see the pretenders for what they are while the pretenders either can’t see the distinction or just feel hostility at the deviation far outside a memorized convention.
Cognitive load and the benefits of simplification aren't just for systems and code. Reducing cognitive load is critically important in delivering good requirements. It enables engineers to focus on the technical and organization aspects of the solution, not interpreting the problem.
The fact is, despite all the process and pipelines and rituals we've invented to guide how software is made, the best thing leadership can do is to communicate incremental, unambiguous requirements and provide time and space for your engineers to solve the problem. If you don't do that, none of the other meetings and systems and processes and tools will matter.
I would just add the IsAllowed etc. as a comment next to the relevant line.
Often the explanation is bigger than what you'd want in a variable name, I find it less overhead than making more variables, and it makes better use of screen-space.
I'd only lean towards intermediate variables if
a) there are lots of smaller conditionals being aggregated up into bigger conditionals, which makes line-by-line comments insufficient, or
b) I'm reusing the same conditional a lot (this is mostly to draw the reader's attention to the fact that the condition is being re-used); case (b) is sketched below.
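A sketch of case (b), with invented fields: once the same condition gates several branches, naming it both removes the repetition and signals the reuse to the reader.

```python
def summarize(order):
    is_bulk_order = order["quantity"] >= 100 and order["customer_type"] == "wholesale"

    price = order["unit_price"] * order["quantity"]
    if is_bulk_order:
        price *= 0.9                       # bulk discount

    shipping = "freight" if is_bulk_order else "parcel"
    review_needed = is_bulk_order and order["quantity"] >= 1000

    return {"price": price, "shipping": shipping, "review_needed": review_needed}
```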
> business logic and http status codes
Why hold this custom mapping in our working memory? It's better to abstract away your business details from the HTTP transfer protocol, and return self-descriptive codes directly in the response body:
{ "code": "jwt_has_expired" }
While the logic behind it sounds reasonable, REST does the exact opposite with the same goal: simplicity and ease of learning, i.e. reduced mental load.
I know there are other reasons for REST/SOAP/Graphql, etc.
Still makes mental load a somewhat subjective matter to me.
I don't seem to come across such limits when I am doing something I am not supposed to be doing (procrastinating, for example obsessively reading something other than what I should be reading).
Great read. At my last job, everything was quite monolithic when I joined, and I led the crusade to move to more segmented, module-driven development. There was definitely a period where I eventually swung too far in that direction and only realized it after a dependency issue led to an escalation.
Hopefully someone can learn from this before they spin a complex web that becomes a huge effort to untangle.
I love love love monorepo + fat encapsulated modules + a couple of deployables. Why does the software industry create fake complexity/creativity in the craft? Fashion/hype architectures for dopamine and fulfilment. You're just wasting energy finding new ways to create machine code for hardware. Why not get creative in the hardware and actually make something new? Another thing that really grinds my gears is how many human hours are poured into JS frameworks instead of improving browsers. What an utter waste of time. Maybe mediocre engineers like me, who could never design a CPU or extend a browser, need to feel seen by embracing some new domain-whatever concept to make us feel warm and fuzzy, instead of really innovating on the hard things that move the needle.
I think cognitive load has a lot more to do with the paradigm that the code is written in than any particular type of author's contribution to the code. For instance, the object-oriented paradigm by design increases cognitive load by encouraging breaking up otherwise straightforward logic into multiple interfaces, classes, and methods.
I had the exact same experience with layered architectures like the ones described in this article. Avoid them as much as possible; naive and simple code is often better. It might look messy on the surface, and the layered code might look much cleaner, until you drown in indirections that are impossible to keep track of.
Balancing a cup on a tray isn't too hard. The skill comes in when you can balance 10 cups, and a tray on top of them, and then ten more cups, and another tray, and a vase on that... each step isn't difficult, but maintaining the structure is difficult. It's like that, but with ideas.
There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.
In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third party tools to assist them. AI can help the reader understand the code and the coder generate clearer documentation and labels, on top of using linters, test driven development, literate documentation practices, etc.
Feels like the author completely misunderstood at least one of the fundamental and basic concepts of DDD - writing how it is only about the problem and not the solution space, where it is actually very clearly about both - but still decided to write down a very sure judgement of it. Disappointing.
A tip: if you're ever writing an article like this which is essentially do's and don'ts, adopt a consistent format for each. In many of these it's not immediately clear which is the do and which is the don't, creating, ironically, cognitive load for the reader.
I think I'm not smart enough for it. I can't really take anything new away from it, mainly just a message of "we're smart people, and trust us when we say smart things are bad. All the smart sounding stuff you learned about how to program from smart sounding people like us? Lol, that's all wrong now."
Okay, I get the cognitive load is bad, so what's the solution?
"Just do simple dumb stuff, duh." Oh, right... Useful.
The problem is never just the code, or the architecture, or the business, or the cognitive load. It's the mismatch of those things against the people expected to work with them.
Walk into a team full of not-simple engineers, and tell them all what they've been doing is wrong, and they need to just write simple code, some of them will fail, some will walk out, and you'll be no closer to a solution.
I wish I knew of the tech world before 20 years ago, where technical roles were long and stable enough for teams to build their own understanding of a suitable level of complexity. Without that, churn means we all have to aim for the lowest common denominator.
"Domain-driven design has some great points, although it is often misinterpreted”. Agreed. The worst shops I’ve ever worked in are ones where the DDD/evans/fowler orthodoxy has run amok.
“Smart developer’s quirks” tend to peak in 3-8 years of experience and fade off thereafter. A hipster will never fade off and instead continue hipster coding alongside their identity in perpetuity.
Unless you have a quad-core brain like me and can vibecode four separate parts of the project in four terminal tabs. Of course, using your own method. Just kidding, of course.
Still processing this article, but so far enjoying that it opens with some humour, and also shows off logistics ideas that are not locked into one domain if you zoom out. Thank you :)
Speaking only of intrinsic and extraneous cognitive load oversimplifies things. This works for the tasks of pilots (see e.g. NASA-TLX scores).
However, if new information that needs to be learned is in the game, there is also germane cognitive load [0]. It is a nice theory; however, in practice there is unfortunately no easy way to separate the types and look at them totally independently.
This is also a proper framework for evaluating AI replies: they not only need to be appropriate, they also need to demand the minimum cognitive load to parse.
While I think this is generally good advice, I also think reality isn't easy to define.
I like what others would call complexity, I always have, and I have been mindful of that from very, very early on; I think to a fault, since I no longer trust my intuition.
Is it good to try to turn wizards into bricklayers? Is there no other option?
While I support the goal of article, reducing extraneous cognitive load, I think some of the comments, and the article are missing a key point about cognitive load — it depends on the existing mental model the reader/author/developer has about the whole thing. There is no universal truth to reducing cognitive load like reducing abstractions / not relying on frameworks.
Reducing cognitive load doesn't happen in a vacuum where simple language constructs trump abstraction/smart language constructs. Writing code, documents, comments, choosing the right design all depend upon who you think is going to interact with those artifacts, and being able to understand what their likely state of mind is when they interact with those artifacts i.e. theory of mind.
What is high cognitive load is very different, for e.g. a mixed junior-senior-principal high-churn engineering team versus a homogenous team who have worked in the same codebase and team for 10+ years.
I'd argue the examples from the article are not high cognitive load abstractions, but the wrong abstractions that resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason why all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs, so we don't have to manually reimplement them. They also give a team who is using the language (a framework in its own way) a common vocabulary to talk about nouns and verbs related to those constructs. In essence, they reduce the cognitive load once the initial learning phase of the language is done.
Reading through the examples in the article, what is likely wrong is that the decision to abstract/layer/framework is not chosen because of observation/emergent behavior, but rather because "it sounds cool" aka cargo cult programming or resume-driven programming.
If you notice a group of people fumble over the same things over and over again, and then try to introduce a new concept (abstraction/framework/function), and notice that it doesn't improve or makes it harder to understand after the initial learning period, then stop doing it! I know, sunk cost fallacy makes it difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load / wrong abstractions ;)
Like another user said, it depends on each developer's background what's simpler and what's not. For example I have a problem with intermediate variables for improving readability, like isAllowed: it really is more readable, but more often than not in large codebases what the name implies is not what the conditional check is, or it is but it's not exhaustive. So I have to inspect the variable to see what it actually is. The thing is that comments and variable names must be maintained as well as code, so they imply a certain degree of cognitive load to maintain. While a conditional like (condition2 || condition3) looks bad, it still is more straightforward.
Cognitive load may be the dominant form of stress, but it is not the only one. I feel like this is very close to correct but subtly and critically broken.
In particular, when the shit hits the fan, your max cognitive load tanks. Something people who grumble at the amount of foolproofing I prefer often only discover in a crisis. Because they’re used to looking at something the way they look at it while sipping their second coffee of the day. Not when the servers are down and customers are calling angry.
You’ll note that we only see how the control room at NASA functions in movies and TV when there’s a massive crisis going on, or intrigue. Because the rest of the time it’s so fucking boring nobody would watch it.
Don’t bother with this if you want to get promoted. Others have discussed this in thread and are right. If you build beautiful, simplified abstractions, your skill will be taken for granted as these interfaces appear obvious once discovered (by virtue of their proximity to truth, incredibly difficult to create, easy to verify). If you are in even a reasonably large org, go the other way. Be an Architecture astronaut. Build complex, clever stuff that is deliberately high cognitive load. Get your bus-factor as close to one as possible. Go the other way only if your comp is directly tied to company performance.
Introducing intermediate variables is what I call "indirection". You're adding another step to someone reading the code.
Let's take a recipe:
Ingredients:
large bowl
2 eggs
200 grams sugar
500 grams flour
1/2 tsp soda
Steps:
Crack the eggs into a bowl. Add sugar and whisk. Sift the flour. Add the soda.
When following the instruction, you have to always refer back to the ingredients list and search for the quantity, which massively burdens you with "cognitive load". However, if you inline things:
Crack 2 eggs into a large bowl. Add 200g sugar and whisk. Sift 500g of flour. Add 1/2 tsp soda. Now each step carries its own quantity, and the constant back-and-forth to the ingredients list disappears.
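The same contrast rendered as code (a toy example; the analogy holds when each value is used in exactly one place): constants declared far from their use read like the ingredients list, while values at the point of use read like the inlined recipe.

```python
# "Ingredients list" style: the reader scrolls back up for every quantity.
EGGS = 2
SUGAR_G = 200
FLOUR_G = 500
SODA_TSP = 0.5

def make_batter_indirect(bowl):
    bowl.crack_eggs(EGGS)
    bowl.add_sugar(SUGAR_G)
    bowl.sift_flour(FLOUR_G)
    bowl.add_soda(SODA_TSP)

# "Inlined" style: each step carries its own quantity.
def make_batter_inline(bowl):
    bowl.crack_eggs(2)
    bowl.add_sugar(200)
    bowl.sift_flour(500)
    bowl.add_soda(0.5)
```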
At some point, you just need to go with the flow. Worrying about the metacognitive consequences of your work and trying to actively manage this with technological policy will turn into a death spiral. You should always try to take advantage of the momentum in the things around you. Our profession has a reputation for going out of its way to do things like push for a rewrite of a company's codebase after having seen the legacy for 20 minutes. This isn't even a Chesterton's fence conversation. This is crude, impulsive behavior that makes any kind of productive business infeasible.
Also, many developers are suffering from severe cognitive load that is incurred by technology and tooling tribalism. Every day on HN I see complaints about things like 5 RPS scrapers crippling my web app, error handling, et. al., and all I can think about is how smooth my experience is from my particular ivory tower. We've solved (i.e., completely and permanently) 95% of the problems HN complains about decades ago and you can find a nearly perfect vertical of these solutions with 2-3 vendors right now. Your ten man startup not using Microsoft or Oracle or IBM isn't going to make a single fucking difference to these companies. The only thing you win is a whole universe of new problems that you have to solve from scratch again.
Whilst I agree with lots of ideas in this piece, I fell out of love with it when clicking into the discussion on what should be done instead of using a layered architecture.
The author makes valid points but they are vacuous and do not provide concrete alternatives.
Many engineering articles disappoint me in this way, I get hyped by all the “don’t dos”, but the “do dos” never come.
> We need something more fundamental, something that can't be wrong.
What is this bug in software people's brains that keeps thinking "I can come up with a perfect idea that is never wrong" ? Can a psychologist explain this to me please?
Like, scientists know this is dumb. The only way something can be perceived as right, scientifically, is if lots of people independently test an idea, over and over and over and over again, and get the same result. And even then, they just say it's true so far.
But software people over here like "If I spend 15 minutes thinking about an idea, I can come up with a fundamental principle of everything that is always true forever." And sadly the whole "fundamental principle" is based in ignorance. Somebody heard an interesting-sounding term, never actually learned what it meant, but decided to make up their own meaning for it, and find anything else in their sphere (software) that backs up their theory.
If they'd at least quoted any of the academic study and research about cognitive load over the past 35 years, maybe I might be blowing this out of proportion? But nope. This is literally just a clickbait rant, based on vibes, backed up by quotes from blogs. The author doesn't seem to understand cognitive load at all, and their descriptions of what it is, and what you should do in relation to it, are all wrong. The article doesn't even mention all three types of cognitive load. And one of the latest papers on the subject (Orru G., Longo L. (2019)) basically came to the conclusion that 1) the whole thing is very complex, and 2) all the previous research might be bunk or at least need brand new measurement methods, so... why is anyone taking this all as if it's fact?
But I'm not really bothered by the ignorance. It's the ego that kills me. The idea that these random people who know nothing about a subject are rushing to debate this, as if this idea, or these people's contributions, have merit, just because they think they're really smart.
In my experience, writing readable code and writing code that behaves correctly (fulfills the contract/requirements without hiding potential faults) is often mutually exclusive -- most people end up doing one or the other. This is related to the never-ending functional programming vs. "traditional programming" (a target in motion, largely OOP or in the very least "whatever is taught at the graduate schools"), since the former, in contrast to the article which pretty much _assumes_ the latter, doesn't even facilitate "variables", literally or in informal sense (things you can "assign to", whether changing or not).
Anyway, I happen to belong in the latter category according to most -- the longer I have been doing this, the more I lean into the purely functional style, almost mathematical rigor, because I have learned how much (or rather how little) margin there is to introduce subtle errors once you have actual _variables_ that may change freely, which start to encourage you to do other things which in the end contribute to a lack of correctness, readable or not.
Now, you may blame people like me, and I cannot blame you for not having the cognitive load capacity to understand some of the code I write "succinctly", but my point is that for all the merit of the article (yes, I agree code is read much more often than it is written, lending value to the "readability" argument), it doesn't acknowledge the fact readability and correctness are _in practice_ often mutually exclusive. Like, in the field. Because I wager that the tendency is to approach a more mathematical expression style as one becomes better at designing software, with adversarial conditions manifesting in terms of bugs hiding in mutability of state and large, if "simple", bodies of functions, classes (which have methods you cannot guarantee to not mutate the object's state).
We need to find means to write code that is readable but without compromising other factors like mutability which _too_ has been shown to compromise correctness. What good is readable software that never manages to escape the vortex of issues, driving the perpetually busy industry "fixing bugs".
At my place of work, I obviously see both kinds of the "mutually exclusive", and I can tell you without undue pride and yet with good confidence, people who write readable code -- consisting of aliasing otherwise complex expressions with eloquently named variables (or sometimes even "constants", bless their heart), and designing clumsy class hierarchies -- spend a lot of subsequent effort never being able to be "done" with the code, and I don't mean just because requirements keep changing, no -- they sit and essentially "fixup commit" to the code they write, in perpetuity, seemingly. And we have a select few who'd write a code-base with as few variables as possible, with a lot of pure functions -- what I referred to as "mathematical programming" in a way -- and I never hear from them much in the way of "PRs" to fix their earlier mishaps. The message that sends me is pretty clear.
So yeah, by all means, let's find ways to write code our fellow man can understand, but the article glosses over a factor that is at least as important -- all the mutability and care for "cognitive load" capacity (which _may be_ lower for the current generation of software engineers vs earlier ones) may be keeping us in the rotating vortex of bugs we so "proudly" crouch over as we pretend we are "busy". I, for one, prefer to write code that works right from the get-go, and not have to come back to said code unless the requirements which made me write it the way I did change. On a very rare occasion, admittedly, I have to sacrifice readability for correctness, not because it's inherently one or the other, but because I too haven't yet found the perfect means to always have both; and yet correctness is at the absolute top of my list, and I advocate that it should be at the top of yours as well, dare I say so. But that is me -- perhaps I set the bar too high?
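A small sketch of the two shapes being contrasted, with toy data: mutable accumulation, where correctness depends on every line leaving the variable in a valid intermediate state, versus a pure expression with no intermediate state to get wrong.

```python
orders = [{"total": 120, "cancelled": False},
          {"total": 80,  "cancelled": True},
          {"total": 200, "cancelled": False}]

# Mutable style: the reader tracks the evolving value of `revenue`.
revenue = 0
for o in orders:
    if not o["cancelled"]:
        revenue += o["total"]

# Expression style: denser to read, but nothing mutates, so nothing to track.
revenue_pure = sum(o["total"] for o in orders if not o["cancelled"])

assert revenue == revenue_pure == 320
```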
As a coder, these are exactly the problems that shouldn’t cause problems for you. You should be a coder because you’re better at these problems than others.
More coders are needed than those to whom these things are “simple”, I understand. But if you have problems with these, I would definitely try to pivot to something else, like managerial positions. Especially with AI upon us. Of course, if you are fine being an “organic robot”, then it’s fine, but you’ll never really get why this profession is awesome. You’ll never have the leverage.
Cognitive load is what matters
(github.com)1580 points by nromiun 30 August 2025 | 526 comments
Comments
For instance, the article itself suggests to use early/premature returns, while they are sometimes compared to "goto", making the control flow less obvious/predictable (as paxcoder mentioned here). Intermediate variables, just as small functions, can easily complicate reading of the code (in the example from the article, one would have to look up what "isSecure" means, while "(condition4 && !condition5)" would have shown it at once, and an "is secure" comment could be used to assist skimming). As for HTTP codes, those are standardized and not dependent on the content, unlike custom JSON codes: most developers working with HTTP would recognize those without additional documentation. And it goes on and on: people view different things as good practices and being simpler, depending (at least in part) on their backgrounds. If one considers simplicity, perhaps it is best to also consider it as subjective, taking into account to whom it is supposed to look simple. I think sometimes we try to view "simple" as something more objective than "easy", but unless it is actually measured with something like Kolmogorov complexity, the objectivity does not seem to be there.
Basically, you should aim to minimise complexity in software design, but importantly, complexity is defined as "how difficult is it to make changes to it". "How difficult" is largely determined by the amount of cognitive load necessary to understand it.
I'm both bothered and intrigued by the industry returning to, what I call, "pile-of-if-statements architecture". It's really easy to think it's simple, and it's really easy to think you understand, and it's really easy to close your assigned Jira tickets; so I understand why people like it.
People get assigned a task, they look around and find a few places they think are related, then add some if-statements to the pile. Then they test; if the tests fail they add a few more if-statements. Eventually they send it to QA; if QA finds a problem, another quick if-statement will solve the problem. It's released to production, and it works for a high enough percentage of cases that the failure cases don't come to your attention. There's approximately 0% chance the code is actually correct. You just add if-statements until you asymptotically approach correctness. If you accidentally leak the personal data of millions of people, you wont be held responsible, and the cognitive load is always low.
But the thing is... I'm not sure there's a better alternative.
You can create a fancy abstraction and use a fancy architecture, but I'm not sure this actually increases the odds of the code being correct.
Especially in corporate environments--you cannot build a beautiful abstraction in most corporate environments because the owners of the business logic do not treat the business logic with enough care.
"A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?
You can't build careful bug-free abstractions in corporate environments.
So, is pile-of-if-statements the best we can do for business software?
Microsoft had three personas for software engineers that were eventually retired for a much more complex persona framework called people in context (the irony in relation to this article isn’t lost on me).
But those original personas still stick with me and have been incredibly valuable in my career to understand and work effectively with other engineers.
Mort - the pragmatic engineer who cares most about the business outcome. If a “pile of if statements” gets the job done quickly and meets the requirements - Mort became a pejorative term at Microsoft unfortunately. VB developers were often Morts, Access developers were often Morts.
Elvis - the rockstar engineer who cares most about doing something new and exciting. Being the first to use the latest framework or technology. Getting visibility and accolades for innovation. The code might be a little unstable - but move fast and break things right? Elvis also cares a lot about the perceived brilliance of their code - 4 layers of abstraction? That must take a genius to understand and Elvis understands it because they wrote it, now everyone will know they are a genius. For many engineers at Microsoft (especially early in career) the assumption was (and still is largely) that Elvis gets promoted because Elvis gets visibility and is always innovating.
Einstein - the engineer who cares about the algorithm. Einstein wants to write the most performant, the most elegant, the most technically correct code possible. Einstein cares more if they are writing “pythonic” code than if the output actually solves the business problem. Einstein will refactor 200 lines of code to add a single new conditional to keep the codebase consistent. Einsteins love love love functional languages.
None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives - but I can usually pin one of these 3 as the primary within a few days of PRs and a single design review.
The second component, frequency of change is equally important as when faced with tradeoffs, we can push high cognitive load to components edited less frequently (eg: lower down the stack) in exchange for lower cognitive load in the most frequently edited components.
It's also why I urge junior engineers to not rely on AI so much because even though it makes writing code so much faster, it prevents them from learning the quirks of the codebase and eventually they'll lose the ability to write code on their own.
So in software development there may be an argument to always structure projects the same way. Standards are good — even when they're bad! because one of their main benefit is familiarity.
Reducing cognitive load comes from the code that you don't have to read. Boundaries between components with strong guarantees let you reason about a large amount of code without ever reading it. Making a change (which the article uses as a benchmark) is done in terms of these clear APIs instead of with all the degrees of freedom available in the codebase.
If you are using small crisp API boundaries to break up the system, "smart developer quirks" don't really matter very much. They are visible in the volume, but not in the surface area.
I have a hard time separating the why and the what so I document both.
The biggest offender of "documenting the what" is:
Yeah, don't do that. Don't mix a lot of comments into the code. It makes it ugly to read, and the context switching between code and comments is hard.Instead do something like:
Keep the code and comments separate, but stating the what is better than no comments at all, and it does help reduce cognitive load."A single page on Doordash can make upward of 1000 gRPC calls (see the interview). For many engineers, upward of a thousand network calls nicely illustrate the chaos and inefficiency unleashed by microservices. Engineers implicitly diff 1000+ gRPC calls with the orders of magnitude fewer calls made by a system designed by an architect looking at the problem afresh today. A 1000+ gRPC calls also seem like a perfect recipe for blowing up latency. There are more items in the debit column. Microservices can also increase the costs of monitoring, debugging, and deployment (and hence cause greater downtime and worse performance)."
For those asking why author doesn't come up with their own new rules that can then be followed, this would just be trading a problem for the same problem. Absentmindedly following rules. Writing accessible code, past a few basic guidelines, becomes tacit knowledge. If you write and read code, you'll learn to love some and hate some. You'll also develop a feel for heavy handedness. Author said it best:
> It's not imagined, it's there and we can feel it.
We can feel it. Yes, having to make decisions while coding is an uncomfortable freedom. It requires you to be present. But you can get used to it if you try.
It completely removes the stress of doing things repeatedly. I recently had to do something I hadn't done in 2 years. Yep, the checklist/doc on it was 95% correct, but it was no problem fixing the 5%.
If you try to do it algorithmically, you arguably won't find a simple expression. It's often glossed over how readability in one axis can drive complexities along another axis, especially when composing code into bite-size readable chunks the actual logic easily gets smeared across many (sometimes dozens) of different functions, making it very hard to figure out what it actually does, even though all the functions check all the boxes for readability, having a single responsibility, etc.
E.g. is userAuthorized(request) is true but why is it true? Well because usernamePresent(request) is true and passwordCorrect(user) is true, both of which also decompose into multiple functions and conditions. It's often a smaller cognitive load to just have all that logic in one place, even if it's not the local optimum of readability it may be the global one because needing to constantly skip between methods or modules to figure out what is happening is also incredibly taxing.
The issue with this stance is that it's not a zero-sum game. There's no arriving at a point where there isn't a cognitive load on the task you're doing; there will always be some sort of load. Pushing things off so that you reduce your load is how Social Security databases end up on S3.
Confusion comes from complexity, not from high cognitive load. You can have a high load and still know how it all works. I would word it as: cognitive load increases stress, because you have more things to wrestle with in your head. It doesn't add or remove confusion (unless that's the kind of person you are); it just adds or removes complexity.
An example of a highly complex thing with little to no cognitive load (thanks to conditioning): driving an automobile. A not-complex thing that imparts a huge cognitive load: golf.
I could be adding a new feature six months later, or debugging a customer reported issue a week later. Especially in the latter case, where the pressure is greater and available time more constrained, I love that earlier/younger me was thoughtful enough to take the extra time to make things clear.
That this might help others is lagniappe.
There is a simplifying force: the engineers on the project who care about long-term productivity. Work to simplify the code is rarely tracked or rewarded, which is a problem across our industry. Most codebases I've worked in had some large low-hanging fruit for increasing team productivity, but it's hard to show the impact of that work, so it never gets done.
We need an objective metric of codebase cognitive complexity. Then folks can take credit for making the number go down.
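As a sketch of what "making the number go down" could even mean, here is a deliberately crude proxy, invented for illustration (real cognitive-complexity rules in linters are more nuanced): count branching keywords, weighted by how deeply they are nested.

    // Crude proxy: each branch point costs 1 plus its nesting depth.
    // Just enough to watch a number fall across a refactoring PR, nothing more.
    function crudeCognitiveScore(source: string): number {
      let score = 0;
      let depth = 0;
      for (const line of source.split("\n")) {
        const branches = (line.match(/\b(if|else|for|while|case|catch)\b/g) ?? []).length;
        score += branches * (1 + depth);
        const opens = (line.match(/{/g) ?? []).length;
        const closes = (line.match(/}/g) ?? []).length;
        depth = Math.max(0, depth + opens - closes);
      }
      return score;
    }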
Junior programmers too often make the mistake of thinking the code they write is intended for consumption by the machine.
Coding is an exercise in communication. Either to your future self, or some other schmuck down the line who inherits your work.
When I practice the craft, I want to make sure years down the line when I inevitably need to crack the code back open again, I'll understand what's going on. When you solve a problem, create a framework, build a system... there's context you construct in your head as you work the project or needle out the shape of the solution. Strive to clearly convey intent (with a minimum of cognitive load), and where things get more complicated, make it as painless as possible for the next person to take the context that was in your head and reconstruct it in their own. Taking the patterns in your brain and recreating them in someone else's brain is in fact the essence of communication. In practice, this could mean including meaningful inline comments or accompanying documentation (eg. approach summary, drawings, flowcharts, state change diagrams, etc). Whatever means you have to efficiently achieve that aim. If it helps, think of yourself as a teacher trying to teach a student how your invention works.
On the other end of the spectrum you hear sentences starting with: "It would help me to understand this more easily, if ...".
Guess what happens over time in these teams?
To be fair, the HTTP status line allows for arbitrary informational text, so something like “HTTP/1.1 401 JWT token expired” would be perfectly allowable.
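A minimal sketch using Node's built-in http module (the 401 and the wording are just examples); writeHead accepts an optional reason phrase alongside the status code:

    import { createServer } from "node:http";

    // Sends a status line of:  HTTP/1.1 401 JWT token expired
    createServer((req, res) => {
      res.writeHead(401, "JWT token expired", { "Content-Type": "application/json" });
      res.end(JSON.stringify({ error: "jwt_expired" }));
    }).listen(8080);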
It is always worth it for a programmer to dwell on what complexity is according to Ousterhout; it is worth it to reason about what Uncle Bob thinks "clean" code is, etc. I don't benefit from either by applying what they say dogmatically, but I improve my taste in what good software is, to me, by discovering and trying many approaches. Without reading them I might never have even thought of a particular solution, or a particular frame of mind.
In particular, these variables need to be extremely well named, otherwise people reading the code will still need to remember what exactly was abstracted whenever the wording doesn't exactly fit their vision. E.g.
> isSecure = condition4 && !condition5
More often than not the real proper name would be "shouldBeSecureBecauseWeAlsoCheckedCondition3Before"
To a point, avoiding the abstraction and putting a comment there instead can have better readability. The author's "smart" code could just as well be:
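Perhaps something like this (condition4/condition5 being the article's own placeholders):

    declare const condition4: boolean, condition5: boolean; // placeholders from the article

    if (condition4 && !condition5) { // is secure
      // ... handle the secure path
    }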
Not quite. The human mind evolved to interpret sensory data and trigger the necessary action. Some of that interpretation uses memory to correlate the perceived data with remembered data. That's pretty much it.
Overloading human memory with tons of data unrelated to the context in which the person lives can cause negative effects. I suspect it can also cause faster aging. New experiences and new information are like scales on a tree trunk: as you accumulate more of them, you age more.
https://news.ycombinator.com/item?id=42489645 (721 comments)
I was surprised to find an anti-framework, anti-layering perspective here. The author makes good points: it’s costly to learn a framework, costly to break out of its established patterns, and costly when we tightly couple to a framework’s internals.
But the opposite is also true. Learning a framework may help speed up development overall, with developers leaning on previous work. Well designed frameworks make things easy to migrate, if they are expressive enough and well abstracted. Frameworks prevent bad and non-idiomatic design choices and make things clear to any new coder who is familiar with the framework. They prevent a lot of glue, bad abstractions, cleverness, and non-performant code.
Layering has an indirection cost which did not appeal to me at all as a less experienced developer, but I’ve learnt to appreciate a little layering because it helps make predictable where to look to find the source of a bug. I find it saves time because the system has predictable places for business logic, serialisation, data models, etc.
Status code: 200, body: { "success": false, "error": "..." }
Yeah, if you go through this article and replace most of the places where it mentions "cognitive load" with "complexity," it still makes sense.
Yeah, and this isn't a criticism of the article; in fact, there are important differences, like its greater focus on what the developer experiences while handling the complications of the system. But those really interested in the concept may want to read up on complexity too, since there is a lot of great material on it.
Scientists, mathematicians, and software engineers are all really doing similar things: they want to understand something, be it a physical system, an abstract mathematical object, or a computer program. Then, they use some sort of language to describe that understanding, be it casual speech, formal mathematical rigor, scientific jargon -- or even code.
In fact, thinking about it, the code specifying a program is just a human-readable description (or "theory", perhaps) of the behavior of that program, precise and rigorous enough that a computer can convert the understanding embodied in that code into that actual behavior. But, crucially, it's human readable: the reason we don't program in machine code is to maximize our and other people's understanding of what exactly the program (or system) does.
From this perspective, when we write code, articles, etc., we should be highly focused on whether our intended audience would even understand what we are writing (at least, in the way that we, the writer, seem to). Thinking about cognitive load seems to be good, because it recognizes this ultimate objective. On the other hand, principles like DRY -- at least when divorced from their original context -- don't seem to implicitly recognize this goal, which is why they can seem unsatisfactory (to me at least). Why shouldn't I repeat myself? Sometimes it is better to repeat myself!? When should I repeat myself??
If you want to see an example of a fabulous mathematician expressing the same ideas in his field (with much better understanding and clarity than I could ever hope to achieve), I highly recommend Bill Thurston's article "On proof and progress in mathematics" <https://arxiv.org/abs/math/9404236>.
Case in point: Forth. It generally has a heavy cognitive load. However, Forth also enables a radical kind of simplicity. You need to be able to handle the load to access it.
The mind can train to a high cognitive load. It's a nice "muscle" to train.
Should we care about cognitive load? Absolutely. It's a finite budget. But I also think that there are legitimate reasons to accept a high cognitive load for a piece of code.
One might ask "what if you need to onboard mediocre developers into your project?". Hum, yeah, sure. In that case, this article is correct. But being forced to onboard mediocre developers highlights an organizational problem.
Simplicity comes from practice: writing and refactoring large codebases thousands of times. People with limited or shallow experience may think they are good at this, but only because they isolate themselves to known patterns of comfort or some giant framework. There is a lot of insecurity there.
Super experienced people, that is, people with lots of practice writing large original applications, don't think like the pretenders. Simplicity is built in like muscle memory. They just solve the fucking problem and go drink a beer. There is no memorized-pattern nonsense.
The super experienced developers see the pretenders for what they are while the pretenders either can’t see the distinction or just feel hostility at the deviation far outside a memorized convention.
The fact is, despite all the process and pipelines and rituals we've invented to guide how software is made, the best thing leadership can do is to communicate incremental, unambiguous requirements and provide time and space for your engineers to solve the problem. If you don't do that, none of the other meetings and systems and processes and tools will matter.
I'd only lean towards intermediate variables if a) there's lots of smaller conditionals being aggregated up into bigger conditionals which makes line-by-line comments insufficient or b) I'm reusing the same conditional a lot (this is mostly to draw the reader's attention to the fact that the condition is being re-used).
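A small sketch of case (a), with invented names, where the intermediates give each cluster of conditions a name and flag the one that gets reused:

    declare const user: { emailVerified: boolean; locked: boolean; mfaEnrolled: boolean };
    declare const request: { fromTrustedNetwork: boolean; hasValidSession: boolean };

    // Smaller conditionals aggregated into bigger ones.
    const accountInGoodStanding = user.emailVerified && !user.locked;      // reused below
    const strongContext = request.fromTrustedNetwork || user.mfaEnrolled;

    const canLogIn = accountInGoodStanding && request.hasValidSession;
    const canChangePassword = accountInGoodStanding && strongContext;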
While the logic behind it sounds reasonable, REST does the exact opposite with the same goal: simplicity and ease of learning, i.e. reduced mental load. I know there are other reasons for REST/SOAP/GraphQL, etc. It still makes mental load a somewhat subjective matter to me.
Hopefully someone can learn from this before they spin a complex web that becomes a huge effort to untangle.
I literally relaxed in my body when I read this. It was like a deep sigh of cool relief in my soul.
"Cognitive Load" is a buzzword which is abstract.
Cognitive Load is just one factor of projects, and not the main one.
Focus on solving problems, not Cognitive Load, or other abstract concepts.
Use the simple, direct, and effective method to solve problems.
Cognitive Load is relative, it is a high Cognitive Load for one person, but low cognitive load for another person for the same thing.
The Programmer's Brain: What every programmer needs to know about cognition. By Felienne Hermans
https://www.manning.com/books/the-programmers-brain
Finding flow while coding is a juggling act to keep things in the Goldilocks zone: not too hard, not too easy.
This is tricky on an individual level and even trickier for a team / project.
Coding is communicating how to solve a problem to yourself, your team, stakeholders and lastly the computer.
The Empathic Programmer?
Balancing a cup on a tray isn't too hard. The skill comes in when you can balance 10 cups, and a tray on top of them, and then ten more cups, and another tray, and a vase on top of that... each step isn't difficult, but maintaining the structure is. It's like that, but with ideas.
There are separate contexts involved here: the coder, the compiler, the runtime, a person trying to understand the code (context of this article), etc. What's better for one context may not be better for another, and programming languages favor certain contexts over others.
In this case, since programming languages primarily favor making things easier for the compiler and have barely improved their design and usability in 50 years, both coders and readers should employ third party tools to assist them. AI can help the reader understand the code and the coder generate clearer documentation and labels, on top of using linters, test driven development, literate documentation practices, etc.
How real is this use case? Unless you switch projects really often, this is like a week per two years.
Perhaps we should focus on solving problems that are hard by nature, not by experience of a developer or other external factors.
I think I'm not smart enough for it. I can't really take anything new away from it, mainly just a message of "we're smart people, and trust us when we say smart things are bad. All the smart sounding stuff you learned about how to program from smart sounding people like us? Lol, that's all wrong now."
Okay, I get the cognitive load is bad, so what's the solution?
"Just do simple dumb stuff, duh." Oh, right... Useful.
The problem is never just the code, or the architecture, or the business, or the cognitive load. It's the mismatch of those things against the people expected to work with them.
Walk into a team full of not-simple engineers and tell them that what they've been doing is wrong and they need to just write simple code: some of them will fail, some will walk out, and you'll be no closer to a solution.
I wish I had known the tech world of 20+ years ago, when technical roles were long and stable enough for teams to build their own understanding of a suitable level of complexity. Without that, churn means we all have to aim for the lowest common denominator.
However, if new information needs to be learned, there is also germane cognitive load [0]. It is a nice theory; in practice, however, there is unfortunately no easy way to separate the types and look at them totally independently.
[0] https://mcdreeamiemusings.com/blog/2019/10/15/the-good-the-b...
I like what others would call complexity; I always have, and from very early on I have been mindful of that, I think to a fault, since I no longer trust my intuition.
Is it good to try to turn wizards into bricklayers? Is there no other option?
Certainly giving me some pause for thought, in my own work.
https://youtu.be/SxdOUGdseq4
Reducing cognitive load doesn't happen in a vacuum where simple language constructs trump abstraction/smart language constructs. Writing code, documents, comments, choosing the right design all depend upon who you think is going to interact with those artifacts, and being able to understand what their likely state of mind is when they interact with those artifacts i.e. theory of mind.
What counts as high cognitive load is very different for, e.g., a mixed junior-senior-principal, high-churn engineering team versus a homogeneous team that has worked together in the same codebase for 10+ years.
I'd argue the examples from the article are not high-cognitive-load abstractions, but the wrong abstractions, which resulted in high cognitive load because they didn't make things simpler to reason about. There's a reason all modern standard libraries ship with standard list/array/set/hashmap/string/date constructs: so we don't have to manually reimplement them. They also give a team using the language (a framework in its own right) a common vocabulary for the nouns and verbs related to those constructs. In essence, they reduce cognitive load once the initial learning phase of the language is done.
Reading through the examples in the article, what is likely wrong is that the decision to abstract/layer/framework is not chosen because of observation/emergent behavior, but rather because "it sounds cool" aka cargo cult programming or resume-driven programming.
If you notice a group of people fumble over the same things over and over again, then introduce a new concept (abstraction/framework/function), and find that it doesn't improve things, or even makes them harder to understand after the initial learning period, then stop doing it! I know, the sunk cost fallacy makes it difficult after you've spent 3 months convincing your PM/EM/CTO that a new framework might help, but then you have bigger problems than high cognitive load / wrong abstractions ;)
In particular, when the shit hits the fan, your max cognitive load tanks. Something people who grumble at the amount of foolproofing I prefer often only discover in a crisis. Because they’re used to looking at something the way they look at it while sipping their second coffee of the day. Not when the servers are down and customers are calling angry.
You’ll note that we only see how the control room at NASA functions in movies and TV when there’s a massive crisis going on, or intrigue. Because the rest of the time it’s so fucking boring nobody would watch it.
Let's take a recipe:
When following the instructions, you constantly have to refer back to the ingredients list and search for the quantity, which massively burdens you with "cognitive load". However, if you inline things, e.g. "add the sugar (50 g)" instead of "add the sugar", it's much easier to follow!

Also, many developers are suffering from severe cognitive load incurred by technology and tooling tribalism. Every day on HN I see complaints about things like 5 RPS scrapers crippling my web app, error handling, et al., and all I can think about is how smooth my experience is from my particular ivory tower. We solved (i.e., completely and permanently) 95% of the problems HN complains about decades ago, and you can find a nearly perfect vertical of these solutions from 2-3 vendors right now. Your ten-man startup not using Microsoft or Oracle or IBM isn't going to make a single fucking difference to these companies. The only thing you win is a whole universe of new problems that you have to solve from scratch again.
The author makes valid points but they are vacuous and do not provide concrete alternatives.
Many engineering articles disappoint me in this way, I get hyped by all the “don’t dos”, but the “do dos” never come.
What is this bug in software people's brains that keeps thinking "I can come up with a perfect idea that is never wrong" ? Can a psychologist explain this to me please?
Like, scientists know this is dumb. The only way something can be perceived as right, scientifically, is if lots of people independently test an idea, over and over and over and over again, and get the same result. And even then, they just say it's true so far.
But software people over here are like, "If I spend 15 minutes thinking about an idea, I can come up with a fundamental principle of everything that is always true forever." And sadly the whole "fundamental principle" is based in ignorance: somebody heard an interesting-sounding term, never actually learned what it meant, decided to make up their own meaning for it, and then found anything else in their sphere (software) that backs up their theory.
If they'd at least quoted any of the academic study and research about cognitive load over the past 35 years, maybe I might be blowing this out of proportion? But nope. This is literally just a clickbait rant, based on vibes, backed up by quotes from blogs. The author doesn't seem to understand cognitive load at all, and their descriptions of what it is, and what you should do in relation to it, are all wrong. The article doesn't even mention all three types of cognitive load. And one of the latest papers on the subject (Orru G., Longo L. (2019)) basically came to the conclusion that 1) the whole thing is very complex, and 2) all the previous research might be bunk or at least need brand new measurement methods, so... why is anyone taking this all as if it's fact?
But I'm not really bothered by the ignorance. It's the ego that kills me. The idea that these random people who know nothing about a subject are rushing to debate this, as if this idea, or these people's contributions, have merit, just because they think they're really smart.
Anyway, I happen to belong in the latter category according to most: the longer I have been doing this, the more I lean into the purely functional style, almost mathematical rigor, because I have learned how little margin there is before subtle errors creep in once you have actual _variables_ that may change freely, which start to encourage you to do other things that in the end detract from correctness, readable or not.
Now, you may blame people like me, and I cannot blame you for not having the cognitive load capacity to understand some of the code I write "succinctly". But my point is that, for all the merit of the article (yes, I agree code is read much more often than it is written, which lends value to the "readability" argument), it doesn't acknowledge that readability and correctness are _in practice_ often mutually exclusive. Like, in the field. Because I wager the tendency is to approach a more mathematical expression style as one becomes better at designing software, with the adversarial conditions manifesting as bugs hiding in mutable state and in large, if "simple", bodies of functions and classes (whose methods you cannot guarantee will not mutate the object's state).
We need to find ways to write code that is readable without compromising on other factors, like mutability, which _too_ has been shown to compromise correctness. What good is readable software that never manages to escape the vortex of issues driving the perpetually busy industry of "fixing bugs"?
At my place of work, I obviously see both kinds of the "mutually exclusive", and I can tell you, without undue pride and yet with good confidence: the people who write readable code -- aliasing otherwise complex expressions with eloquently named variables (or sometimes even "constants", bless their hearts) and designing clumsy class hierarchies -- spend a lot of subsequent effort never being able to be "done" with the code. And I don't mean just because requirements keep changing; no, they sit and essentially "fixup commit" the code they write, seemingly in perpetuity. We also have a select few who write a code-base with as few variables as possible and a lot of pure functions -- what I referred to as "mathematical programming" -- and I never see much from them in the way of PRs fixing their earlier mishaps. The message that sends me is pretty clear.
So yeah, by all means, let's find ways to write code our fellow man can understand, but the article glosses over a factor that is at least as important: all the mutability and the catering to "cognitive load" capacity (which _may be_ lower for the current generation of software engineers than for earlier ones) may be what keeps us in the rotating vortex of bugs we so "proudly" crouch over as we pretend to be "busy". I, for one, prefer to write code that works right from the get-go, and not have to come back to it unless the requirements that made me write it the way I did change. On rare occasions, admittedly, I have to sacrifice readability for correctness -- not because it's inherently one or the other, but because I too haven't yet found the perfect means to always have both. Correctness is at the absolute top of my list, and I advocate that it should be at the top of yours as well, dare I say. But that is me -- perhaps I set the bar too high?
More coders are needed than those to whom these things are "simple", I understand. But if you have problems with these, I would definitely try to pivot to something else, like a managerial position. Especially with AI upon us. Of course, it's fine if you're content to be an "organic robot", but you'll never really get why this profession is awesome. You'll never have the leverage.