Most of our thinking is automated, enabling us to make rapid-fire decisions on the fly. But at what cost? In the second piece of our crash course in behavioural science, we dive into the how, what and why of decision making, exploring Dual Process Theory, system 1 error and simple responses within the customer journey.
In my previous article, Behavioural Science 101: Influence, Nudging and Choice Architecture, I discussed the importance of influence and gave a very high-level view of nudging in the context of a customer journey. Now it’s time to go deeper. To become an effective choice architect, you must understand the systems at work when we make decisions. In this post, we begin with a high-level overview of behavioural science in general, before getting stuck into Dual Process Theory, heuristic bias and system 1 error. By the end of this post, you’ll recognise when you and your customers are thinking fast or slow, understand the automatic nature of cognitive errors, and know how to build simple responses into your business practices.
Behavioural and Decision Science – a little background
On the face of it, decision making seems like a difficult thing to get your head around. And when you really drill down into what kind of a problem a decision or choice is, the dimensions of the problem quickly spiral. It’s not just the number of factors involved, but their interactions. Problem types, external stimuli, probabilities, mental states, physiology, prior beliefs and many other elements are each interwoven into our conscious and subconscious processes of decision making.
Attempting to understand the nuances of this mental soup, and how it plays out at the micro level of choice, is to enter the realm of mathematicians, psychologists and neuroscientists. Fortunately, practical applications of modern psychological research can be employed by anyone – a little knowledge of the fundamentals is enough to get started.
The same cannot be said for previous models of choice. Classical theorists painted humans in a very dim light, their models of choice portraying ‘homo sapiens’ more like ‘homo economicus’: perfectly rational, calculated and self-interested decision makers. These theories didn’t do very well at explaining actual behaviour, though, and modelling interactions below the macro level was reliably ineffective.
Times have changed. Our thinking selves are more accurately described as ‘homo emotionalis’, acknowledging the growing evidence of our irrational biases, probabilistic patterns and emotive propensity towards error. The frontrunners of this change were Nobel prize winner Daniel Kahneman and his long-time colleague Amos Tversky, whose work transformed our practical understanding of the mind. Together, they gave rise to the multidisciplinary subject that’s popularly known as behavioural economics.
Getting started
There’s a lot to get to grips with when trying to understand how the brain works. But you don’t need to understand every modern advancement in cognitive neuroscience to develop a toolkit for nudging! For now, let’s swerve the realm of higher academics and keep only two basic principles in mind: first, that people seem to use two different and complementary systems for decision making; and second, that these systems are prone to habitual error. For the uninitiated, these two principles should serve as a basis for putting insights into practice – and as a three-minute crash course in behavioural science.
The first principle: Dual Process Theory
The first thing to keep in mind is that decision making may be split between two broad systems, each working in tandem with the other – system 1 [fast/involuntary thinking] and system 2 [slow/deliberate thinking]1. There’s more nuance to how the mind works than this single interpretation: these two systems may contain a series of modular child systems, and some have argued for a third system operating in the gaps between the two2 – but for our purposes, basic dual process theory should serve as a good introductory basis.
System 1 – Thinking Fast
System 1 refers to fast, emotional, intuitive and automatic thinking. It’s what works constantly to form the subconscious impressions, feelings and associations that guide everything we do. It makes up the vast majority of all thought, used to make quick, involuntary decisions and complete habituated actions: from tying shoelaces, to pulling away from a fire, to avoiding an accident on the motorway.
It’s also ballistic – once we start a system 1 action, we can rarely stop. It’s the hero that gets us through our day to day, but system 1’s speed can also cause us problems.
System 2 – Thinking Slow
System 2 is the slower, more analytical and reasonable sibling of system 1. This mode of thought requires mental effort and is associated with the subjective experience of agency – thinking thoughts! It’s responsible for deliberate information recall, no matter how trivial or complicated the task.
This is the sort of thinking you do when trying to remember a phone number or picking the best deal for a mobile phone upgrade. It’s our default mode for difficult, longer term decisions. But it also works to rationalise certain decisions made by system 1, as we’ll see in the next post when we look at confirmation bias.
These two complementary systems can be used as a practical framework for understanding the way people approach problems. And when you begin to think about your own decisions in this way, it can become quite the rabbit hole. In fact, it’s excruciatingly difficult to maintain an awareness of these systems in everyday life – which is why influencing strategies that take dual process theory into account are so effective.
Because, whilst accurate most of the time, our reliance on automatic processes does lead to a systematic tendency towards error. This is the second principle to keep in mind – through automaticity, we are prone to error. And it’s through exploitation of this error that we become vulnerable to external influence.
The second principle: Automaticity leads to error
Because 90% of what we do is automatic, we sometimes make errors. And these errors may come about in many ways. For system 1, many of them stem from what Kahneman calls WYSIATI – “what you see is all there is”…
System 1 processes information very quickly, but it only constructs a narrative or decision structure based on what is in front of us in that very moment. This story is built from what we feel and perceive in the present, informed by our pre-existing biases and prior experiences. The representation can be incredibly accurate and reliable – since, through skill acquisition and learning, complex system 2 thinking can become automated!
Consider a Formula 1 driver hurtling along a racetrack. The driver carries out a multitude of complex decisions concurrently, with the accuracy of each decision critically important not only to the race, but to their survival! The consistent precision of this kind of decision making is amazing! But what’s more incredible is that many of the driver’s decisions have become automated over time, their embedded expertise the product of years of experience and practice.
The automation of decision making is made possible by mental shortcuts that speed up information processing. For most tasks in our day-to-day lives these shortcuts are reliable, but as the complexity of a task increases, so too does the likelihood of inaccuracy and error.
Heuristic Bias
Behavioural scientists call these shortcuts heuristics. They are simple, automatic strategies or mental processes that people use to quickly form judgements, make decisions and find solutions to complex problems. A natural by-product of heuristics is bias, which is a cause of error in any system based on generalisation. The growing acknowledgement of these biases has led marketers, behavioural scientists and policy makers to begin incorporating intentional, behavioural design into their work. In doing so, choice architects exploit our natural tendencies, appealing to simple, habitual bias.
This influence may be positive or negative and is more widespread than you might expect. And whilst it’s tempting to think that only our super-fast system 1 would be prone to it, that’s not the case, as we’ll see in the next post on system 2 errors.
To get an idea of how bias works, we’ll go through a couple of examples before moving on to some commercial applications.
System 1 error: WYSIATI & Heuristics
Consider the following problem and try to answer as quickly as you can:
If it takes 10 whales 10 hours to eat 10 tonnes of krill, how long will it take 100 whales to eat 100 tonnes of krill?
The answer may surprise you. You might think it’s 100 hours, but it’s actually 10 – read the problem again and you’ll spot the schoolboy error: each whale eats one tonne every 10 hours, so 100 whales get through 100 tonnes in the same 10 hours. This is an example of system 1 thinking and WYSIATI leading to error. Your brain perceives a pattern and naturally inserts what seems to be the appropriate number. If the problem had been more complicated, there’s a good chance system 2 would have got involved to solve it – but no such luck for most people! And these problems can be even trickier than this...
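If you want to sanity-check the arithmetic, here’s a tiny sketch – the function name is mine, the numbers are just the puzzle’s, played back as a rate calculation:

```python
# Each whale eats at a constant rate, so the answer depends only on tonnes per whale.
def eating_time(whales, tonnes, tonnes_per_whale_per_hour):
    return tonnes / (whales * tonnes_per_whale_per_hour)

rate = 10 / (10 * 10)  # 10 whales eat 10 tonnes in 10 hours -> 0.1 tonnes per whale per hour
print(eating_time(10, 10, rate))    # 10.0 hours
print(eating_time(100, 100, rate))  # still 10.0 hours - ten times the whales, ten times the krill
```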
Let’s try one that’s a little more complicated:
There’s a lily patch in a local pond. Each day the lily patch doubles in size. If it takes 48 days for the lily patch to cover the entire pond, how long will it take to cover half?
The correct answer might not be what you expect. With some conscious effort, working back from 48 days tells us that the lily patch will take 47 days to cover half the pond – since the patch doubles each day, it must cover half on the day before it covers the whole thing. And yet many of us will fail this exercise, mistakenly answering 24 on the first try – myself included!
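Again, a few lines of code make the logic plain – this is just my own quick check, stepping back one doubling at a time:

```python
# The patch doubles every day, so one day before full coverage it covers exactly half.
coverage = 1.0  # fraction of the pond covered on day 48
day = 48
while coverage > 0.5:
    coverage /= 2  # undo one day's doubling
    day -= 1
print(day)  # 47
```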
If you found yourself mistaken in either problem, don’t fret. In a study by Tay et al.4 of 128 second-, third- and fourth-year medical students, around half incorrectly relied on system 1 and made system 1 errors across a small battery of cognitive reasoning tasks, which included the lily pond question. Their analysis aimed to highlight the importance of system 1 decision making in life-and-death medical situations, and our vulnerability to error when unprepared. Through practice and experience, we embed expertise, and this embedded expertise helps physicians rely on system 1 to make life-and-death decisions accurately on the fly. But their findings also show us the importance of staying grounded in the face of new problems, especially those that may seem as simple as the lily pond task…
WYSIATI, System 1 and the customer journey
Commercial exploitation of WYSIATI and system 1 bias has existed for a very long time – long before the revelations of dual process theory. Take the simple act of buying a tin of beans: there’s a good reason so many alternative brands use a Heinz-like colour scheme, font and imagery to market their products. As we scan the tinned goods aisle, we are drawn to what we know. Since what you see is all there is, people will often make a ‘simple’ choice and just pick the tin that looks familiar.
The use of WYSIATI goes a lot further than selling copycat baked beans, though! Offering simple choices is an effective way to encourage a customer along a given route within a customer journey. We utilise keywords, structure and primes to help our users make these simple choices and reduce friction. And there will be decision points in your own customer journey where you have the opportunity to influence the customer to give simple, system 1 responses.
What’s key is understanding both the context of the decision and your control over that context – what can the customer see? Does the information you provide ease or complicate the decision? And how complex is the choice to begin with? Through careful design of these factors, you can trigger a type 1 response when the scope of the problem is small enough to warrant it.
Type 1 responses in Appointment Booking
Appointment booking is an excellent example of a customer journey in which ContactEngine utilises system 1 responses. We elicit simple responses by offering one specific date rather than many, and we prompt a reply through keywords to increase response and success rates. Offering a single date draws the scope of the decision in, reducing friction and making the choice much easier to make. Framing the conversation around keywords lets us leverage the user’s technology, which suggests those keywords to the user as a possible reply. All the while, we leave room for complex responses if there’s a problem.
Through our conversational AI, we can handle responses where the date offered is unacceptable. But by offering a single date over many as a first option, and prompting a simple response, we greatly reduce the complexity of the decision. The line between system 1 and 2 is a little blurred here, but the goal is to drive a rapid, intuitive response. The less the customer has to think about the decision and the more automatic the choice, the greater the likelihood of a simple, system 1 response. This is an example of minimising cognitive load – an excess of which causes a type of system 2 error we will explore in the next post.
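To make the shape of this concrete, here’s a minimal, hypothetical sketch of how a single-date offer with suggested keyword replies might be composed. The function, message wording and keywords are illustrative assumptions on my part, not ContactEngine’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AppointmentOffer:
    date: str              # the single proposed slot, e.g. "Tuesday 14 March, 8am-12pm"
    keywords: tuple        # short replies the customer's device can suggest back to them

def compose_offer(date: str) -> AppointmentOffer:
    # One date and two keyword replies: a narrow, low-friction choice that invites a
    # fast, system 1 response, while free-text replies ("I'm away that week") can still
    # be routed to a more complex handler.
    return AppointmentOffer(date=date, keywords=("YES", "CHANGE"))

offer = compose_offer("Tuesday 14 March, 8am-12pm")
print(f"Your engineer can visit on {offer.date}. "
      f"Reply {offer.keywords[0]} to confirm, or {offer.keywords[1]} to pick another slot.")
```

The detail matters less than the principle: the fewer options on the table, the smaller the decision, and the more likely the customer is to respond quickly.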
It’s important to note that it’s unscrupulous to engineer type 1 responses to genuinely complex issues – though this happens a lot online through clever UI design, particularly with regard to permissions and privacy. Used properly, eliciting simple responses can remove the friction from complex journeys, providing tangible benefits to both customers and the business.
Errors, errors everywhere…
That we frequently make system 1 errors and rely on heuristic bias is clear. But why do we fall back on them so consistently? The answer is complicated… Part of the problem is lethargy, and another significant part is wiring: our system 2 capabilities are intensely lazy, and as alluded to, thinking is a cognitively expensive process; we only do it when we have to!
At the same time, this is just the way we’re wired. Prehistoric man evolved in a world where probabilistic, type 1 decision making was fast and accurate enough to prevent an untimely demise. But modern-day problems require modern-day reasoning and carefully evaluated solutions.
Unfortunately for us, this means our lazy brains don’t always engage system 2 when you’d expect them to, and as we’ve shown, we fall back on our first impressions – “What You See Is All There Is!” But understanding system 1 error is only half of a deeper and less predictable story… In the next post, we’ll explore the practical ramifications of system 2 errors, including strategies for dealing with cognitive load, confirmation bias and mental accounting.