Consumer decision-making is more than a purely emotional or rational assessment: there is a third option, powered by the imagination.
In recent years we’ve become used to thinking about decisions as “System 1” or “System 2”. System 1 choices are automatic decisions, made without thinking, based on an immediate emotional or sensory reaction. We use System 2 to stop and rationally calculate the consequences of our choices, and determine the best cost-benefit trade-off.
But these two processes don’t capture every decision. Indeed they might only encompass a minority of our daily choices.
Recent work in neuroscience and psychology has discovered another way of making choices: with the imagination. Customers imagine their possible futures: the outcomes they would experience after a choice, and how those outcomes will make them feel. The future that makes them feel happiest will be the one they choose. These choices use different parts of the brain from Systems 1 and 2. They are called System 3 choices.
Think about how you might buy a car. System 1 would suggest that you see a colour, or shape, or brand of car, immediately fall in love with it and buy without thinking. System 2 implies that you calculate the price, financing options, fuel efficiency, resale value – and pick the model that makes the most financial sense.
A System 3 decision would look like this: imagine yourself driving that car. Feel, in your mind, the sensations of the seats and how it drives. Imagine how your partner or your friends would view you in it. Consider, too, the impact on your bank account and what else you would be missing out on to pay for it. How you’d feel about the environmental impact and the safety this model offers your family. How do you feel? Is it good? Maybe you also have another model in mind. Try the same process on that. Does it feel better? The car you feel best in – within this mental simulation – is probably the one you’ll choose.
This System 3 process applies to our big choices in life, but also to smaller ones. At the shelf, considering a new breakfast cereal or a new skincare product, you’ll imagine how it tastes or feels before buying it. You might test out the moisturiser from a sample jar, but you still need to project yourself into the future – will it feel the same when you’re applying it before bed, or after you wake up? Your System 3 imagination combines past and present experiences with possible futures, and works out which it enjoys most.
System 1 still has a place: once you are familiar with a product, you might buy it automatically. And we still use System 2 for lots of financial and practical calculations. Indeed, System 3 incorporates elements of Systems 1 and 2 in how it works.
But if you’re:
Launching something new
Trying out a new communications or pack approach
Building a brand
Or selling something with consequences beyond just the few moments after purchase
Then your customers are probably using System 3 to make their decisions, and you need to use System 3 tools to predict the success of what you’re testing.
Some existing research tools can be used to measure System 3, and new ones are emerging too. Implicit Prospection Tools, newly designed qualitative projection techniques, and Adaptive Concept Tests are among them.
You can find out more about System 3 at our talk at IIeX Behaviour London on 10th May, by digging into Leigh’s blog on the science behind System 3, or by searching for “prospection psychology”. It’s likely to be a hot topic in the coming years (use your System 3 to imagine that possible future!) – and you can get ahead of the curve if you learn about it today.
Last week I had the pleasure of seeing what life is on the other side of the research fence when I attended the ESOMAR World Inspiration Network conference in Amsterdam where the only presenters were clients.
After attending 40+ industry conferences where agencies mostly speak to other agencies about their work, I was really looking forward to hearing exclusively client-side perspectives on our industry. As a cherry on top, the conference was held at the top of the Heineken Experience – a quick 10-minute bike ride from my home in Amsterdam!
Overall it was a highly insightful experience and I felt like I learned more than I do at an average conference, so hats off to the programme committee and ESOMAR for putting together a great programme! The two days were packed with great talks, and I’ll share some highlights from the conference through my tweets at the time.
“For agencies, research is their business but that’s not the case for us as clients.” Silke Muenster (Philip Morris International) gives a valuable perspective to kick off the #esomar World Inspiration Network conference.
The day was kicked off by Programme Committee Chair Silke Muenster from Philip Morris International, who reminded us – only a handful of clients were in the room – of our differing perspectives: of course, as agencies, we focus on research, but it was a good wake-up call first thing in the morning that we must spend more time stepping into the shoes of the clients we work with.
Sanita Pinchback from Unilever talks about cultural differences in expressing emotions #esomar – very true! We also need theoretical frameworks to map out these differences. #esomar pic.twitter.com/bR979DQmAH
Sanita Pinchback’s keynote explored the different challenges Unilever faces from an HR perspective. One particular slide on cultural differences in expressing emotion caught my attention – I completely agree with that, but one thing market research is currently (somewhat) missing is a valid and up-to-date theoretical understanding of how cultural context influences emotional expression. I realise that that discussion will be quite technical and academic, but without a solid theoretical understanding of the phenomenon you want to measure, increasingly sophisticated technology will just be shiny new things – not necessarily better.
As an agency this really hammers home that we need to invest in continuous & extensive training if clients like Unilever are doing it too! Very humbling but also inspirational. #esomar #mrx pic.twitter.com/i57GbxYmVs
Sanita also talked about the amount of internal training and upskilling they do at Unilever – the number of sessions and learning topics was incredible! Although we already invest heavily in learning internally, this really hammered home just how important it is for us to stay on top of the latest developments in our industry – after all, research is our business so if clients are upskilling, we need to do it too.
Tony Costella from Heineken speaks on the need for insights professionals to develop a different knowledge profile than what used to be necessary in the not-so-distant past. #ESOMAR #mrx pic.twitter.com/axod3fPTJb
Especially in our case, we need to stay on top of the thousands of research papers and new thinking published every year, because science does not stand still! I feel like my profile is a bit more like two M’s back to back… #ESOMAR #mrx
Another talk that really made me think about our work on the agency side was by Tony Costella, on how the insights function is changing from “farmers to fishers” – in other words, in addition to “farming” commissioned custom research, also “fishing” in readily available behavioural data.
As a part of that change, he talked about the changing skills profile required of an insights professional: while it used to be perfectly acceptable to have a T-shaped profile, with deep knowledge in a specific area and an ability to work across a broad area outside it, these days that’s no longer enough. Instead, we need people with an M-shaped skills/knowledge profile: multiple deep skills combined with an ability to apply them across situations and domains.
That resonated particularly strongly because I feel like we need even more than that on the agency side: it is not enough for a small specialist agency to have deep knowledge of our specialism (behavioural science); we also need to understand clients’ industries so that we can more easily apply our expertise across different areas. Adding to that, we also need to stay on top of technical innovations such as AI, chatbots, machine learning, etc. Finally, even the deep specialist knowledge in behavioural science isn’t just one “prong”: solving client problems requires a broad knowledge of a wide range of theories and concepts across different fields of behavioural science. As the saying goes, if you only have a hammer, every problem looks like a nail.
One great practical idea I took from the conference came from the presentation by Olga Kornilova of Ferring Pharmaceuticals. Her Insights Vernissage, an alternative way of expressing themes from research, was really engaging, and Olga did a great job of bringing the method to life.
“The best way to have a good idea is to have a lot of ideas” (Dr Linus Pauling) – quoted at #ESOMAR
The best quote of the day was definitely the above: the best way to have a good idea is to have a lot of ideas – and I certainly got a lot of ideas from ESOMAR WIN and am already looking forward to next year!
If you found this post interesting, we’d be grateful if you’d nominate us as an Innovative Supplier in the GRIT industry survey. Just click here, answer the survey (it should take 10-15 minutes) and put Irrational Agency as an innovative supplier. In return for completing the survey, GreenBook will give you early access to the report when it’s published.
Over the last two years, implicit association tests have started to become a standard tool in market research. The 2017 GRIT report tells us 80% of researchers are using (53%) or considering (27%) nonconscious research methods, and that:
“Implicit/IAT is perceived to be one of the fastest growing nonconscious methods in the industry”
These methods are based on academic research originally done at Harvard University*, which measured unconscious racism or sexism by timing how fast people respond to different stimuli. Researchers found that the vast majority of people hold unconscious associations based on race, gender or age.
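As an illustration of how these tests work under the hood, the IAT effect is usually summarised as a standardised reaction-time difference between the congruent and incongruent pairing blocks (a simplified form of the D-score from Greenwald and colleagues). A minimal Python sketch, with made-up reaction times:

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT effect: slower responses in the incongruent
    pairing block yield a positive score, scaled by the pooled
    standard deviation of all trials."""
    pooled_sd = stdev(congruent_rts + incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / pooled_sd

# Hypothetical reaction times in milliseconds
congruent = [620, 580, 650, 600, 610]
incongruent = [780, 820, 760, 800, 790]
print(round(iat_d_score(congruent, incongruent), 2))  # → 1.84
```

A large positive score means the respondent was markedly slower when the pairing conflicted with their implicit associations; the real scoring algorithm adds error penalties and per-respondent filtering that this sketch omits.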
This opens a huge question: if people have prejudiced attitudes, does this mean they act in a racist, sexist or ageist way? The answer is not obvious – and in fact, more recent research summarised here has cast doubt on this. The correlation between racist attitudes and racist behaviour is lower than you might expect.
This led us to wonder about how implicit tools are being used for marketing and branding questions. These tools often uncover implicit brand associations (Coke is associated with “Authentic”, or BMW with “Exciting”). But we are now learning that these associations don’t necessarily drive behaviour. People might think Coke is authentic but not go ahead and buy it.
What you really need to know is not only what people think and feel, but how they will behave.
We realized that brands need a more direct way to predict what their customers will actually do. As a result, we have developed a set of tools for measuring implicit choice, rather than implicit association.
Implicit choice uses the same reaction-time measurement technique to find out how intuitive people’s choices are, how confident they are in their judgements and how reliable those choices are. But the measures are choices – which can be used to directly predict market performance – rather than associations.
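As a rough illustration of the idea (not our actual tooling), each choice can be weighted by its speed, so that fast, intuitive choices count for more than slow, hesitant ones. A minimal sketch with hypothetical brands and timings:

```python
from collections import defaultdict

def implicit_choice_summary(trials):
    """trials: list of (chosen_option, reaction_time_ms).
    Faster choices are treated as more intuitive and confident, so
    each choice is weighted by the inverse of its reaction time.
    Returns a weighted choice share per option (shares sum to 1)."""
    weights = defaultdict(float)
    for option, rt in trials:
        weights[option] += 1.0 / rt
    total = sum(weights.values())
    return {opt: w / total for opt, w in weights.items()}

# Hypothetical trials: Brand A is chosen more often *and* faster
trials = [("Brand A", 450), ("Brand A", 500), ("Brand B", 900),
          ("Brand A", 480), ("Brand B", 1100)]
shares = implicit_choice_summary(trials)
```

Here Brand A ends up with a larger weighted share both because it was chosen more often and because those choices were quicker – the combination of frequency and intuitiveness that makes the measure predictive.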
When linked with message priming, implicit choice allows us to test the impact of claims or concepts, and whether they work in changing customer behaviour (not just their attitudes or opinions, but what they will actually buy).
A popular method used by many of our clients is behavioural conjoint**, which combines conjoint analysis with implicit choice. This method tells us which product features or attributes most strongly drive consumer choice: does a price cut work better than a bigger pack or stronger claim? Or how much of a price premium can you earn by getting the messaging right?
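To give a flavour of the kind of output conjoint produces, here is a deliberately crude, count-based sketch of attribute part-worths. A real behavioural conjoint study fits a proper choice model and incorporates reaction times; all data and attribute names below are hypothetical:

```python
def part_worths(choice_sets):
    """choice_sets: list of (options, chosen_index); each option is a
    dict of attribute levels, e.g. {"price": "low", "pack": "large"}.
    Crude part-worth: the share of times each attribute level was
    chosen out of the times it was shown."""
    shown, chosen = {}, {}
    for options, idx in choice_sets:
        for i, opt in enumerate(options):
            for attr_level in opt.items():
                shown[attr_level] = shown.get(attr_level, 0) + 1
                if i == idx:
                    chosen[attr_level] = chosen.get(attr_level, 0) + 1
    return {k: chosen.get(k, 0) / n for k, n in shown.items()}

# Hypothetical choice tasks: respondents pick between two offers
tasks = [
    ([{"price": "low", "pack": "small"}, {"price": "high", "pack": "large"}], 0),
    ([{"price": "high", "pack": "small"}, {"price": "low", "pack": "large"}], 1),
    ([{"price": "low", "pack": "large"}, {"price": "high", "pack": "small"}], 0),
]
pw = part_worths(tasks)
```

In this toy data the low price wins every task it appears in, so its part-worth dominates – exactly the “does a price cut beat a bigger pack?” comparison described above, in miniature.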
No research method is perfect, and implicit choice is still only a proxy for what people choose in the real world. But this method gives much stronger correlations with true behaviour than either implicit associations or traditional stated-preference survey research. Implicit choice measurement gives the best of both worlds.
Ultimately, choice is the only thing that matters for your bottom line – measure choice and you’ll be able to drive the business outcomes you want.
* Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. (1998). Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology, 74(6), 1464.
** Caldwell, L. (2015). Making conjoint behavioural. International Journal of Market Research, 57(3), 495-502.
Or: The day I tried to literally singlehandedly crush the patriarchy
At a conference this week I was standing with my two colleagues; one male, one female (company founder). We were chatting to someone at a stand about their product. A few minutes in and the sales director turns up and stands next to me (too close). His colleague stops to introduce our company founder, Elina, who he had been chatting to. She introduces me first and then my male colleague, Will. The sales director, despite standing next to me (too close), and me being introduced first, shakes Will’s hand first. He does this by launching across my personal space, brushing my shoulder (I’m short). He then turns back to me and offers his hand. He didn’t shake Will’s hand first on purpose; he made an unconscious assumption that tall man was more important than short woman.
There are few truly universal books on behavioural science: like most of the others, this one has a particular reader in mind. Richard’s reader works in advertising, and it must be a rare advertising executive who still hasn’t heard of behavioural economics. Richard therefore heads straight into the meat of the book with little beating around the rational-agent bush. A couple of connected anecdotes start us off and we quickly get to the first of 25 chapters, each on a single bias, that make up the body of the text.
The book is very readable, and even if you already know what the fundamental attribution error, the pratfall effect and Veblen goods are, you’ll probably still enjoy the stories and quotes that illustrate them. I hadn’t heard of some of the experiments and anecdotes that Rich discusses – and he and his colleagues have carried out many of their own original tests – so even as a professional in the field there is much here that’s worthwhile.
Structuring a book around a list of biases has the advantage of user friendliness. Each chunk is self-contained and easy to get your head around; you can dip in and read a chapter or two without needing to remember a broader framework. The natural counterpart of this is that the approach can feel a little shallow. If you’re already familiar with the discipline you may feel there’s not much to learn from another definition of the availability bias. And inevitably several of the “biases” are not really biases – the replication crisis and “habit”, for instance – though these chapters are as useful as any of the others. Another minor drawback of this approach: because the chapters are designed to be read individually, some of the same quotes show up more than once – ever so slightly jarring if you’re reading it all the way through.
The most useful contribution of the book is the original – and very good – set of practical tips at the end of each chapter. If you do work in advertising or marketing there will be a lot to get your teeth into. The first chapter alone gave me three or four ideas that I could see myself applying in the near future. Richard has a good understanding of the culture of advertising, and the book may well help people in ad agencies – or the advertising function of large companies – persuade their colleagues of the efficacy of behavioural principles.
Those in other fields may find the book less directly practical, but there will probably be something to stimulate you in most chapters. And you can always get good stuff by following Richard on Twitter.
P.S. Full disclosure: Richard interviewed me when writing the book and you’ll see some of what we discussed reflected in the pricing chapters.
Much of decision-making psychology (and by extension behavioural economics) explores the processes by which people solve a problem or achieve a goal. Usually the papers in this field contrast the rational, expected-utility way to solve these problems with the approaches people actually use in practice.
An important question they rarely address is “Why that goal?” How is it that people choose the particular problem they want to solve, the objective to work towards? In the psychology lab, the answer is easy: the person in a white coat gives it to them. In real life, that doesn’t happen.
Answering this question is essential to developing a comprehensive theory to replace or challenge classical economics. Standard microeconomic theory has a clear, simple answer to this: we always have the same goal, maximising utility. Any other objective (finding the best job, working out how much money to save, picking what to eat, choosing a romantic partner, deciding whether to rob a convenience store – to pick at random from the typical subjects of economics papers) is a means to an end. According to the classical theorist, we choose between these different goals based on which we think will bring the highest marginal utility. Independent goals which don’t conflict with each other are pursued more-or-less simultaneously: I seek a promotion at work during the day while trying to find the ideal spouse at night, choose the best mutual fund at lunchtime and weigh up the risk and reward of the convenience store holdup before bed.
Psychologists, while rightly challenging the claim that I can simultaneously optimise across all these different life goals, don’t propose an alternative way to choose between them. My conscious problem-solving mind can only focus properly on one objective at a time, but which one?
The fast-and-frugal heuristics school gives an argument that simple heuristics are the best way to solve apparently complex problems like catching a baseball, allocating investment money or walking through a crowd, but doesn’t tell me why I want to catch the baseball or invest money in the first place. The heuristics and biases approach tells me that I am anchored on a particular rate of return for my investments but not whether I will spend this afternoon trying to beat that rate or watching football.
You could easily ignore this question and assert that sooner or later I’ll get round to dealing with most of the important problems in life, and that the real work of psychology should be focused on how I’ll tackle them. But there are plenty of counterexamples. Many people never get around to thinking about investments or savings, or not until it’s too late to do anything meaningful about them. Our success in achieving health objectives is strongly influenced by what we spend our time thinking about – unconscious eating and conscious exercise are in conflict. Status quo bias in the labour market and in consumption patterns is responsible for lots of apparently suboptimal behaviour and there’s a strong argument that the cause is a (possibly rational) lack of attention to the goal.
Here is a candidate theory of how we select our goals.
I draw inspiration from a Glöckner and Betsch paper, Modeling option and strategy choices with connectionist networks. Although this paper is within the narrow paradigm I’m critiquing – how do people solve a problem that is exogenously given to them – it contains a model we can borrow to address the broader problem. They propose that the mind answers questions by collecting data and using it to populate a network of nodes representing a model of the problem it is working on. It tests how self-consistent this model is, and if it is highly consistent it is more likely to consider the problem solved. If it is not consistent (e.g. two different answers to the question can still be true within the mental model) the mind seeks out more information to try to increase consistency. As they say:
“One of the basic ideas of Gestalt psychology (e.g., Köhler, 1947) is that the cognitive system tends automatically to minimize inconsistency between given piece of information in order to make sense of the world and to form consistent mental representations”
The unconscious (automatic) mind determines that there are two or more inconsistent ideas simultaneously held, and prompts the conscious (deliberative) systems to gather more information with which to populate the model, in order to try to resolve the conflict between them. This process continues until the mental model reaches a certain level of consistency – in effect, when it stops changing and reaches a stable representation. This representation is taken as the answer to the question.
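The consistency-seeking process can be sketched as a toy parallel-constraint-satisfaction network, loosely in the spirit of Glöckner and Betsch (this is my illustrative simplification, not their actual model): positive weights link mutually consistent nodes, negative weights link inconsistent ones, and activations are updated until a stable pattern survives as the “answer”.

```python
import math

def settle(weights, external, steps=200, decay=0.1):
    """Toy parallel-constraint-satisfaction network: each node's new
    activation combines its (slightly decayed) old activation, its
    external input, and weighted input from connected nodes, squashed
    into [-1, 1]. Iterating lets the pattern stabilise."""
    a = [0.0] * len(external)
    for _ in range(steps):
        a = [math.tanh((1 - decay) * a[i] + external[i] +
                       sum(weights[i][j] * a[j] for j in range(len(a))))
             for i in range(len(a))]
    return a

# Two competing answers (nodes 0 and 1) inhibit each other;
# node 2 is a piece of evidence supporting answer 0.
W = [[0.0, -0.8, 0.5],
     [-0.8, 0.0, 0.0],
     [0.5, 0.0, 0.0]]
activations = settle(W, external=[0.1, 0.1, 0.3])
```

The supported answer ends up strongly active while its rival is suppressed below zero – a stable, internally consistent representation emerging from initially ambiguous inputs, which is the essence of the mechanism described above.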
In a very different context, John Yorke writes:
“The facts change to fit the shape, hoping to capture a greater truth than the randomness of reality can provide.”
I propose that the mind relies on a similar connectionist, associative network to choose which goals to focus on.
This network represents the actions that the decision maker could take, the consequences of those actions, and the reward that would accompany those outcomes. In any situation a person could take thousands of potential actions with tens of thousands of consequences, and it is unlikely the mind (even the highly parallel automatic system) can simultaneously evaluate all of them. Instead, a small subset of those potential actions and outcomes will be activated by sensory stimuli or familiarity: nodes representing regular, repeated actions are likely to remain active much of the time; nodes representing outcomes such as the satisfaction of hunger may be activated by biological need, and other less frequent actions or consequences may be activated by seeing or hearing messages which remind us of them.
Activation automatically spreads from node to node in this network. The network’s connections link actions to their consequences – the action of eating food links to satisfaction of hunger; saving money is linked to a higher bank balance, which in turn links to an emotional payoff from feeling secure; smoking a cigarette links to the quieting of cravings and a feeling of relief. Thus, when an action is activated a consequence will become active; and vice versa, when an outcome is activated the actions which could lead to it will also become active. If only one action-node is activated, the decision maker will take that action. If only one outcome-node is active, the decision maker will choose to pursue that outcome. At this point a goal has been set, and the well-studied processes of decision making will take over.
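That spreading step can be sketched in a few lines (the network fragment and the attenuation factor are illustrative assumptions, not a claim about real parameters):

```python
def spread(edges, activation, rounds=3, attenuation=0.5):
    """Toy spreading activation: each round, every node passes an
    attenuated share of its activation along its outgoing links, so
    second-order consequences end up with weaker activation than the
    directly triggered nodes."""
    for _ in range(rounds):
        snapshot = dict(activation)  # spread from the pre-round state
        for src, dst in edges:
            activation[dst] = activation.get(dst, 0.0) + \
                attenuation * snapshot.get(src, 0.0)
    return activation

# Hypothetical fragment of an action–outcome network
edges = [("see cafe sign", "eat food"),
         ("eat food", "hunger satisfied"),
         ("eat food", "spend money"),
         ("hunger satisfied", "feel content")]
act = spread(edges, {"see cafe sign": 1.0})
```

After a few rounds, the directly triggered action is most active, its immediate consequences less so, and second-order consequences weaker still – the dilution of activation described in the next paragraph.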
If nodes representing more than one potential action or outcome have been activated, the automatic system needs to keep working, until it can resolve which one to pursue. This work includes the further spreading of activation to other nodes in the network (the food-eating node could activate nodes that represent spending money and gaining weight, the hunger-satisfaction node could activate alternative actions that lead to the same outcome) which in turn may connect back to some of the same nodes, increasing their activation further. The network might “test out” particular outcomes by activating nodes that represent their second-order consequences. The activation is diluted by this point; the earlier nodes were strongly activated; these ones are weaker.
At this point a similar kind of consistency-testing to that proposed by Glöckner and Betsch comes into play. The activated, imagined actions and outcomes are tested against sensory input and knowledge about the outside world. Are these outcomes plausible? Can I actually take these actions? Are they consistent with what I believe about how the world works? If so, the activation, and the relationship between actions and outcomes, is reinforced. If not, the activation is reduced and the network keeps looking for a consistent, stable, combination of action and outcome.
Eventually, that stable set of active nodes will emerge; or perhaps two or three combinations will continue to compete for attention and plausibility. If so, again following the template of Glöckner and Betsch, the deliberative system comes into play and selects between them. The deliberative mind applies symbolic, logical or linguistic forms of reasoning and decision making instead of the connectionist, activation-driven process of the automatic mind. These symbolic processes (and the biases that can affect them) are the stuff of most decision theory, and I defer to the accumulated body of science to tell us how they work. My claim here is only about the automatic process that selects the options between which we deliberate.
The outcome emerging from this process becomes the goal we consciously seek. It will be paired with an initial action, though that action alone may not be enough to achieve the goal, in which case a planning process of some kind has to take place – again, thoroughly explored by existing decision theory.
This model tells us something about why certain goals or actions might be preferred to others. If an action is particularly salient or easy to imagine, we are more likely to focus on the outcomes that follow naturally from it. If an outcome is particularly consistent with our mental model of the world, we are more likely to take the actions that will cause that outcome. The availability heuristic, effects related to salience and attention, and confirmation bias are all natural outcomes of this emergent-goal process. For now it is a theoretical model, but it is not too hard to imagine empirical tests for it.
Of course, the outcome you choose should not only be consistent with your view of the world, but also be a rewarding one. I agree with both the classical economist and the behaviourist that reward drives us to choose outcomes, and therefore the actions that lead to them. But we clearly do not apply probabilistic, utilitarian calculations to estimate and respond to that anticipated reward, and simple behavioural conditioning is not enough to explain the rich, complex actions and plans we make. In the next post in this series I will suggest a more plausible way to think about reward, how it motivates us to act, and what this means for how we experience life.
This post is an addendum to an article originally published in the January 2016 issue of Impact magazine. (Registration required)
This month’s Impact magazine features an article I wrote about gender stereotypes in the market research industry. The article was inspired by a study we did for Women In Research (in collaboration with the Market Research Society) to understand nonconscious gender perceptions in the market research industry through a wide-scale experiment.
As there is limited space in a magazine, I couldn’t share as much of the results as I would have liked, so I wanted to write a post to complement the print article (with a couple of paragraphs repeated so that this post makes sense to readers who have not yet seen the print article or are from other industries). I also received a surprising amount of feedback about the methodology from researchers, so I felt it would be good to provide more detail about that as well. I’ll start by explaining what we actually did, followed by the reasons why. I will also explain some of the specific biases we are naturally susceptible to, and how they influence our impressions of other people. I’ll then include snapshots of some of the results and conclude with some further reading.
What we did
A survey invitation was sent to all MRS members, yielding 440 responses (37% male/63% female). Over half of the sample had worked in the industry for over 15 years, and were therefore at least director level, with both quantitative and qualitative experience. Two-thirds came from the agency side, with a relatively balanced split between different-sized agencies.
Some of the feedback I received was concerned that we biased the results by mentioning Women in Research on the landing page, thereby giving away the purpose of the research. That point is valid, but in this case it would not have been fair not to disclose the organisation that was mainly driving the initiative (WIRe; the MRS kindly agreed to partner with them), and the purpose would have revealed itself sooner or later anyway. We were fully aware of this, and it was actually not a concern to us if it made gender perceptions and/or discrimination more salient in people’s minds – quite the opposite. The reason is that this study was more an experiment than a traditional survey, and, as is typical of psychological experiments, we did not want to reveal this upfront – which has subsequently caused some confusion for many people.
In the first stage we showed people two CVs that were meticulously anonymised from real CVs provided by the recruitment agency Hasson Associates. Each respondent saw one male and one female CV (Simon/Susan Taylor/Adams), with the order of genders and CVs randomised, and was asked to rate them on a number of dimensions.
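For readers curious about the mechanics, the counterbalancing described above might look something like this sketch (the CV labels and the exact randomisation scheme are my illustrative assumptions, not the production code):

```python
import random

def assign_cvs(respondent_seed,
               cvs=("CV_A", "CV_B"),
               names=("Simon Taylor", "Susan Adams")):
    """Each respondent sees both CVs, one under the male name and one
    under the female name, with both the name-to-CV pairing and the
    presentation order randomised."""
    rng = random.Random(respondent_seed)  # reproducible per respondent
    cv_order, name_order = list(cvs), list(names)
    rng.shuffle(cv_order)
    rng.shuffle(name_order)
    return list(zip(cv_order, name_order))

conditions = assign_cvs(respondent_seed=42)
```

Randomising both the pairing and the order means any rating difference between “Simon” and “Susan” cannot be attributed to the content or position of a particular CV.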
Some of those who sent me feedback felt that the CVs did not give them enough information to rate the candidates on dimensions such as ease of working with, and found it difficult to give an opinion. This exercise was actually modelled on a relatively well-known case study from Columbia Business School, the Heidi/Howard case, in which students read a CV for an entrepreneur called either Heidi or Howard and then rated the person on several dimensions. What the researchers found was that “students rated Heidi as highly competent and effective as Howard, but they evaluated her as unlikeable and selfish, and would not have wanted to hire her or work with her.” (More on this and similar work on the likability-competence dilemma women face here.) I agree that perhaps both the original and our version could have been designed better, but I felt it was nonetheless interesting to replicate such a well-known case study in our industry. At least in the original case, people had very strong feelings about the person based on their CV, though I appreciate that they did so in a different situation than the context of a survey like this. Nevertheless, this question worked as an explicit benchmark, demonstrating a rational response, as the female candidate received more favourable ratings across the board.
In the second stage, we showed respondents phrases that might appear in a recruitment agency’s cover letter for a candidate (based on real examples provided by Hasson Associates) and first asked which person, in the absence of other information, they would want to interview for a job, followed by a task where they had to choose whether a phrase was more likely to describe a male or a female candidate. These were formulated to test various hypotheses about whether certain behaviours, skills, attributes and achievements were more credible for either gender, and also to understand how they might impact one’s likelihood of being invited to an interview. While the second part of this stage is more directly linked to gender perceptions, the data also gives us interesting and useful insights into what is valued by employers.
In the third stage, we drilled down to specific words and asked respondents which word would be more positive to appear in a performance review, followed by a task to choose whether a word would be more likely to appear in Simon’s or Susan’s review.
Despite the extensive academic research supporting these ideas, I felt it was important to ground them in our own context. Even if this research is not flawless, it’s at least something we can start a conversation with, and I was personally very excited to see the results, as I feel passionately about appreciating everyone for who they are without the need to factor in their gender. I’m still sometimes just as biased myself – it’s only human, and we can’t really help it however much we’d like to.
At the beginning of the article I mention this riddle which is designed to reveal gender stereotypes:
A father and son are in a horrible car crash that kills the dad. The son is rushed to the hospital and just as he’s about to go under the knife, the surgeon says, “I can’t operate—that boy is my son!”
An old riddle like this is a good example of the impact gender schemas can have on our thinking. Most people are puzzled by it and suggest wildly creative options instead of the most obvious one: that the surgeon is a woman – the boy’s mother. One particular place where these stereotypes come into play is career choices, progression and recruitment. Everyone likes to think of themselves as enlightened, but in reality we all store a range of belief patterns in our minds. We learn these schemas in childhood because they help us generalise and explain life, even though they do not necessarily reflect our personal values. Despite what many think, gender stereotypes do not just affect women: when the riddle was reversed, most people struggled to imagine that a nurse could indeed be a man. Curiously, after writing the article, I came across a real-life example that hopefully demonstrates that we are all equally prone to these gender schemas – including women.
These are screenshots of a Facebook discussion that ensued after one particular Humans Of New York post about Syrian refugees who had been approved to enter the US. It was sparked by a woman realising her own prejudice (assuming the doctor in the story was the man), followed by comments from several female doctors (some of whom also identify as Muslim or are of Middle Eastern origin) who had – much to their shock – made the same assumption.
Some people were also concerned that it was easy for respondents to answer tactically when they (assumed they) had figured out the point of the research. It is a valid point that the research was indeed aimed at understanding stereotypes, but not necessarily in the way some thought. Stereotypes affect people’s decision making in many subtle ways – including the stereotypes women hold of themselves, which were of particular interest to us, as this research was conducted on the back of conversations at the previous WIRe event about impostor syndrome and especially how women sometimes hold themselves back – which is in fact part of what we found.
It’s also not simply a question of discrimination against women, as some people have assumed: these stereotypes can also limit men in what behaviour is perceived to be acceptable for them. If, for example, being described as creative is more associated with women than men, then that is also an issue for a man who is, in fact, the more creative candidate. However, the first step to taking action in any direction, or trying to make a difference, is to understand what exactly it is that we are dealing with.
The short-cuts we use to judge people
The truth is that we are all cognitive misers who like to preserve mental energy and processing capacity, spending it only when we really have to – not because we are lazy, but because there is simply too much going on for our brains to handle. The limitations of our minds mean that all human thought is a trade-off between speed and accuracy, and most often we choose speed, because that is what kept our ancestors out of lions’ mouths on the savannah. While we have now mostly accepted that we have fast and slow ways of thinking about brands, we can also apply the idea to how we perceive people.
In the first stage, our automatic, effortless System 1 uses heuristics and other mental shortcuts to grasp the essence of whatever we are presented with. Just as we are unaware of these shortcuts when thinking about brands, we are also unaware of how they alter the way we perceive and interpret information about people, before we ever realise it. Unfortunately, research suggests that we don’t even need to believe in a stereotype for it to affect our thinking – stereotypes operate automatically if we are simply aware of them. The second stage of perceiving people, where we correct and suppress our prejudices, is much more effortful and therefore the kingdom of System 2 – which the cognitive miser would prefer to avoid altogether.
The cognitive miser in us loves certain shortcuts in particular when it is evaluating people. First impressions count, because the information we get about someone early on affects our perception of them more than any information we get later (the primacy effect). These interpretations can be hard to change, because we tend to resolve the psychological pain of conflicting beliefs by changing the beliefs themselves (cognitive dissonance). Unfortunately, first impressions are usually out of our own hands, because confirmation bias filters our perception to see in others what we already expect to see. Stereotypes are one type of filter: by categorising things, they ease the load on our minds and help us make sense of the world. They are beliefs about categories of people – some positive, some negative – that often work more like probabilities based on our experiences: on average, we’ve seen more women with long hair than men, so a stereotypical woman has longer hair than a man.
Just like we can’t help but make snap judgments about brands and products, we automatically make snap judgments about people, whether we like it or not. And just as it’s not humanly possible to stop System 1 thinking altogether, we also can’t stop being influenced by stereotypes, even if we wanted to – they are our way of making sense of the world. Thinking that we are immune to them only makes us more likely to fall prey, so our best chance is to accept that we are not. But before that, we need to be aware of the prejudices we have.
It was clear from the research that market researchers are a very enlightened bunch when it comes to gender perceptions, and that, unlike in many industries, the qualities associated with women are also ones that we value. While this is great news for diversity, we shouldn’t lull ourselves into a false sense of security that gender equality is not a challenge for us: both men and women put themselves and each other in clear boxes. These perceptions and labels can make us overlook existing talent inside our companies, as well as candidates when we are recruiting. Stereotypes can indeed be valuable guides in navigating a complex world, but we should not let ourselves become blinkered.
Diving into the results
These charts are copied from the presentation at this fall’s Women in Research event – some quality issues emerged when moving the screenshots to the blog. There is, of course, even more data than this – if you are interested in anything specific, please drop me a note and I’ll dig it out!
Some further reading
Banaji, M. R., & Greenwald, A. G. (2013). Blindspot: Hidden biases of good people. Delacorte Press.
Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5.
Halvorson, H. G. (2015). No One Understands You and What to Do About It. Harvard Business Review Press.
Mueller, J. S., Goncalo, J. A., & Kamdar, D. (2011). Recognizing creative leadership: Can creative idea expression negatively relate to perceptions of leadership potential? Journal of Experimental Social Psychology, 47(2), 494-498.
Pratkanis, A. R. (1988). The attitude heuristic and selective fact identification. British Journal of Social Psychology, 27(3), 257-263.
Steele, C. (2011). Whistling Vivaldi: And Other Clues to How Stereotypes Affect Us. W. W. Norton & Company.
UK readers will already be fed up with hearing about it, but the rest of the world might not yet be aware of how spectacularly the British opinion polls failed to predict the results of our general election this month. Google will happily provide you with a long series of post-mortems on what went wrong, but if you start with Jane Bainbridge, Wikipedia and Peter Kellner via Ben Farmer, you’ll get a decent introduction to what happened.
Although something clearly went wrong, the polling companies deserve to be defended against some of the wilder accusations made in blog posts and tweets over the last two weeks (mostly by people outside the market research industry, but occasionally by insiders too). Many of the issues that have been blamed for the failure to predict the election are well-known to pollsters, who make strenuous attempts to correct for them:
the difficulty of sampling voters who are hard to reach by telephone
skew in the demographics of voters who are online and respond to polls
different turnout rates of people who support different parties
respondents’ tendency to overestimate their likelihood of voting
“Shy Tories” and “shy kippers” – people who feel the interviewer might disapprove of their choice, so avoid reporting their intention to vote Tory or UKIP
It does seem possible that some of these factors were estimated wrongly by the polling models this time round, for reasons that remain to be seen. But I am no more expert in how to correct for these factors than you are (and possibly less), so I’ll leave those to the experts at ICM, Ipsos MORI, YouGov and the rest.
I can, however, offer a contribution from psychology and behavioural economics – including an understanding of the tendency for people to say one thing and do another. Understanding and adjusting for this tendency, with an approach we would call behavioural polling, is a key opportunity for the polling industry.
The (fairly accurate, though not exact) exit polls from 7th May had the advantage of asking respondents about concrete historical behaviour (very recent behaviour, moreover) instead of predictions of future actions. It is unsurprising therefore that they were more accurate than the polls before the election. However, the better we can understand the differences between predicted and actual behaviour, the more accurate our behavioural polls will be.
Why might people have said they would vote Labour and then not do so? Or vote Conservative despite claiming they wouldn’t?
Because they changed their mind in the last few days
Because they were lying
Because they wrongly predicted what they would do
Because action is different from belief
Because they ended up not voting at all.
Each of these reasons has a corresponding explanation in behavioural science: the biases and heuristics at play include false certainty, overconfidence, status quo bias, social norm effects, ambiguity aversion, and others. We can explore them all by designing our research methods accordingly – as always in research, getting the right answer depends on asking the right question in the right way.
One potential change is around priming, to put the respondent in the right mindset and get them thinking in the same way they might think in the ballot box. Another is to replicate the ballot box context – the physical and mental process of choosing a candidate, putting a cross in a box and placing the ballot anonymously into a sealed box is quite specific, and activates idiosyncratic connotations and habits that may not occur in an online or telephone poll.
The effort required to get to the polling station and vote is another difference between survey and reality, and methods which replicate that effort barrier will better understand the likelihood of voting. The impact of last-minute newspaper headlines or TV coverage can also be estimated through simulation of hypothetical messages.
A further gap between prediction and reality is the tendency of respondents to underestimate the impact of emotional and heuristic processes on their own behaviour. Just as many people insist that they are not influenced by brand advertising, despite the evidence of the sales figures of Unilever and Coca-Cola, voters will tell us that Daily Mail headlines or bacon sandwich photographs would never influence their vote. In reality, they can have a subconscious effect on behaviour of which the respondent is not even aware. These subconscious influences can be measured with implicit tools and the effect on voting behaviour estimated with some precision.
It would be expensive to incorporate all of these methodological adjustments in a single poll, let alone in the whole series of polls that take place in the months leading up to a general election. So we can borrow another technique from scientific research and split samples into two treatments: a control group using existing techniques and an experimental group using the new method. By measuring the difference in results between the groups, we can estimate the size of impact of each specific behavioural effect. These effects can then be applied as adjustments to conventional polls.
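As a rough sketch of how such a split-sample design might be analysed, the following Python snippet simulates a control and a treatment group and estimates the size of the behavioural effect; the sample sizes, vote shares and the size of the hypothesised "shy voter" effect are all invented for illustration:

```python
import random

def simulate_poll(n, p, seed):
    """Simulate n respondents; 1 = states an intention to vote for the party."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for _ in range(n)]

def share(sample):
    return sum(sample) / len(sample)

def se_diff(a, b):
    """Standard error of the difference between two sample proportions."""
    pa, pb = share(a), share(b)
    return (pa * (1 - pa) / len(a) + pb * (1 - pb) / len(b)) ** 0.5

# Hypothetical effect: the experimental wording surfaces "shy" voters,
# lifting the stated share from 34% in the control to 37% in the treatment.
control = simulate_poll(1000, 0.34, seed=1)
treatment = simulate_poll(1000, 0.37, seed=2)

effect = share(treatment) - share(control)
margin = 1.96 * se_diff(control, treatment)
print(f"estimated effect: {effect:+.3f} +/- {margin:.3f}")
```

The estimated effect (with its confidence interval) is what would then be applied as an adjustment to conventional polls run with the existing technique.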
The adjustments won’t always be simple, as the strength of each effect may vary according to the candidates on offer and the mood of the voter, but with a series of experiments it will be possible to develop a model of how to adjust the prediction to better match reality.
Although we may not be able to determine the success of a full behavioural polling cycle for five more years, there are plenty of elections in the meantime in which to test new approaches. There will be elections in London, Wales, Scotland, Northern Ireland or for local councils every year between now and 2020. We plan to partner with a polling company to test these new methods in those elections, so that 2020 won’t be a repeat of 2015 – in terms of polling accuracy at least, if not political outcome!
Leigh Caldwell is a consultant and writer on pricing and cognitive economics, and partner at The Irrational Agency.
The heuristics and biases approach assumes that we’re basically meant to be rational but some design flaws get in the way; the adaptive toolbox says that there’s no such thing as rationality, and that we just use a collection of ad hoc mental tools to solve problems as best we can; and the information processing viewpoint proposes an underlying mechanism in our minds, a sort of imperfect computer that makes decisions based on the information the world throws at us.
In this article I won’t try to answer which model might be better, but let’s say you’ve decided which one best fits the consumers you’re studying. What should you do next?
Each of these models gives you a way to understand how customers are making the decision to buy, or not to buy, your (or your client’s) product. Each of them also challenges the traditional, unstated, powerful assumption that most market researchers make about consumer decisions.
You may be one of the rare marketers who does not make this assumption, but most others do not even know they are making it. The assumption is revealed every time a questionnaire asks a respondent how much they like something, and every time an interviewer says “Tell me why you bought that.”
What is this hidden assumption? It is this: that the decision process is based on consumers buying products that they like, because of the attributes of the product, and knowing why they do so. In other words, that it’s an economically rational process. All three parts of this assumption are disproven by behavioural economics. The three models each give an alternative description of the decision and buying process, and this process is where researchers can focus their efforts in order to make behavioural economics work for their clients.
The heuristics and biases model says that the basic economically rational process is still the right framework, but consumers make errors of judgement in deciding how much they like things. If you follow this model, you should still look at product attributes, but you should also look at how they are communicated and what context they are in, because this affects the value that consumers place on them. By understanding these context effects, you can learn how to emphasise the attributes on which your client’s product is strongest, which contexts it will do best in, and how to segment consumers by their biases and contexts. A methodology for analysing this process can be based on listing product attributes, enumerating the contexts in which the product (and its competitors) will be seen, and using a cognitive biases list as a checklist for how consumers might be influenced to put a higher or lower value on each attribute.
If you use the adaptive toolbox model, you will instead need to think of how consumers might make a good-enough decision about this product category – how they will satisfy themselves that the product they are choosing is OK and meets most of their needs. There are a number of standard heuristics or rules of thumb that consumers typically use to do this. One is to pick the single most important product attribute and focus on that (this might be price, or something category-specific like miles per gallon). Another is to copy what their friends or peers are doing. A third is to do what they did last time, if there were no significant negative consequences. You can find out which rule consumers are using by observing them, asking them (in the right way!), or testing their behaviour in a controlled experiment which is designed to distinguish between these rules. When you know the rule(s) they use, you’ll know the basic parameters of how to design product communications, how to position and price the product and how to change consumer behaviour for the better.
Finally, the information processing model says that consumers make their decisions by gathering information in pursuit of a goal. To analyse this process, you would start by understanding what information the consumer already has – this both forms a baseline to ask what new information they will seek out, and influences the goals they choose to satisfy. In this model, other drives such as emotions and preferences are seen as specific types of information. Then consider the capabilities of the consumer to gather information, the sources they use, the way they interpret and combine new facts with existing knowledge, and some basic parameters like how quickly they can read and interpret information, and their rate of consumption of social or other media. You can then quantitatively and qualitatively determine which (and how many) products each consumer might consider. You can also use the same framework to estimate which contextual, product or communication factors they are most likely to focus on when making their final choice.
Whichever model you use, the tools of traditional market research can fit into it. These new frameworks give a new way to think about the consumer and how to understand them, but they do not in themselves provide new methodologies. Qualitative approaches, survey research, concept tests, panels, eye tracking, ethnography and all the rest can fit as important tools into any of the three models. However, by starting from an understanding of how the consumer thinks – informed by behavioural economics research – you’ll have a more powerful and effective way to use those tools and achieve business results for your clients.
Readers of this blog are likely to be already familiar with many of the experimental results of behavioural economics (BE). Discoveries such as anchoring, hyperbolic discounting, loss aversion and other cognitive biases are now quite well-known in the market research world. Each of them comes with its own tricks for how to influence consumers, or pitfalls to look out for when designing questions. (Those who are less familiar can find out lots more about them from some of the leading BE books: Predictably Irrational by Dan Ariely, Thinking, Fast and Slow by Daniel Kahneman or Basic Instincts by Pete Lunn.)
These experiments and their associated influence tricks are the most visible aspect of behavioural economics as a field. And they’re useful too – but only in specific circumstances. Most research projects don’t have a specific need for an understanding of hyperbolic discounting or loss aversion. To put this into practice in market research, we’d like to have a clearer set of rules about what BE says about consumer insight.
There is a more powerful way to look at the empirical discoveries of BE. As well as standalone discoveries, they are also a set of clues to deeper and more important underlying insights about how people think and decide. These general lessons are applicable in many different situations – and can lead us towards finding the specific biases, limitations, heuristics or methods of influence that apply to our own consumers.
The drawback is: there is no single theory of how people make decisions. Scientific psychologists, working backwards from the results of experiments, have come up with a number of alternative frameworks. They aren’t mutually exclusive – think of them as different, valid, ways to look at the world. As a researcher or marketer, you may want to understand more than one of these models in order to decide which one to use in a particular project.
In this article I’ll briefly look at three of the leading theories of decision making. Each of these can be useful in understanding how consumers think about, and hopefully how they decide to buy, your clients’ products.
The first is the information processing model of Payne, Bettman and Johnson. This theory says that when we make a decision, we have to process the information available to us by using a series of smaller individual steps. The steps include small tasks such as estimating how good a product is, comparing two different products, or choosing to look for more information before making the decision. When confronted with a choice such as which car to buy, we decide on a strategy, gather and process more information until we’re ready to make the decision, and then choose one of the options.
Payne and Bettman also propose that while doing this, we are governed by “meta-motives” or goals that we want to satisfy during the decision-making process itself. There are four possible meta-motives:
maximising decision accuracy
minimising cognitive effort
minimising negative emotions such as regret or anxiety
being able to justify our decision to others
Different people focus on different meta-goals, so in order to appeal to the widest set of consumers, your clients might want to communicate in several different ways to match these four decision-making styles.
A second theory is the fast and frugal heuristics approach of Gerd Gigerenzer, Peter Todd, Ralph Hertwig and other researchers in the “ABC” school. This theory says that we have a toolbox of standard mental shortcuts which we use in different situations. For instance, in evaluating products we might use the “Take The Best” rule, which says that we first compare the available products on their most important feature; if one is clearly the best product on this dimension, that’s the one we buy; otherwise we move on to the second most important feature, and so on. Collectively, these shortcuts are known as the adaptive toolbox, and they are thought to have been developed by evolutionary pressures as near-optimal solutions for tricky or dangerous environments.
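To make the rule concrete, here is a minimal sketch of Take The Best in Python; the products, cue names and scores are invented for illustration, and higher scores are assumed to be better:

```python
def take_the_best(products, cues):
    """Choose a product using the Take The Best heuristic.

    products: dict of name -> dict of cue -> numeric value (higher = better)
    cues: cue names ordered from most to least important
    """
    candidates = list(products)
    for cue in cues:
        best_value = max(products[name][cue] for name in candidates)
        leaders = [name for name in candidates
                   if products[name][cue] == best_value]
        if len(leaders) == 1:          # one clear winner on this cue: stop
            return leaders[0]
        candidates = leaders           # tie: move on to the next cue
    return candidates[0]               # tied on every cue: pick arbitrarily

# Two hypothetical cars, tied on fuel economy but not on price.
cars = {
    "A": {"mpg": 50, "price_score": 3, "boot_space": 4},
    "B": {"mpg": 50, "price_score": 5, "boot_space": 2},
}
print(take_the_best(cars, ["mpg", "price_score", "boot_space"]))  # "B"
```

Note that the rule never trades attributes off against each other: a product that wins on the first decisive cue is chosen regardless of how badly it does on later cues.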
The third model is the modified expected utility (or subjective expected utility) approach, which says that we take a generally “rational” view of our decisions – roughly estimating our expected outcome from each option, and picking the one that seems best – but subject to some modifications or approximations such as avoidance of risk. Under this theory, we mostly avoid risky options, those which might lead to a negative outcome or those whose outcomes are ambiguous, and therefore act in a relatively conservative way. The prospect theory model of Kahneman and Tversky is an example of this approach.
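As a concrete illustration, the prospect theory value function can be written down in a few lines of Python; the curvature and loss-aversion parameters below are the median estimates reported by Tversky and Kahneman in 1992:

```python
def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value of a gain or loss x relative to a reference point.

    Gains show diminishing sensitivity (x ** alpha); losses do too, but are
    additionally scaled up by the loss-aversion factor lam.
    """
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta

# Loss aversion: a 100-unit loss hurts far more than a 100-unit gain pleases.
print(pt_value(100))   # approx. 57.5
print(pt_value(-100))  # approx. -129.5
```

The asymmetry between those two numbers is exactly the conservatism the model describes: options carrying a possible loss are penalised more heavily than an equal possible gain is rewarded.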
Other models include decision field theory (Busemeyer and Townsend), which says that we gradually “drift” towards a decision as we randomly consider various aspects of the different options available to us; decision by sampling (Neil Stewart), which suggests we compare options with randomly selected experiences from memory and see whether they appear to be better or worse than those memories; and ACT-R (John Anderson), which is less a theory of decision processes than a model of the structure of the mind, and is often used to simulate various different decision approaches and find which best matches the behaviour of real individuals.
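Decision by sampling in particular lends itself to a compact sketch: an option has no absolute value, only a rank among items sampled from memory. A minimal, illustrative Python version (the remembered prices are invented):

```python
import random

def price_attractiveness(price, remembered_prices, n_samples=20, rng=None):
    """Decision-by-sampling sketch: a price feels attractive to the extent
    that it beats (is lower than) prices sampled at random from memory."""
    rng = rng or random.Random(0)
    samples = [rng.choice(remembered_prices) for _ in range(n_samples)]
    return sum(1 for s in samples if price < s) / n_samples

# Hypothetical memory of prices recently paid for a coffee.
past_prices = [2.00, 2.20, 2.50, 2.80, 3.00, 3.50, 4.00]

print(price_attractiveness(2.40, past_prices))  # somewhere between 0 and 1
print(price_attractiveness(1.00, past_prices))  # beats every sample: 1.0
```

One implication of this view is that the same price can feel cheap or expensive depending on which comparison prices the consumer happens to have encountered recently.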
Sometimes the “theories” we hear about, such as “nudge theory”, are not general theories as such, but collections of techniques for influencing decisions. Nudge theory, as well as most of the experimental observations of behavioural economics, is compatible with several of the above models.
Any of the above frameworks can be used to understand more about how your respondents make decisions either in a real purchase context or during your research process. However, instead of a list of dozens of cognitive biases, you now have several competing decision making frameworks to choose between – a partial improvement but still no clear answer. So in future posts, I’ll suggest ways to unify these into a practical approach you may be able to use in your daily work.