First off, let’s talk about your work on nudge theory. Nudges work within current economic systems and reshape them. Do you think it is better for us to reshape our current economic systems, or to try to redesign them in their totality, for example with obviously strategy-proof mechanisms?
Well in general, it’s good to design policies with an accurate rather than speculative sense of how people behave. So we might think that if we offer a program to people and it’s advantageous to them, they will benefit from it. But it might be that people are procrastinators or suffer from inertia. So merely offering the program won’t help.
It might be that we can redesign the program so it’s really simple to take advantage of it – so it’s easy to find the building or the website through which you can benefit from it. Or we might think of automaticity, where people are automatically enrolled in the program – they don’t even have to take steps to opt in. And that can be a very helpful way of increasing participation rates in programs that can change people’s lives or reduce greenhouse gases or any number of things.
So, in the general case, sometimes we need to use nudges, but other times we should redesign economic systems?
Well, the way I put it is: nudges are a form of redesign. So you could have a program where people are offered an assortment of energy sources. They could go coal; they could go nuclear; they could go wind; they could go solar. You could have a couple of different packages, and one design would be just to give people coal. Let’s suppose it’s cheaper. But if they want to go green, they can’t. An alternative design would be to go green. But if they want to go coal, they can’t. A third design would be to say they’re going to be defaulted into one or another, and it’s going to be really easy for them to switch, or it’s going to be slightly harder for them to switch. These are all forms of choice architecture, in the sense that they preserve freedom of choice.
An alternative approach, of course, would be a mandate, by which we say that people can’t use certain forms of energy – we’ll phase out nuclear or phase out coal. This might be better, but it eliminates freedom of choice. The question therefore is which approach yields higher net benefits. So in the environmental context we’re dealing with externalities. You might want to think about air pollution generally and greenhouse gases in particular, and it’s possible that a mandate would have harmful effects on poor people – maybe it would increase their energy prices, and that would be a point for freedom of choice. It’s also possible that a mandate would do much better at promoting public health and climate change goals, in which case that might be the preferred approach.
Now, onto the pandemic. Given the difference in the results of dealing with the pandemic in East Asian countries versus the West, do you think there has been a certain level of probability neglect concerning a pandemic in the West, and to what extent has the 2003 SARS outbreak helped countries be more prepared for such an event?
It’s a great question. So in some nations, including [the US], the very idea of a pandemic seemed speculative in the extreme – a little bit like a Stephen King novel rather than a realistic threat. And of course that was in part because of the unfamiliarity of a pandemic – the Spanish flu was a long, long time ago. Whereas in some nations that had experienced something not radically dissimilar from the current pandemic, mask wearing, staying at home, and various forms of precautionary behaviour were part of living memory.
And there’s no question that if you’ve gone through an experience – it could be a terrorist attack, a pandemic, a financial meltdown – you probably have more preparation, more of a sense that this is a realistic threat and of what to do. I would say that in many nations (and this unifies countries that are in some ways very different) using nudges and behaviourally informed strategies has been part of the assortment of tools to encourage people to engage in social distancing, to wash their hands, to wear masks, and sometimes to stay home.
In your new book This is Not Normal you go into this in more depth, but to what extent do you think liberty norms in democratic states have been a hindrance to dealing with the pandemic?
Also an excellent question. So in general, to have a presumption in favour of freedom of choice is a good thing for human well-being. The strength of the presumption should, of course, vary with the context. If we’re talking about people inflicting harm on others, maybe there’s a reason to restrict. And in the case of a pandemic, certainly the risk that people impose on others by virtue of their own failure to take precautions is real, and we face the kind of “free-rider” problem where maybe a lot of people perceive that if they don’t wear a mask that will be okay for them. But if all of us, or a large percent of us think that way, then we’re all in very big trouble.
I would be reluctant to say that our commitment to liberty is a problem. Even in freedom-loving societies, there is recognition that for example you can’t steal from people, you can’t trespass on their property, and you can’t commit fraud. And we’ve seen in real time a recognition that the nature of the pandemic is such that people should do things and maybe even should be required to do things that are not usual, like keep their businesses closed except for certain hours, or not go out a lot at certain times, maybe not go out at all at certain times. And that has been widely accepted as the magnitude of the threat has become increasingly visible.
On freedom norms, how do you distinguish between different freedom norms? For example, we want to have freedom norms generally in society, but also at the same time want to tell people that they should not have it in many cases. Obviously there’s a difference in the threat which can be used to distinguish between different freedom levels, but is there not some major ambiguity there especially for an average person?
Freedom is an extremely complicated as well as essential concept. There are two ideas that have defined political and philosophical debate in recent decades. One is sometimes described as “negative freedom”, which is your freedom not to be imposed on by an external authority that tells you what you can or cannot do. So think of a restriction on, for example, freedom of movement or freedom of speech as compromising negative freedom. Then there’s an idea that some people embrace called “positive freedom”, where the idea is the freedom to have access to a job, the freedom to a decent life, the freedom to have food and clothing, and a place to live. And some people say that the second generation of rights includes positive freedoms, and the first generation includes negative freedoms. I think it’s fair to say that the distinction is a little less sharp than the words suggest, because to enjoy the right to, let’s say, private property, you need to have protection by the authorities against marauders who would come on your property and take it away. So negative freedom has a positive dimension.
In the world of behavioural science there’s been increasing interest in seeing freedom as requiring a capacity to navigate life, meaning a capacity to find your way. So if you’re in a foreign city and don’t know how to get to where you need to, you are lost. I think that can be reasonably seen as compromising your freedom. And a lot of people are lost in multiple ways.
Putting the pandemic to one side, if people are trying to deal with the criminal justice system, or the Social Security system, or, in many nations, the health system, or the educational system, they can need some navigation help, and that could be through behavioural science.
With respect to your question on freedom, there are two things to say broadly. One is, in those traditions that are highly respectful of freedom, it’s recognised that if you’re imposing harm on others, there’s a legitimate reason to interfere with what you want to do. That’s [what] John Stuart Mill [said]: he was kind of a radical defender of freedom, yet he acknowledged and emphasised that point. So in the pandemic, if you’re doing something that threatens the well-being of your fellow citizens, and maybe your best friend’s grandmother, then to require you to do something that reduces that threat or eliminates it is certainly on the table, which is why compulsory vaccination is often thought to be an on-the-table policy: unvaccinated people can threaten others. For example, in the question “What’s the right way of handling COVID-19 vaccination?”, in some contexts [compulsory vaccination] has been thought to be on the table.
It’s also the case that if people are making really palpable errors with respect to their own well-being, discussion is appropriate about mandates. So this isn’t Mill’s idea about harm to others. So people might be required to wear motorcycle helmets. They can’t get prescription drugs just because they want to: the law requires them to get a doctor’s authorization. People have to buckle their seatbelts in many nations: that’s thought to be an acceptable mandate because people might otherwise be reckless. And you might think, with respect to certain health issues, including those in the midst of a pandemic, that to require people to do X, Y or Z to protect themselves against their own impulsiveness or recklessness, is on the table again as an intervention in a society that is broadly protective of individual liberty.
In your paper on believing false rumours, you talk about how an instant correction from someone like a political leader can lead to the discrediting of a false rumour very quickly. But given that the trustworthiness of political leaders has been somewhat eroded over the last few years, especially by characters like Trump, how do we go about rebuilding this trust so that instant corrections from political leaders are more effective in dealing with false information?
I have been thinking about this a lot over the last 18 months, and I have a book coming out [in March] called Liars, which is on exactly this topic. So there are two parts of the question, one of which is how to think about falsehoods and lies generally, and whether or not to regulate them. And for good reasons, falsehoods and even lies are generally legally okay.
If someone says something exaggerating their achievements in a conversation with a would-be girlfriend, that’s not a crime. And if someone says, mistakenly, something about science, and it’s not a lie, just a mistake, that’s not a crime. That’s part of what we have in a society that has a lot of voices, some of them exuberant.
On the other hand, we do know that even if a trustworthy source corrects a falsehood, sometimes the human mind will remember and in some way credit the falsehood weeks, months and even years later, so that’s a problem. I think we need to think very hard about the obligations of social media providers with respect to the transmission and propagation of falsehoods. This isn’t about the government now – we’re talking about private actors. And, of course, that hard thinking has been going on with some intensity for the last year. However intense it is now, it needs to be about five times more intense, especially when a falsehood poses an imminent threat to safety and health.
With respect to the trust question, if you have someone who is not trusted in the relevant community – it might be someone who has a record of lying, or someone who is believed to have a record of lying even if that’s itself untrue – then we need to think of more credible sources. So in the context of a pandemic, health professionals are often trusted even when politicians aren’t. So if the question is what advice to give people with respect to X, Y, or Z, the source of the communication often matters at least as much as the content of the communication, and that puts a premium on people who don’t have any political association. Someone who is known in the community to have a particular political view might be discredited by a number of people for that reason.
We also know that cognition is often identity based. And that’s kind of an ugly phrase. But if we think of identity-based cognition, we might say that sometimes people believe or don’t believe something, not by asking about its truth content – something on which they might not have a lot of expertise – but by asking “Is that person someone who is like me?”. And it might be someone who shares certain demographic characteristics, or someone who’s young rather than old, or who has a certain ethnicity or a certain gender. And that can fortify or undermine trustworthiness. So when we’re thinking of health and safety issues, to think of trusted professionals first, and to think of people who are trustworthy or not by virtue of their social identity, can be a helpful path forward for those who are trying to save lives.
It seems that preconceived beliefs cement themselves, regardless of any new information, whether biased towards the original belief or even contradictory to the original belief. So how do we go about undoing them if they are incorrect?
Okay, so let’s say generally that if you have a firm belief in something, it’s not irrational to say that you’re not going to yield that belief readily. I believe that dropped objects fall, and if someone from Oxford or Cambridge said “Actually, they don’t,” and they dropped, let’s say, a cell phone and instead of falling it went to the sky, I’d say you’re playing a trick on me, and I wouldn’t yield in my belief that dropped objects fall. And if someone said there is a nation called Norway that is actually populated by space aliens and not human beings, then, independently of the fact that aliens could not live there, I would still believe that Norway is populated by human beings rather than space aliens. So we can think of this as part of rationality. Now whether people are willing to change a belief based on, let’s say, a true statement from someone who actually is accurate will depend on two things.
First – and this is the rational part – how firmly, and with what evidentiary foundation, they are committed to their original belief, and how credible that evidentiary foundation is. Depending on the evidence and, let’s say, the identity of the corrector, the idea can get dislodged. So many of us have a belief about whether genetically modified food creates environmental risk. But for many of us, that’s not the firmest belief in the world, and if we have a credible expert who both has credentials and is not self-evidently in the pay of someone who has an economic interest, we might yield: it’s not like the “dropped objects fall” case.
The other part, though, is what kind of emotional commitment we have to our belief. So if we are strongly emotionally committed to the belief that genetically modified foods are, let’s say, environmentally very damaging, we might be quite reluctant to yield, even if the source is credible. And that is a real problem – emotional commitments to beliefs that outrun the evidentiary foundations for those beliefs. There’s a great old book called How to Win Friends and Influence People that predated modern behavioural science. It has a brilliant chapter on exactly this issue. It says, colourfully, that you can’t win an argument. Don’t try, because people will stick fiercely to their beliefs, and if you have better arguments than they do, they’re just going to get mad at you – they won’t change their belief. [This], of course, is too general, but it has deep wisdom in it. The author [Dale Carnegie] says you can’t win an argument, but sometimes you can convince people to want to agree with you. How can you do that? Well, you might be able to appeal to something they already care about. It might be that they have a strong, let’s say religious, conviction of a particular kind, and you might share it or not, but what you are saying to them speaks to that strong commitment. Or it might be that they consider themselves people who, above all, are committed to evidence. And if you draw attention to that part of their self-understanding, it can be helpful. In the context of climate change, we’ve seen fantastic evidence of exactly this, where people who are skeptical about whether climate change is real are often more willing to believe it’s real if they think that the consequence of believing [in climate change] is some policy they like rather than despise.
So, if people think the consequence of believing that climate change is real is “I’m going to have to favour all sorts of regulation”, which is really costly and makes them think the economy is going in the tank, then they will conclude that climate change isn’t real, because believing in climate change would mean also favouring those policies. But if people are told the consequence of believing in climate change is that we need a lot of entrepreneurship, innovation, and activity that spurs economic growth in certain sectors, then people are more likely to say, “Yeah, I think climate change is real,” because the consequence of the belief is appealing. It’s called “solution aversion”: if you’re averse to the solution to a problem, you’re likely to deny that the problem is real. If you’re pleased by, or indifferent to, the solution, you might credit the factual statement.
Professor Sunstein, thank you very much for taking the time to do this interview. It’s been an absolute pleasure.