Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.
Daniel Kahneman, Thinking, Fast and Slow
Most of microeconomics is based on rational choice – people are assumed to act in such a way as to best advance their own preferences. This is in fact one of the few things about microeconomics that most people know, and it is the basis of the most common criticism of the field. People clearly aren’t rational, so how can you build your elaborate models assuming that they are? What good are the signals and incentives of the market system if people make the wrong decisions based on them?
There are three broad reasons why the rationality assumption was (and is) so popular, the third of which I will come back to later in the post. The first is that, as I noted in the previous part, mistakes tend to cancel out in large numbers. People randomly screwing up doesn’t jeopardise the market system as a whole. It’s only when people consistently make an error more often than they make the opposite error (people must be biased, not just error-prone) that we have problems.
But biases do exist, so why assume rationality in spite of them? The principal virtue of rationality as an assumption is that it’s concrete. Rational behaviour has certain observable properties even if you don’t know what someone’s preferences are (I’ll come back to these), which means that it’s at least testable. Simply declaring people to be irrational, by contrast, gives you no guidance as to how they will behave. For irrationality to be tractable, you need specific theories of how human behaviour systematically departs from the theoretical ideal.
Enter Daniel Kahneman and Amos Tversky. Psychologists rather than economists, they provided a clear outline of cognitive biases, making it possible to discuss how people are irrational rather than simply declaring that they are irrational – work that earned Kahneman the Economics Nobel (Tversky died before the prize was awarded, and it is not given posthumously).
So what do we mean by irrational? At one level it’s pretty straightforward – a decision is irrational if it predictably leads to outcomes that are contrary to your preferences. If there is another decision you could have made that would predictably lead to better outcomes, by your own definition of “better”, then your decision was irrational. The complicated part is that you can’t observe someone’s preferences directly. So how do you decide? There are two ways to determine that an action is irrational:
- You can comment on the rationality of an action contingent on some specified goal. For example, economists and political scientists will often say that voting is irrational if your goal is to change the outcome of the election, since a single vote has effectively no chance of being decisive. Note that this type of statement is of limited utility, as it applies only to people who actually hold the goal you specified. If you have goals aside from changing the outcome of an election, voting might be rational for you.
- Alternatively (and more usefully), you can demonstrate that a decision is irrational if no possible set of preferences could support it. This is normally done by looking for inconsistencies across sets of decisions. The textbook example is intransitivity – if a person can be shown to consistently choose A over B, B over C, and C over A, then no coherent ranking of A, B, and C could justify that set of choices. Kahneman and Tversky focused on this sort of irrationality in their work.
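To make the intransitivity test concrete, here is a minimal sketch in Python – the choice data and the `prefers` structure are made up for illustration – of how you might scan a set of observed pairwise choices for a preference cycle:

```python
from itertools import permutations

# Observed pairwise choices: prefers[x] is the set of options that x was
# consistently chosen over. (Made-up data for illustration.)
prefers = {
    "A": {"B"},  # A chosen over B
    "B": {"C"},  # B chosen over C
    "C": {"A"},  # C chosen over A -- the intransitive triple
}

def find_intransitive_triple(prefers):
    """Return a triple (x, y, z) with x chosen over y, y over z,
    and z over x, or None if no such three-way cycle exists."""
    options = set(prefers) | {o for beaten in prefers.values() for o in beaten}
    for x, y, z in permutations(options, 3):
        if (y in prefers.get(x, set())
                and z in prefers.get(y, set())
                and x in prefers.get(z, set())):
            return (x, y, z)
    return None

print(find_intransitive_triple(prefers))  # ('A', 'B', 'C') -- no coherent ranking fits
```

If the function finds a cycle, no ranking of the options can rationalise the observed choices. Note that it only checks cycles of length three; a fuller consistency check would look for longer cycles as well.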
The biases Kahneman and Tversky outlined form the core of behavioural economics, the branch of economics that considers irrational behaviour. Some of the more important biases include:
- Anchoring Bias – a tendency to evaluate an option by comparing it to irrelevant contextual information. Ask people how much they would pay for something and you will get different answers depending on whether the response options you offer run from $0 to $100+ or from $0 to $1000+.
- Hyperbolic Discounting – an inconsistency of time preferences that makes people less willing to tolerate delays in the near future than identical delays in the more distant future. Under consistent discounting, a one-day delay to a payment should cost the same fraction of its value whether it moves the payment from day 0 to day 1 or from day 365 to day 366, but in practice people find the immediate delay much worse (the first sketch after this list puts numbers on this).
- Framing Effects – people make inconsistent choices depending on how the choice is presented. Offer people a choice between a risk of deaths and certain deaths, and they are more inclined to take the risk. Offer them the same choice framed as a chance of saving lives versus a certainty of saving lives, and they are more inclined to choose certainty.
- Improper Probabilistic Reasoning – whole books have been written about this one. Suffice it to say that regular people who haven’t had statistical training are utterly useless at dealing with probabilities – and trained statisticians aren’t good in any absolute sense, just less useless (the second sketch below works through a classic example).
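Hyperbolic discounting is easiest to see with numbers. Here is a minimal sketch – the discount parameters `delta` and `k` are arbitrary illustrative values, and the form 1/(1 + kt) is one standard textbook formulation of hyperbolic discounting – comparing how much value a one-day delay costs a consistent (exponential) discounter versus a hyperbolic one:

```python
def exponential(t, delta=0.999):
    # Consistent discounting: a payment t days away is worth delta**t of face value
    return delta ** t

def hyperbolic(t, k=0.1):
    # Hyperbolic discounting: steep near the present, nearly flat far away
    return 1 / (1 + k * t)

def pct_lost_to_one_day_delay(discount, t):
    # Percentage of value lost by pushing a payment from day t to day t + 1
    return 100 * (1 - discount(t + 1) / discount(t))

for name, d in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    print(f"{name:12s} "
          f"day 0 -> 1: {pct_lost_to_one_day_delay(d, 0):6.2f}%   "
          f"day 365 -> 366: {pct_lost_to_one_day_delay(d, 365):6.4f}%")
```

The exponential discounter loses the same fraction either way; the hyperbolic one loses roughly 9% for the immediate delay and a fraction of a percent for the distant one. That asymmetry is exactly the inconsistency the bias describes.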
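And to give a flavour of how badly untrained intuition handles probabilities, this second sketch works through the classic base-rate example (the prevalence and accuracy figures are hypothetical):

```python
# Base-rate neglect: a test for a rare condition is "99% accurate", and most
# people intuit that a positive result means a ~99% chance of having it.
p_condition = 0.001            # prevalence: 1 in 1,000 people
p_pos_given_condition = 0.99   # true positive rate (sensitivity)
p_pos_given_healthy = 0.01     # false positive rate

# Bayes' theorem: P(condition | positive test)
p_positive = (p_condition * p_pos_given_condition
              + (1 - p_condition) * p_pos_given_healthy)
p_condition_given_positive = p_condition * p_pos_given_condition / p_positive

print(f"P(condition | positive) = {p_condition_given_positive:.1%}")  # about 9%, not 99%
```

Because the condition is rare, false positives from the healthy majority swamp the true positives, so the intuitive answer is off by an order of magnitude.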
Behavioural economics is the newest branch of market failure economics, so its policy prescriptions are still in development. The leading school of thought when it comes to policy solutions is called choice architecture (or nudging) and was popularised by Richard Thaler and Cass Sunstein in their book Nudge. The premise behind nudging is that if subtle features of how choices are presented change what we choose, why not structure choices so as to more closely produce the decisions someone would make if they weren’t plagued by all these biases? Note that nudging is not compulsion – an example from Nudge is a school cafeteria putting the salads at eye level and the desserts at the bottom of the cabinet, where they are harder to see. If you really want a dessert, it’s still for sale; it’s just less likely to present itself as an option if you haven’t already decided to buy one. Another example is making retirement savings programmes opt-out rather than opt-in: now you have to make a deliberate decision not to save for your retirement, rather than a deliberate decision to save.
Like a lot of economic terminology, the meaning of nudge can start to slip if you’re not careful. Nudges do not involve reducing consumer choices but merely changing the salience of different options. As soon as you start forbidding some choices or even making them more expensive you have moved beyond nudging. I believe that keeping different kinds of intervention conceptually separate is worth a little pedantry – if someone tries to dress up traditional regulations or taxes as “nudging” I recommend you stop and clarify terms. So many internet arguments are fuelled by conflicts in each interlocutor’s definitions.
There are also dangers with nudging that are less semantic. Earlier in the post I outlined two of three reasons why rational choice is popular in economics. The third is that it takes the agency of the public seriously. The danger with making policy to correct irrational decisions is that you can’t observe people’s preferences directly. The cognitive biases I mentioned above were found in lab conditions with all the noise of the real world filtered out. In the real world, figuring out whether a specific decision is irrational (as distinct from being the rational product of preferences that are very different to yours) is much more difficult.
Eliezer Yudkowsky noted that learning about cognitive biases can make it very easy to find flaws in other people’s reasoning, thereby paradoxically making it harder to realise when you’ve made a mistake yourself. Doing this in a debate makes you obnoxious, but doing it as a policy maker can turn you into a tyrant. If your nudge doesn’t work, it is very easy to conclude that more force is called for – the nudge becomes a shove. And if people object to your attempt to ‘help’ them, you can easily rationalise away their objections by listing all of the cognitive biases they are probably suffering from. This is a danger even if you rely on stated preference as a guide (for example, concluding that since many people who say they want to quit smoking fail to quit, they need a nudge in the right direction), since stated preference is not especially credible: people say all sorts of things because they think it will make them look good, or simply because they think they should. This is why economists take people’s choices so seriously – there are serious risks involved in neglecting people’s revealed preferences.
I am not suggesting we throw the baby out with the bathwater – cognitive biases are real, no matter how convenient it might be if they weren’t. But you should proceed with caution: not every strange decision is irrational, and if the public doesn’t respond to your nudging you should resist the urge to switch to compulsion. If people aren’t responding to the nudge, their choice might be less irrational than you originally supposed.
Caution will also be the order of the day in the next part, where I will talk about the limitations of government as an institution and why everything I’ve discussed so far is more complicated in practice.