Economic Decisions and the EEG: An Example

Many neuroeconomics studies use fMRI machines. In my group, we currently concentrate on the electroencephalogram (EEG). The main reason is that we are interested in the analysis of decision processes in the brain, and the EEG has excellent time resolution, while fMRI is better suited for questions requiring fine spatial resolution, e.g. localizing function in the brain.

An example of what the EEG can do for economic research, and in particular decision theory, is in a paper from my lab (Achtziger, Alós-Ferrer, Hügelschäfer, and Steinhauser, 2014), published in the journal Social Cognitive and Affective Neuroscience (with the lovely acronym SCAN). In case you are wondering, authors are in alphabetical order as per econ conventions; unlike neuroscientists, economists do not quibble about who has contributed more to a given paper.

In that paper, we looked at an abstract decision-making paradigm where participants made decisions under uncertainty. Without going into the details of the design, the kind of decision problem they faced was a bare-bones version of the quintessential decision-making problem. You have to choose an option, A or B, and you have some prior information about how likely it is that one or the other is better. So there is a prior, or more specifically a base rate, for instance 60 times out of 100 A is better. However, you receive some additional information on the particular case you are facing, for instance this year your neighbor did B and it was good for him. This is classic decision-making under uncertainty, and, with the appropriate information, there is only one normative way to solve it: integrate the base rate and the new information through Bayes’ rule (Statistics 101), build your updated beliefs, and then pick the best option according to those beliefs.
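To make the normative benchmark concrete, here is a minimal sketch of that computation. The 60/100 base rate is from the example above; the signal reliabilities (how often "B worked for the neighbor" would be observed in each state) are hypothetical numbers I am making up purely for illustration, not values from the study.

```python
# Base rate from the example: A is the better option 60 times out of 100.
prior_A = 0.60
prior_B = 1 - prior_A

# Hypothetical signal reliabilities (illustrative, not from the paper):
# the good news about B is observed 80% of the time when B is truly
# better, but also 40% of the time when A is (the neighbor got lucky).
p_signal_given_B = 0.80
p_signal_given_A = 0.40

# Bayes' rule: P(B | signal) = P(signal | B) * P(B) / P(signal)
p_signal = p_signal_given_B * prior_B + p_signal_given_A * prior_A
posterior_B = p_signal_given_B * prior_B / p_signal

print(round(posterior_B, 3))  # ~0.571: belief shifts toward B, but only partway
```

The point of the exercise is the balance: the posterior for B ends up above the 40 percent prior but well below the 80 percent the signal alone would suggest. The two mistakes described below amount to landing too close to one of those extremes.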

Alas, people rarely decide exactly that way. Roughly speaking, since Bayes’ rule prescribes a very precise balance between the prior information and the new information, there are two mistakes you can make, and human beings will make both, each one in different cases.

The first mistake, often called base-rate neglect, is to put too much weight on the sample information. So you know that thirty times out of a hundred, bonds of a particular class have defaulted in the past, but last year’s bond of the particular company you are considering fared well. You forget the base rate and act as if last year’s information on the company (which might have been sheer luck) were all that mattered. An extreme example comes from the health domain: no matter how often physicians tell you that nine out of ten positives in a medical test for a given illness are “false positives,” you will still freak out if you test positive.
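The arithmetic behind the medical-test example is worth spelling out, because it shows how a rare illness plus an imperfect test produces mostly false positives. The prevalence and error rates below are illustrative numbers chosen to roughly reproduce the "nine out of ten" figure, not data on any particular test.

```python
# Illustrative numbers (not from any specific medical test):
prevalence = 0.01      # base rate: 1 in 100 people has the illness
sensitivity = 0.90     # P(test positive | ill)
false_pos_rate = 0.09  # P(test positive | healthy)

# Bayes' rule: P(ill | positive)
p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)
p_ill_given_positive = sensitivity * prevalence / p_positive

print(round(p_ill_given_positive, 3))  # ~0.092: roughly 9 in 10 positives are false
```

Base-rate neglect here means reacting to the positive test as if the posterior were near the 90 percent sensitivity, when the low base rate keeps it below 10 percent.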

A particular example of base-rate neglect is the representativeness heuristic, where people look at the new information as if it were representative of the whole population, effectively behaving as if there were a “law of small numbers.” The latter is closely related to stereotyping and was nicely illustrated by an example due to Kahneman and Tversky. You know that out of a hundred people, 30 are engineers and 70 are lawyers, but when you hear that a particular person, randomly sampled from that set of a hundred, was good at maths and likes computer games, you place completely unjustified confidence (given the 30/100 base rate) in this person being an engineer.
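Even granting the stereotype considerable diagnostic power, the base rate keeps the posterior far from certainty. In the sketch below, the 30/70 split is from the example; the likelihoods of the description given each profession are hypothetical values I chose to be deliberately generous to the stereotype.

```python
# Base rates from the Kahneman-Tversky example:
prior_engineer = 0.30
prior_lawyer = 0.70

# Hypothetical likelihoods of the description ("good at maths, likes
# computer games"), chosen to strongly favor the engineer stereotype:
p_desc_given_engineer = 0.80
p_desc_given_lawyer = 0.20

# Bayes' rule: P(engineer | description)
p_desc = (p_desc_given_engineer * prior_engineer
          + p_desc_given_lawyer * prior_lawyer)
posterior_engineer = p_desc_given_engineer * prior_engineer / p_desc

print(round(posterior_engineer, 3))  # ~0.632, far from near-certainty
```

Even with a description four times more likely for engineers, the posterior is only about 63 percent, nowhere near the confidence people typically report.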

The second mistake is called conservatism, and it is the mirror image of the first: putting too much weight on the base rate and ignoring the new information. Surprisingly, the same people who fall prey to the representativeness heuristic in some cases will make conservative errors in other cases. For example, suppose you know that 50 percent of last year’s students failed a given course. You attend the lecture and attempt to solve the exercises week after week and notice that you are having a lot of trouble keeping up, compared to other students. Do not flatter yourself: unless you do something serious about it, your probability of passing the course is way below 50 percent. You need to update the prior on the basis of the new information (your weekly struggles).
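The course example can be run through the same machinery. The 50 percent failure rate is from the example; how diagnostic "struggling compared to other students" is of eventual failure is a hypothetical assumption for illustration.

```python
# Base rate from the example: half of last year's students failed.
prior_fail = 0.50

# Hypothetical: struggling with the weekly exercises is observed in 80%
# of students who go on to fail, but only 30% of those who pass.
p_struggle_given_fail = 0.80
p_struggle_given_pass = 0.30

# Bayes' rule: P(fail | struggling)
p_struggle = (p_struggle_given_fail * prior_fail
              + p_struggle_given_pass * (1 - prior_fail))
posterior_fail = p_struggle_given_fail * prior_fail / p_struggle

print(round(posterior_fail, 3))  # ~0.727: well above the 50% base rate
```

The conservative mistake is to stick with the 50/50 prior; under these (made-up but plausible) likelihoods, the struggling student's failure probability is closer to three in four.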

Back to the EEG. In our study, and without going into any details, we could look both at the representativeness heuristic and at conservatism. Regarding representativeness, we relied on the measurement of a so-called event-related potential (ERP). Very roughly speaking, these measure peaks of activity in the brain. One such ERP, with the not-very-exciting name N2, is associated with conflict detection, that is, the reaction your brain produces when two possible responses pull it in different directions. Long story short, what we found is that participants who fall more often for the representativeness heuristic, and hence overweight new information, display a lower sensitivity to conflict detection as captured by the N2. This is nothing less than an individual correlate of faulty decision making which can be measured directly in the brain, and it clearly shows that such biases are linked to the capacity to detect decision conflicts, which, in turn, is just an aspect of cognitive control. In decision neuroscience (or neuroeconomics, if you prefer sexy names), we tend to get excited by this kind of stuff. What is also exciting is that the N2 is a very early ERP: in our study, it peaked at around 260 milliseconds after stimulus presentation. That is, whether you will fall for a decision-making bias or not is intimately linked to extremely quick, and hence unconscious, brain processes.

The second part of our article looked at conservatism. What we found here is something which I think nicely illustrates why EEG research is important for decision theory. Using a construct called the lateralized readiness potential, we can actually measure whether you are preparing to move one hand or the other before you consciously decide to. Cool, huh? There is an old debate in the literature on the determinants of conservatism. Some people argued that conservative errors come from faulty aggregation of the prior and the sample information. That is, you try to do something like Bayes’ rule, but get the weights wrong. Other people argued that conservatism mostly results from ignoring the sample information and fixating on the base rates. Well, what we saw is that conservative decision makers started motor preparation of the response dictated by the base rate before the new information was presented. That is, even if they are unaware of it, they actually make up their minds before seeing the sample. One could argue that this pretty much settles the debate: conservatism basically comes from ignoring new information and focusing on the base rate.

What I find particularly nice in this latter result is that we could not have found it out with any other comparable method. Choice data does not tell you when people start preparing to give an answer. Neither do more classical process data such as response times. fMRI analyses would also not help here, because they would tell you which parts of the brain were involved in the decision, but could not pinpoint exactly when. It is not difficult to imagine that, as economists become more and more interested in understanding how decisions are actually made, this feature of the EEG will become more and more attractive.

NOTE: This post first appeared in 2014 in my university blog. Since I have taken down that one (one person, one blog seems more than enough to me), I am re-posting here slightly updated versions of some of the posts which used to be there.
