When something old …

Schulte-Mecklenbeck, M., & Kühberger, A. (2014). Out of sight – out of mind? Information acquisition patterns in risky choice framing. Polish Psychological Bulletin, 45, 21–28.

I teamed up with Anton Kühberger to write about one of our old favorite topics: framing and process tracing …

Here is the abstract:
We investigate whether risky choice framing, i.e., the preference for a sure over an equivalent risky option when choosing among gains, and the reverse when choosing among losses, depends on redundancy and density of information available in a task. Redundancy, the saliency of missing information, and density, the description of options in one or multiple chunks, were manipulated in a matrix setup presented in MouselabWeb. On the choice level we found a framing effect only in setups with non-redundant information. On the process level outcomes attracted more acquisitions than probabilities, irrespective of redundancy. A dissociation between acquisition behavior and choice calls for a critical discussion of the limits of process-tracing measures for understanding and predicting choices in decision making tasks.
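As a rough illustration of the acquisition counts behind the process-level result, here is a minimal sketch assuming a hypothetical, simplified log format (MouselabWeb records openings of information boxes; the field names below are invented for illustration):

```python
from collections import Counter

# Hypothetical MouselabWeb-style event log: one entry per opened
# information box, tagged with the attribute it belongs to.
log = [
    {"participant": 1, "box": "outcome_risky"},
    {"participant": 1, "box": "prob_risky"},
    {"participant": 1, "box": "outcome_sure"},
    {"participant": 1, "box": "outcome_risky"},
]

# Count acquisitions per attribute class: outcomes vs. probabilities.
counts = Counter(
    "outcome" if e["box"].startswith("outcome") else "probability"
    for e in log
)
print(counts)  # e.g. Counter({'outcome': 3, 'probability': 1})
```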

New paper on food choice and simple heuristics

We have a new paper out on how people (consumers) use simple rules to make food choices. This is work in collaboration with the Nestlé Research Center in Lausanne.
Here is the reference:

Schulte-Mecklenbeck, M., Sohn, M., Bellis, E., Martin, N., & Hertwig, R. (2013). A Lack of Appetite for Information and Computation: Simple Heuristics in Food Choice. Appetite, 71, 242–251.

Abstract

The predominant, but largely untested, assumption in research on food choice is that people obey the classic commandments of rational behavior: they carefully look up every piece of relevant information, weight each piece according to subjective importance, and then combine them into a judgment or choice. In real world situations, however, the available time, motivation, and computational resources may simply not suffice to keep these commandments. Indeed, there is a large body of research suggesting that human choice is often better accommodated by heuristics—simple rules that enable decision making on the basis of a few, but important, pieces of information. We investigated the prevalence of such heuristics in a computerized experiment that engaged participants in a series of choices between two lunch dishes. Employing MouselabWeb, a process-tracing technique, we found that simple heuristics described an overwhelmingly large proportion of choices, whereas strategies traditionally deemed rational were barely apparent in our data. Replicating previous findings, we also observed that visual stimulus segments received a much larger proportion of attention than any nutritional values did. Our results suggest that, consistent with human behavior in other domains, people make their food choices on the basis of simple and informationally frugal heuristics.
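To make the contrast between the two families of strategies concrete, here is a minimal sketch with invented dishes, attribute values, and weights (none of this is our stimulus material): a weighted-additive rule that looks up and integrates every piece of information, versus a lexicographic heuristic that decides on the single most important attribute and looks further only on a tie.

```python
# Two lunch dishes described on several attributes (values invented).
dish_a = {"taste": 8, "calories": 5, "fat": 4, "protein": 6}
dish_b = {"taste": 7, "calories": 8, "fat": 6, "protein": 5}
weights = {"taste": 0.5, "calories": 0.2, "fat": 0.2, "protein": 0.1}

def weighted_additive(a, b, w):
    """'Rational' benchmark: weight every attribute and sum."""
    score = lambda d: sum(w[k] * d[k] for k in w)
    return "A" if score(a) > score(b) else "B"

def lexicographic(a, b, order):
    """Simple heuristic: decide on the most important attribute alone;
    move to the next attribute only if the current one ties."""
    for attr in order:
        if a[attr] != b[attr]:
            return "A" if a[attr] > b[attr] else "B"
    return "tie"

# The two strategies can disagree on the very same pair of dishes:
print(weighted_additive(dish_a, dish_b, weights))                        # "B"
print(lexicographic(dish_a, dish_b, ["taste", "calories", "fat", "protein"]))  # "A"
```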

How decisions deplete and breaks help

The New York Times published a nice overview of the work on decision making and ego depletion. (Ego depletion is often used as a synonym for resource depletion, which is somewhat confusing, because the latter term is used in economics to describe the situation in which a region's raw materials are exhausted.)

A new paper from Jonathan Levav (now at Stanford – congrats!) is prominently featured in the article. Levav et al. analysed rulings in court cases and linked them to the time of day at which the rulings were made. They showed that the probability of being granted probation increased dramatically after breaks (lunches), a result that fits nicely with the ego depletion idea: self-control is a limited resource that is depleted by decisions (rulings) and can be restored by, e.g., rest.
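A toy sketch of the kind of analysis, with fabricated data rather than the actual court records: group rulings by their ordinal position since the judge's last break and compare the share of favorable outcomes.

```python
from collections import defaultdict

# Fabricated rulings: (position since the judge's last break, favorable?).
rulings = [
    (1, True), (2, True), (3, False), (4, False), (5, False),
    (1, True), (2, False), (3, True), (4, False), (5, False),
]

by_position = defaultdict(list)
for position, favorable in rulings:
    by_position[position].append(favorable)

# The ego depletion account predicts the favorable-ruling rate to
# fall with position and to recover right after a break.
for position in sorted(by_position):
    outcomes = by_position[position]
    rate = sum(outcomes) / len(outcomes)
    print(f"case {position} after break: {rate:.0%} favorable")
```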

How WEIRD subjects can be overcome … a comment on Henrich et al.

Joe Henrich and colleagues published a target article in BBS on how economics and psychology base their research on WEIRD (Western, Educated, Industrialized, Rich and Democratic) subjects.

Here is the whole abstract:

Behavioral scientists routinely publish broad claims about human psychology and behavior in the world’s top journals based on samples drawn entirely from Western, Educated, Industrialized, Rich and Democratic (WEIRD) societies. Researchers—often implicitly—assume that either there is little variation across human populations, or that these “standard subjects” are as representative of the species as any other population. Are these assumptions justified? Here, our review of the comparative database from across the behavioral sciences suggests both that there is substantial variability in experimental results across populations and that WEIRD subjects are particularly unusual compared with the rest of the species—frequent outliers. The domains reviewed include visual perception, fairness, cooperation, spatial reasoning, categorization and inferential induction, moral reasoning, reasoning styles, self-concepts and related motivations, and the heritability of IQ. The findings suggest that members of WEIRD societies, including young children, are among the least representative populations one could find for generalizing about humans. Many of these findings involve domains that are associated with fundamental aspects of psychology, motivation, and behavior—hence, there are no obvious a priori grounds for claiming that a particular behavioral phenomenon is universal based on sampling from a single subpopulation. Overall, these empirical patterns suggest that we need to be less cavalier in addressing questions of human nature on the basis of data drawn from this particularly thin, and rather unusual, slice of humanity. We close by proposing ways to structurally re-organize the behavioral sciences to best tackle these challenges.

I would like to make three suggestions that could help to overcome the era of WEIRD subjects and generate more reliable and representative data. These suggestions mainly touch contrasts 2, 3, and 4 elaborated by Henrich, Heine, and Norenzayan. While my suggestions tackle these contrasts from a technical and experimental perspective, they do not provide a general solution for the first contrast, industrialized versus small-scale societies. Here are my suggestions: 1) replication in multiple labs, 2) drawing representative samples from the population, and 3) internet-based experimentation.
The first suggestion, replication in multiple labs, foremost touches aspects like replication, multiple populations, and open data access. For publication in a journal, a replication of the experiment in a different lab would be obligatory; the replication would then be published alongside the original, e.g., in the form of a comment. This would ensure that research labs in other states or countries are involved and that very different parts of the population are sampled. The results of experiments would also be freely available to the public, and the data sharing problem in psychology, as described in the target article, but also present in other fields like medicine (Savage & Vickers, 2009), would be a problem of the past. Of course, such a step would have to be closely linked with certain standards, on the one hand for building experiments and on the other hand for storing data. While a standard way of building experiments seems unlikely, computer science offers many methods for storing data in a reusable format, for example through the use of XML (Extensible Markup Language).
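As a sketch of what such reusable storage could look like (the schema below is invented for illustration; any format the labs agree on would do), Python's standard library already suffices to serialize trial data to XML:

```python
import xml.etree.ElementTree as ET

# Invented example trials; a shared schema would specify the real fields.
trials = [
    {"participant": "p01", "condition": "gain", "choice": "sure", "rt_ms": "2431"},
    {"participant": "p02", "condition": "loss", "choice": "risky", "rt_ms": "1876"},
]

# One <experiment> root with one <trial> element per observation,
# so any other lab can re-parse the data with standard tools.
root = ET.Element("experiment", attrib={"id": "framing-replication-01"})
for t in trials:
    ET.SubElement(root, "trial", attrib=t)

ET.ElementTree(root).write("experiment.xml", encoding="utf-8",
                           xml_declaration=True)
```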
The second suggestion is the drawing of representative samples from the population. As described in the target article, research often suffers from a restriction to extreme subgroups of the population, from which generalized results are drawn. However, there is published work that overcomes these restrictions. As an example, take the Hertwig, Zangerl, Biedert, and Margraf (2008) paper on probabilistic numeracy. The authors based their study on a random-quota sample of the Swiss population, using indicators such as language, area of residence, gender, and age. To fulfill all the necessary criteria, 1000 participants were recruited through telephone interviews. Such studies are certainly more expensive and somewhat restricted to simpler experimental setups (Hertwig et al. used telephone interviews based on questionnaires).
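A minimal sketch of such quota sampling, assuming a hypothetical population frame and invented quota figures (real quotas would come from census data):

```python
import random

# Hypothetical population frame: (id, language_region, gender, age_group).
population = [
    (i,
     random.choice(["German", "French", "Italian"]),
     random.choice(["f", "m"]),
     random.choice(["18-34", "35-54", "55+"]))
    for i in range(100_000)
]

# Quota cells, e.g., proportional to census figures (numbers invented).
quotas = {("German", "f"): 180, ("German", "m"): 170,
          ("French", "f"): 60, ("French", "m"): 55,
          ("Italian", "f"): 20, ("Italian", "m"): 15}

# Recruit at random until every quota cell is filled.
sample, filled = [], {cell: 0 for cell in quotas}
random.shuffle(population)
for person in population:
    cell = (person[1], person[2])
    if filled.get(cell, 0) < quotas.get(cell, 0):
        sample.append(person)
        filled[cell] += 1

print(len(sample), "participants recruited across", len(quotas), "quota cells")
```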
The third suggestion adds data collection in a second location: the Internet. The emphasis here should be on 'adds'. Data collection that is solely Internet based is of course possible, is already often performed, and is published in high-impact journals. Online experimentation is also technically much less demanding than it was ten years ago, thanks to ready-made solutions for questionnaires and even full experiments. The point I would like to make, however, is not built on a separation of lab-based and online experiments. My suggestion combines the two research locations and lets a researcher profit from the benefits of both. A possible scenario: run an experiment in the laboratory first to guarantee, among other things, high control over the situation, in order to show an effect with a small, restricted sample. In a second step, transfer the experiment to the Web and run it online, admittedly giving away some of that control but gaining easy access to large, diverse samples of participants from different populations. As an example, consider a recent blog and related experiments started by Paolacci and Warglien (2009) at the University of Venice, Italy. These researchers started replicating well-known experiments from the decision making literature, such as framing, anchoring, and the conjunction fallacy, using Amazon's Mechanical Turk. This service is based on the idea of crowdsourcing (outsourcing a task to a large group of people) and gives a researcher easy access to a large group of motivated participants.
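One practical detail when adding the Web as a second location is keeping condition assignment comparable across locations. The following sketch is my own illustration, not code from any existing package: hash the participant ID so that the lab and online versions of an experiment assign conditions identically, and append all responses to a common data file.

```python
import csv
import hashlib

CONDITIONS = ["gain_frame", "loss_frame"]

def assign_condition(participant_id: str) -> str:
    """Deterministic assignment: the same ID gets the same condition
    whether the study runs in the lab or online."""
    digest = hashlib.sha256(participant_id.encode()).hexdigest()
    return CONDITIONS[int(digest, 16) % len(CONDITIONS)]

def record_response(participant_id: str, location: str, choice: str,
                    path: str = "responses.csv") -> None:
    """Append one response, tagged with its collection location."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [participant_id, location, assign_condition(participant_id), choice])

record_response("mturk-0042", "online", "sure")
record_response("lab-0007", "lab", "risky")
```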
Some final words on the combination, and possible restrictions, of the three suggestions. What would a combination of all three look like? It would be a replication of experiments using representative samples of different populations in online experiments. This seems useful from a data quality, logistics, and price point of view. However, several issues were left untouched in my discussion, such as the independence of the second lab in replication studies, the restriction of representative samples to one country (as opposed to the multi-country comparisons routinely found in, e.g., anthropological studies), the differences between online and lab-based experimentation, and the instances where the equipment needed for an experiment (e.g., eye trackers or fMRI) does not allow for online experimentation. Keeping that in mind, the above suggestions draw an idealized picture of how to run experiments and re-use the collected data. Nevertheless, I would argue that such steps could substantially reduce the percentage of WEIRD subjects in research.

References
Hertwig, R., Zangerl, M. A., Biedert, E., & Margraf, J. (2008). The Public's Probabilistic Numeracy: How Tasks, Education and Exposure to Games of Chance Shape It. Journal of Behavioral Decision Making, 21, 457–570.

Paolacci, G., & Warglien, M. (2009). Experimental Turk: A blog on social science experiments on Amazon Mechanical Turk. Accessed on November 17, 2009:

Savage, C. J., & Vickers, A. J. (2009). Empirical Study of Data Sharing by Authors Publishing in PLoS Journals. PLoS ONE, 4(9), e7078. doi:10.1371/journal.pone.0007078

Priority Heuristic comment

We (Johnson, Schulte-Mecklenbeck, & Willemsen, 2008) have a new paper out that comments on the priority heuristic as described in Brandstaetter, Gigerenzer, and Hertwig (2006). Here is the abstract:

Resolution of debates in cognition usually comes from the introduction of constraints in the form of new data about either the process or representation. Decision research, in contrast, has relied predominantly on testing models by examining their fit to choices. The authors examine a recently proposed choice strategy, the priority heuristic, which provides a novel account of how people make risky choices. The authors identify a number of properties that the priority heuristic should have as a process model and illustrate how they may be tested. The results, along with prior research, suggest that although the priority heuristic captures some variability in the attention paid to outcomes, it fails to account for major characteristics of the data, particularly the frequent transitions between outcomes and their probabilities. The article concludes with a discussion of the properties that should be captured by process models of risky choice and the role of process data in theory development.
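For readers who do not know the model: the priority heuristic is fully specified as a step-by-step procedure, which makes it easy to sketch. The version below covers choices between two two-outcome gain gambles and follows the steps described by Brandstaetter et al. (2006); the gamble representation is mine, and the sketch omits the paper's rounding of the aspiration level to the nearest prominent number.

```python
def priority_heuristic(a, b):
    """Choose between two two-outcome gain gambles.
    Each gamble is (min_gain, p_min, max_gain): it pays min_gain with
    probability p_min and max_gain otherwise.
    Steps (Brandstaetter, Gigerenzer, & Hertwig, 2006):
      1. Compare minimum gains; stop if they differ by at least
         one tenth of the largest gain on offer.
      2. Otherwise compare the probabilities of the minimum gains;
         stop if they differ by at least 0.1.
      3. Otherwise choose the gamble with the higher maximum gain."""
    aspiration = 0.1 * max(a[2], b[2])
    if abs(a[0] - b[0]) >= aspiration:       # step 1: minimum gains
        return "A" if a[0] > b[0] else "B"
    if abs(a[1] - b[1]) >= 0.1:              # step 2: their probabilities
        return "A" if a[1] < b[1] else "B"   # lower p(min) is better for gains
    return "A" if a[2] > b[2] else "B"       # step 3: maximum gains

# Example: a sure 2500 vs. an 80% chance of 4000 (else 0).
sure = (2500, 1.0, 2500)
risky = (0, 0.2, 4000)
print(priority_heuristic(sure, risky))  # "A": step 1 already favors the sure option
```

Note that the heuristic, as a process model, predicts not only the choice but also which pieces of information are inspected and in what order, which is exactly what our comment examines against process-tracing data.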