Publications

Loss aversion is often assumed to be a basic and far-reaching psychological regularity in behavior. Yet empirical evidence is accumulating to challenge the assumption of widespread loss aversion in choice. We suggest that a key reason for the apparently elusive nature of loss aversion may be that its manifestation in choice is state-dependent and distinct from a more state-independent principle of heightened attention to losses relative to gains. Using data from process-tracing studies, we show that people invest more attentional resources when evaluating losses than when evaluating gains, even when their choices do not reflect loss aversion. Our evidence converges with previous findings on how losses influence exploratory search as well as physiological, hormonal, and neural responses. Increased attention to losses relative to gains seems to be a necessary but not a sufficient condition for loss aversion in choice.

The study of cognitive processes is built on a close mapping between three components: overt gaze behavior, overt choice, and covert processes. To validate this overt–covert mapping in the domain of decision making, we collected eye-movement data during decisions between risky gambles. Applying a forward inference paradigm, we instructed participants to use specific decision strategies to solve those gamble problems (maximizing expected value or applying different choice heuristics) while their gaze behavior was recorded. We found differences between overt behavior, as indicated by eye movements, and the covert decision processes instructed by the experimenter. However, for some eye-movement measures the overt–covert mapping was not as close as current decision theory expects, which leaves reverse inference prone to fallacies because its prerequisite, a close overt–covert mapping, is violated. We propose a framework to rehabilitate reverse inference.

Dijksterhuis and van Knippenberg (1998) reported that participants primed with a category associated with intelligence (“professor”) subsequently performed 13% better on a trivia test than participants primed with a category associated with a lack of intelligence (“soccer hooligans”). In two unpublished replications of this study designed to verify the appropriate testing procedures, Dijksterhuis, van Knippenberg, and Holland observed a smaller difference between conditions (2%–3%) as well as a gender difference: Men showed the effect (9.3% and 7.6%), but women did not (0.3% and −0.3%). The procedure used in those replications served as the basis for this multilab Registered Replication Report. A total of 40 laboratories collected data for this project, and 23 of these laboratories met all inclusion criteria. Here we report the meta-analytic results for those 23 direct replications (total N = 4,493), which tested whether performance on a 30-item general-knowledge trivia task differed between these two priming conditions (results of supplementary analyses of the data from all 40 labs, N = 6,454, are also reported). We observed no overall difference in trivia performance between participants primed with the “professor” category and those primed with the “hooligan” category (0.14%) and no moderation by gender.

There is a disconnect in the literature between analyses of risky choice based on cumulative prospect theory (CPT) and work on predecisional information processing. One likely reason is that for expectation models (e.g., CPT), it is often assumed that people behave only as if they conducted the computations leading to the predicted choice and that the models are thus mute regarding information processing. We suggest that key psychological constructs in CPT, such as loss aversion and outcome and probability sensitivity, can be interpreted in terms of attention allocation. In two experiments, we tested hypotheses about specific links between CPT parameters and attentional regularities. Experiment 1 used process tracing to monitor participants' predecisional attention allocation to outcome and probability information. As hypothesized, individual differences in CPT's loss-aversion, outcome-sensitivity, and probability-sensitivity parameters (estimated from participants' choices) were systematically associated with individual differences in attention allocation to outcome and probability information. For instance, loss aversion was associated with the relative attention allocated to loss and gain outcomes, and a more strongly curved weighting function was associated with less attention allocated to probabilities. Experiment 2 manipulated participants' attention to losses or gains, causing systematic differences in CPT's loss-aversion parameter. This result indicates that attention allocation can to some extent cause choice regularities that are captured by CPT. Our findings demonstrate an as-if model's capacity to reflect characteristics of information processing. We suggest that the observed CPT–attention links can be harnessed to inform the development of process models of risky choice.
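For readers unfamiliar with the CPT parameters mentioned above, a common parameterization (following Tversky and Kahneman, 1992) shows how loss aversion, outcome sensitivity, and probability sensitivity enter the model; this is an illustrative sketch, and the exact specification estimated in the experiments may differ.

```latex
% Illustrative CPT value and weighting functions (Tversky & Kahneman, 1992);
% the parameterization fitted in the experiments may differ in detail.
v(x) =
  \begin{cases}
    x^{\alpha}              & \text{if } x \ge 0,\\[4pt]
    -\lambda\,(-x)^{\alpha} & \text{if } x < 0,
  \end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}
```

Here λ > 1 captures loss aversion (losses weighted more heavily than equal-sized gains), α < 1 captures diminishing outcome sensitivity, and smaller γ yields a more strongly curved, inverse-S-shaped weighting function; in full CPT the weights are applied to cumulative (rank-ordered) probabilities rather than to each probability separately.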

The goal of this study was to validate AFFDEX and FACET, two algorithms classifying emotions from facial expressions, in iMotions's software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare the classification quality. Results show a large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, 110 participants' facial expressions were measured while they were exposed to emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED), and the Radboud Faces Database (RaFD). Accuracy again differed for distinct emotions, and FACET performed better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical (vs. natural) facial expressions, but it performs worse for more natural facial expressions. We discuss potential sources of the limited validity and suggest research directions in the broader context of emotion research.

Worldwide, more than one million people die on the roads each year. A third of these fatal accidents are attributed to speeding, with properties of the individual driver and the environment regarded as key contributing factors. We examine real-world speeding behavior and its interaction with illuminance, an environmental property defined as the luminous flux incident on a surface. Drawing on an analysis of 1.2 million vehicle movements, we show that reduced illuminance levels are associated with increased speeding. This relationship persists when we control for factors known to influence speeding (e.g., fluctuations in traffic volume) and consider proxies of illuminance (e.g., sight distance). Our findings add to a long-standing debate about how the quality of visual conditions affects drivers' speed perception and driving speed. Policy makers can intervene by educating drivers about the inverse illuminance–speeding relationship and by testing how improved vehicle headlights and smart road lighting can attenuate speeding.
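As a point of reference for the illuminance measure, the standard photometric definition (independent of the paper's specific data sources) is:

```latex
% Illuminance: luminous flux incident per unit area (SI unit: lux)
E_v = \frac{\Phi_v}{A}, \qquad 1~\mathrm{lx} = 1~\mathrm{lm}/\mathrm{m}^{2}
```

Lower E_v thus means less light reaching the road surface, which is the condition associated with more speeding in the analysis.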

This study investigates decision making in mental health care. Specifically, it compares the diagnostic decision outcomes (i.e., the quality of diagnoses) and the diagnostic decision process (i.e., pre-decisional information acquisition patterns) of novice and experienced clinical psychologists. Participants' eye movements were recorded while they completed diagnostic tasks, classifying mental disorders. In line with previous research, our findings indicate that diagnosticians' performance is not related to their clinical experience. Eye-tracking data provide corroborative evidence for this result from the process perspective: experience does not predict changes in cue inspection patterns. For future research into expertise in this domain, it is advisable to track individual differences between clinicians rather than study differences on the group level.

Decision research has experienced a shift from simple algebraic theories of choice to an appreciation of the mental processes underlying choice. A variety of process-tracing methods has helped researchers test these process explanations. Here, we provide a survey of these methods, including specific examples for subject reports, movement-based measures, peripheral psychophysiology, and neural techniques. We show how these methods can inform phenomena as varied as attention, emotion, strategy use, and the understanding of neural correlates. Two important future developments are identified: increasing the number of explicit tests of proposed processes through formal modeling and establishing standards and best practices for data collection.

The challenge in inferring cognitive processes from observational data is to correctly align overt behavior with its covert cognitive process. To improve our understanding of the overt–covert mapping in the domain of decision making, we collected eye-movement data during decisions between gamble problems. Participants were either free to choose or instructed to use a specific choice strategy (maximizing expected value or a choice heuristic). We found large differences in looking patterns between free and instructed choices. Looking patterns provided no support for the common assumption that attention is equally distributed between outcomes and probabilities, even when participants were instructed to maximize expected value. Eye-movement data are to some extent ambiguous with respect to underlying cognitive processes.
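The expectation of roughly balanced attention under expected-value maximization follows directly from the definition of the strategy, since every outcome must be multiplied by its probability; as a reminder:

```latex
% Expected value of a gamble g with outcomes x_i occurring with probabilities p_i
\mathrm{EV}(g) = \sum_{i} p_i\, x_i
```

Even under explicit instructions to maximize this quantity, looking patterns did not show the balanced distribution of attention across outcomes and probabilities that the formula would suggest.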

The recent introduction of inexpensive eyetrackers has opened up a wealth of opportunities for researchers to study attention in interactive tasks. No software package has previously been available to help researchers exploit those opportunities. We created "the pyeTribe," a software package that offers, among others, the following features: first, a communication platform between many eyetrackers to allow for simultaneous recording of multiple participants; second, the simultaneous calibration of multiple eyetrackers without the experimenter's supervision; third, data collection restricted to periods of interest, thus reducing the volume of data and easing analysis. We used a standard economic game (the public goods game) to examine the data quality and demonstrate the potential of our software package. Moreover, we conducted a modeling analysis, which illustrates how combining process and behavioral data can improve models of human decision-making behavior in social situations. Our software is open source.
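The abstract does not specify the game parameters; a minimal sketch of the standard linear public goods game often used in such experiments (the endowment, contributions, and multiplier below are placeholders, not the study's values) is:

```latex
% Standard linear public goods game: player i's payoff from endowment e,
% own contribution c_i, group size N, and multiplier m (with m/N < 1 < m)
\pi_i = e - c_i + \frac{m}{N}\sum_{j=1}^{N} c_j
```

Under m/N < 1 < m, contributing nothing maximizes individual payoff while full contribution maximizes the group payoff; this tension is what makes the game a useful test bed for combining process and behavioral data.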

Several studies have demonstrated that in the mental health domain, experience does not always lead to better diagnostic decisions, suggesting that in clinical psychology experience-based intuition might actually not improve performance. The aim of the current study was to investigate differences in the preferred reasoning styles of novice and experienced clinical psychologists as a possible explanation of this surprising phenomenon. We investigated clinical and control decisions of novice (n = 20) and experienced (n = 20) clinical psychologists as well as age-matched controls (n = 20 and n = 20, respectively) using vignettes and MouselabWeb matrices. We assessed their reasoning style preferences with the Rational-Experiential Inventory (Pacini & Epstein, 1999). Results showed that experienced and novice clinical psychologists did not differ in diagnostic accuracy and that experienced psychologists had a higher preference for rational thinking than novices. We also found that in experienced psychologists a stronger preference for deliberation was associated with greater accuracy, whereas in novice psychologists a stronger preference for intuitive reasoning was associated with less accurate decisions. It might thus not be more experience but deliberation about the task that helps clinicians perform more accurately.

We investigate whether risky choice framing, that is, the preference for a sure option over an equivalent risky option when choosing among gains and the reverse when choosing among losses, depends on the redundancy and density of the information available in a task. Redundancy (the saliency of missing information) and density (the description of options in one or multiple chunks) were manipulated in a matrix setup presented in MouselabWeb. On the choice level, we found a framing effect only in setups with non-redundant information. On the process level, outcomes attracted more acquisitions than probabilities, irrespective of redundancy. This dissociation between acquisition behavior and choice calls for a critical discussion of the limits of process-tracing measures for understanding and predicting choices in decision-making tasks.

The predominant, but largely untested, assumption in research on food choice is that people obey the classic commandments of rational behavior: they carefully look up every piece of relevant information, weight each piece according to subjective importance, and then combine them into a judgment or choice. In real world situations, however, the available time, motivation, and computational resources may simply not suffice to keep these commandments. Indeed, there is a large body of research suggesting that human choice is often better accommodated by heuristics—simple rules that enable decision making on the basis of a few, but important, pieces of information. We investigated the prevalence of such heuristics in a computerized experiment that engaged participants in a series of choices between two lunch dishes. Employing MouselabWeb, a process-tracing technique, we found that simple heuristics described an overwhelmingly large proportion of choices, whereas strategies traditionally deemed rational were barely apparent in our data. Replicating previous findings, we also observed that visual stimulus segments received a much larger proportion of attention than any nutritional values did. Our results suggest that, consistent with human behavior in other domains, people make their food choices on the basis of simple and informationally frugal heuristics.
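To make concrete what a simple, informationally frugal heuristic looks like next to a classically rational rule, here is a minimal, hypothetical sketch; the dish attributes, values, weights, and cue order are invented for illustration, and this is not the strategy classification used in the study.

```python
# Hypothetical illustration: a frugal lexicographic heuristic vs. a
# weighted-additive ("rational") rule for choosing between two lunch dishes.
# Attribute names, values, weights, and cue order are made up for this sketch;
# this is not the paper's implementation.

DISHES = {
    "pasta": {"taste": 8, "healthiness": 4, "price": 6},   # higher = better
    "salad": {"taste": 6, "healthiness": 9, "price": 7},
}

WEIGHTS = {"taste": 0.5, "healthiness": 0.3, "price": 0.2}  # subjective importance
CUE_ORDER = ["taste", "healthiness", "price"]               # order of inspection


def weighted_additive(a: str, b: str) -> str:
    """Integrate every cue, weighted by importance, and pick the higher score."""
    def score(dish: str) -> float:
        return sum(WEIGHTS[cue] * DISHES[dish][cue] for cue in WEIGHTS)
    return a if score(a) >= score(b) else b


def lexicographic(a: str, b: str) -> str:
    """Inspect cues one at a time; stop at the first cue that discriminates."""
    for cue in CUE_ORDER:
        if DISHES[a][cue] != DISHES[b][cue]:
            return a if DISHES[a][cue] > DISHES[b][cue] else b
    return a  # no cue discriminates; default to the first option


if __name__ == "__main__":
    print("weighted additive picks:", weighted_additive("pasta", "salad"))
    print("lexicographic picks:", lexicographic("pasta", "salad"))
```

The lexicographic rule settles the choice after inspecting a single discriminating cue, whereas the weighted-additive rule requires every value and weight; that difference in the amount of information consulted is exactly what process-tracing data such as MouselabWeb acquisitions can pick up.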

Flashlight is an open-source process-tracing tool that records mouse movements in real time during an information search task (Schulte-Mecklenbeck, Murphy, & Hutzler, 2011). Using this tool, acquisition behavior and visual attention can be recorded in an unobtrusive way with a wide variety of different stimuli. Because of the structure of the stimuli in Flashlight, information acquisition behavior can be measured much as it is with eye tracking, but unlike eye-tracking systems, Flashlight can be implemented without any special equipment. The motivation for developing a new process-tracing tool comes from experience with existing process-tracing methods and their limitations. Other existing process-tracing tools restrict the structure of information (often to a rigid matrix similar to an information board), require a fixed and confined laboratory setup, and need specialized hardware and software that are expensive to purchase and operate. Flashlight solves these issues by providing a free, open-source, adaptable software package that works via a web browser on any Internet-connected personal computer. Moreover, the researcher has great flexibility in how stimuli are constructed and presented, and Flashlight also enables easy access to a large number of participants through Internet-based experiments.

The aim of this article is to evaluate the contribution of process-tracing data to the development and testing of models of judgment and decision making (JDM). We draw on our experience of editing the "Handbook of Process Tracing Methods for Decision Research," recently published in the SJDM series. After a brief introduction, we first describe classic process-tracing methods (thinking aloud, Mouselab, eye tracking). Then we present a series of examples of how each of these techniques has made important contributions to the development and testing of process models of JDM. We discuss the issue of the large data volumes that result from process tracing and remedies for handling them. Finally, we argue for the importance of formulating process hypotheses and advocate a multi-method approach that focuses on the cross-validation of findings.

A flashlight enables a person to see part of the world in the dark. As a person directs a flashlight beam to certain places in the environment, it serves as a manifestation of their attention, interest, and focus. In this paper we introduce Flashlight, an open-source (free) web-based software package that can be used to collect continuous and non-obtrusive measures of users' information acquisition behavior. Flashlight offers a cost-effective and rapid way to collect data on how long and how often a participant reviews information in different areas of visual stimuli. It provides the functionality of other open-source process-tracing tools, like MouselabWeb, and adds the capability to present any static visual stimulus. We report results from three different types of stimuli presented with both the Flashlight tool and a traditional eye tracker. We found no differences in simple outcome data (e.g., choices in gambles or performance on algebraic tasks) between the two methods. However, because information acquisition is more involved, task completion takes longer with Flashlight than with an eye-tracking system. Other differences and commonalities between the two recording methods are reported and discussed. Additionally, we provide detailed instructions on the installation and setup of Flashlight, the construction of stimuli, and the analysis of collected data.

We describe WebDiP (Web Decision Processes), an open-source online tool that enables a researcher to track participants while they search for information in a database available through the Internet. After instructions on setup and configuration, a detailed view of WebDiP explains the system's technical features. We also mention other open-source tools that helped in programming WebDiP, running it, and analyzing data gathered with it. We present new approaches for incorporating open-source thinking into the research process and discuss future perspectives for WebDiP.

The focus of this study is the effect of the location (laboratory vs. Web) of experiments on active information search in decision-making tasks. In two experiments, participants were confronted with two different search method versions (list vs. keyword) for acquiring information about a task from a database. The amount and type of information gathered and the time required for task completion were measured. In Experiment 1, significantly more information was searched for in the laboratory than on the Web when the list version was employed, whereas there was no difference between locations in the keyword version. In Experiment 2, the participants were assigned randomly to the Web or the laboratory condition. The results of Experiment 1 were replicated. Whereas location (and the presence or absence of an experimenter) had an effect on the absolute amount of information gathered in both experiments, the relative distribution and type of information items did not differ.

This paper addresses the general issue of whether, or under what conditions, the practice of investigating human decision making in hypothetical choice situations is warranted. A particularly relevant factor that affects the match between real and hypothetical decisions is the importance of a decision's consequences. In the literature, experimental gambles tend to confound the reality of the decision situation with the size of the payoffs: hypothetical decisions tend to offer large payoffs, and real decisions tend to offer only small payoffs. Using the well-known framing effect (a tendency toward risk aversion for gains and risk seeking for losses), we find that the framing effect depends on payoff size but that hypothetical choices match real choices for small as well as large payoffs. These results appear paradoxical unless the size of the incentive is clearly distinguished from the reality status of the decision (real versus hypothetical). Since the field lacks a general theory of when hypothetical decisions match real decisions, the discussion presents an outline for developing such a theory.
