Professor priming – or not

This was my first contribution to a Registered Replication Report (RRR). Being one of 40 participating labs was an interesting exercise – it might seem straightforward to run the same study in different labs, but we learned that small things like ü, ä and ö can generate a huge amount of problems and work (read this if you are into that kind of thing).

Here is one of the central results:

So overall not a lot of action … our lab was actually the one with the largest effect size (in the predicted direction).

Here is the abstract of the whole paper and here is the Commentary by Ap Dijksterhuis. Naturally, he sees things a bit differently …

Dijksterhuis and van Knippenberg (1998) reported that participants primed with an intelligent category (“professor”) subsequently performed 13.1% better on a trivia test than participants primed with an unintelligent category (“soccer hooligans”). Two unpublished replications of this study by the original authors, designed to verify the appropriate testing procedures, observed a smaller difference between conditions (2-3%) as well as a gender difference: men showed the effect (9.3% and 7.6%) but women did not (0.3% and -0.3%). The procedure used in those replications served as the basis for this multi-lab Registered Replication Report (RRR). A total of 40 laboratories collected data for this project, with 23 laboratories meeting all inclusion criteria. Here we report the meta-analytic result of those 23 direct replications (total N = 4,493) of the updated version of the original study, examining the difference between priming with professor and hooligan on a 30-item general knowledge trivia task (a supplementary analysis reports results with all 40 labs, N = 6,454). We observed no overall difference in trivia performance between participants primed with professor and those primed with hooligan (0.14%) and no moderation by gender.
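The paper reports a meta-analytic pooling of the 23 per-lab effects. As a rough illustration of how such a pooled estimate can be computed, here is a minimal DerSimonian-Laird random-effects sketch in Python. The function name and the numbers fed into it are hypothetical placeholders for illustration only, not the RRR's actual data or analysis code.

```python
def random_effects_pool(effects, variances):
    """Pool per-lab effect estimates with a DerSimonian-Laird
    random-effects meta-analysis.

    effects:   per-lab effect estimates (e.g., % difference
               professor minus hooligan)
    variances: per-lab sampling variances
    Returns (pooled_effect, tau2), where tau2 is the estimated
    between-lab variance.
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fe = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)

    # Cochran's Q heterogeneity statistic and DL estimate of tau^2
    q = sum(wi * (ei - fe) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights add tau^2 to each lab's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Hypothetical per-lab inputs (NOT the RRR's data)
effects = [0.5, -0.3, 0.2, 0.1, -0.4]
variances = [0.04, 0.05, 0.03, 0.06, 0.04]
pooled, tau2 = random_effects_pool(effects, variances)
print(pooled, tau2)
```

With many labs and a true effect near zero, the pooled estimate shrinks toward zero the way the reported 0.14% does; the between-lab variance tau² indicates how much labs genuinely differ beyond sampling noise.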

Growing up to be old

Some papers have somewhat weird starting points – this one had an awesome starting point – Lake Louise (Canada):


In a little suite we (Joe Johnson, Ulf Böckenholt, Dan Goldstein, Jay Russo, Nikki Sullivan, Martijn Willemsen) sat down during a conference called the ‘Choice Symposium’ and started working on an overview paper about the history and current status of different process tracing methods. One central result (why can’t all papers be like that?) is the figure below, where we try to locate many process tracing methods on two dimensions: temporal resolution and distortion risk (i.e., how fast can a method measure a process, and how destructive is this measurement?).

Schulte-Mecklenbeck, M., Johnson, J.G., Böckenholt, U., Goldstein, D., Russo, J., Sullivan, N., & Willemsen, M. (in press). Process tracing methods in decision making: On growing up in the 70s. Current Directions in Psychological Science.

Ah – everybody was trying to find a path all the time:

Something about reverse inference

Often, when we run process tracing studies (e.g., eye-tracking, mouse-tracking, thinking aloud) we talk about cognitive processes (things we can’t observe) as if they were directly observable. This is pretty weird – which becomes obvious when looking at the data from the paper below. In this paper we simply instructed participants to follow a strategy when making choices between risky gamble problems. Taking the example of fixation duration, we see that there is surprisingly little difference between calculating an expected value, using a heuristic (the priority heuristic) and just making decisions without instructions (no instruction) … maybe we should rethink our mapping of observations to cognitive processes a bit?

Here is the paper:

Schulte-Mecklenbeck, M., Kühberger, A., Gagl, S., & Hutzler, F. (in press). Inducing thought processes: Bringing process measures and cognitive processes closer together. Journal of Behavioral Decision Making. [ PDF ]

Abstract:
The challenge in inferring cognitive processes from observational data is to correctly align overt behavior with its covert cognitive process. To improve our understanding of the overt–covert mapping in the domain of decision making, we collected eye-movement data during decisions between gamble-problems. Participants were either free to choose or instructed to use a specific choice strategy (maximizing expected value or a choice heuristic). We found large differences in looking patterns between free and instructed choices. Looking patterns provided no support for the common assumption that attention is equally distributed between outcomes and probabilities, even when participants were instructed to maximize expected value. Eye-movement data are to some extent ambiguous with respect to underlying cognitive processes.

Everything is fucked …

This syllabus of an (obviously) awesome class has a ton of good reads:

Everything is fucked: The syllabus

by Sanjay Srivastava

I would have two additions:

  1. A multi-lab replication project on ego-depletion (Hagger & Chatzisarantis, 2016)
  2. And the response from Roy Baumeister and Kathleen D. Vohs

It’s a really good statement of how f… up things are (in addition to all the other good examples above) …

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” – Max Planck

New paper on psychodiagnosis and eye-tracking

Cilia Witteman and Nanon Spaanjaars (my Dutch connection) worked with me on a piece on whether psychodiagnosticians improve over time (they don’t) in their ability to assign symptoms to DSM categories. This turned out to be a pretty cool paper combining eye-tracking data with a practical and, hopefully, relevant question.

Schulte-Mecklenbeck, M., Spaanjaars, N.L., & Witteman, C.L.M. (in press). The (in)visibility of psychodiagnosticians’ expertise. Journal of Behavioral Decision Making. http://dx.doi.org/10.1002/bdm.1925

Abstract

This study investigates decision making in mental health care. Specifically, it compares the diagnostic decision outcomes (i.e., the quality of diagnoses) and the diagnostic decision process (i.e., pre-decisional information acquisition patterns) of novice and experienced clinical psychologists. Participants’ eye movements were recorded while they completed diagnostic tasks, classifying mental disorders. In line with previous research, our findings indicate that diagnosticians’ performance is not related to their clinical experience. Eye-tracking data provide corroborative evidence for this result from the process perspective: experience does not predict changes in cue inspection patterns. For future research into expertise in this domain, it is advisable to track individual differences between clinicians rather than study differences on the group level.

About illusions

Andrew Gelman talked about a really old paper I did together with Anton Kühberger ages ago. It was actually the first paper, and the first ‘real’ scientific project, I was involved in.

It generated quite the buzz over its 20-year lifespan and was cited a whopping 13 times (stats look good without a y-axis) …


Going back to it, I was happy to see that we already talked about replication back then (though we were very reluctant to push that button harder – we would not have gotten through the reviews, I guess) … Things have changed.