Growing up to be old

Some papers have somewhat weird starting points – this one had an awesome starting point – Lake Louise (Canada):

In a little suite we (Joe Johnson, Ulf Böckenholt, Dan Goldstein, Jay Russo, Nikki Sullivan, Martijn Willemsen) sat down during a conference called the ‘Choice Symposium‘ and started working on an overview paper about the history and current status of different process tracing methods. One central result (why can’t all papers be like that?) is the figure below, where we try to locate many process tracing methods on two dimensions: temporal resolution and distortion risk (i.e., how quickly a method can measure a process and how strongly the measurement itself distorts that process).

Schulte-Mecklenbeck, M., Johnson, J.G., Böckenholt, U., Goldstein, D., Russo, J., Sullivan, N., & Willemsen, M. (in press). Process tracing methods in decision making: On growing up in the 70s. Current Directions in Psychological Science.

Ah – everybody was trying to find a path all the time:




Something about reverse inference

Often, when we run process tracing studies (e.g., eye-tracking, mouse-tracking, thinking-aloud) we talk about cognitive processes (things we can’t observe) as if they were directly observable. This is pretty weird – which becomes obvious when looking at the data from the paper below. In this paper we simply instructed participants to follow a strategy when making choices between risky gamble problems. Taking fixation duration as an example, we see surprisingly little difference between calculating an expected value, using a heuristic (the priority heuristic), and just making decisions without instructions (no instruction) … maybe we should rethink our mapping of observations to cognitive processes a bit?

Here is the paper:

Schulte-Mecklenbeck, M., Kühberger, A., Gagl, S., & Hutzler, F. (in press). Inducing thought processes: Bringing process measures and cognitive processes closer together. Journal of Behavioral Decision Making. [ PDF ]


The challenge in inferring cognitive processes from observational data is to correctly align overt behavior with its covert cognitive process. To improve our understanding of the overt–covert mapping in the domain of decision making, we collected eye-movement data during decisions between gamble-problems. Participants were either free to choose or instructed to use a specific choice strategy (maximizing expected value or a choice heuristic). We found large differences in looking patterns between free and instructed choices. Looking patterns provided no support for the common assumption that attention is equally distributed between outcomes and probabilities, even when participants were instructed to maximize expected value. Eye-movement data are to some extent ambiguous with respect to underlying cognitive processes.

Eye-Tracking with N > 1

This is one of the fastest papers I have ever written. It was a great collaboration with Tomás Lejarraga from the Universitat de les Illes Balears. Why was it great? Because it is one of the rare cases (at least in my academic life) where everyone involved in a project contributed equally and quickly. Often, the weight of a contribution lies with one person, which slows things down – with Tomás this was different – we often sat in front of one computer writing together (I had never done this before and thought it would not work). Surprisingly, this collaborative writing worked out very well, and we had the skeleton of the paper within an afternoon. This was followed by many hours of tuning and taking turns – but in principle we wrote the most important parts together – which was pretty cool.

Even cooler – you can do eye-tracking in groups, using our code.

Here is the [PDF] and abstract:

The recent introduction of inexpensive eye-trackers has opened up a wealth of opportunities for researchers to study attention in interactive tasks. No software package was previously available to help researchers exploit those opportunities. We created “the pyeTribe”, a software package that offers, among other things, the following features: First, a communication platform between many eye-trackers to allow simultaneous recording of multiple participants. Second, the simultaneous calibration of multiple eye-trackers without the experimenter’s supervision. Third, data collection restricted to periods of interest, thus reducing the volume of data and easing analysis. We used a standard economic game (the public goods game) to examine data quality and demonstrate the potential of our software package. Moreover, we conducted a modeling analysis, which illustrates how combining process and behavioral data can improve models of human decision making behavior in social situations. Our software is open source and can thus be used and improved by others.


Everything you believe in is wrong – or is it simply terrorism?

The replication crisis has many interesting effects on how people (and scientists) think about Psychology (and, of course, other fields) … Here is a nice summary of effects that are hard to replicate. Among them ‘classics’ like the power pose or the big brother eyes.

A lot is happening because of these new insights in terms of research (e.g., replication studies) and communication (e.g., Fritz Strack on Facebook).

And then this: Susan Fiske in an upcoming piece in the APS Observer … I am really struggling with this rhetoric – Daniel Lakens to the rescue 🙂

Ah – and of course Gelman.

Everything is fucked …

This syllabus of an (obviously) awesome class has a ton of good reads:

Everything is fucked: The syllabus

by Sanjay Srivastava

I would have two additions:

  1. A multi-lab replication project on ego-depletion (Hagger & Chatzisarantis, 2016)
  2. And the response from Roy Baumeister and Kathleen D. Vohs

It’s a really good statement of how f… up things are (in addition to all the other good examples above) …

“A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” – Max Planck


The exams package

I gave the R package exams a shot for my decision making lecture. Here is what it does:

“Automatic generation of exams based on exercises in Sweave (R/LaTeX) or R/Markdown format, including multiple-choice questions and arithmetic problems. Exams can be produced in various formats, including PDF, HTML, Moodle XML, QTI 1.2 (for OLAT/OpenOLAT), QTI 2.1, ARSnova, and TCExam. In addition to fully customizable PDF exams, a standardized PDF format is provided that can be printed, scanned, and automatically evaluated.”

After some fiddling and help from one of the authors (the incredibly nice Achim Zeileis, Uni Innsbruck) I got the following setup going:

  • pool of ~ 100 questions in .Rmd format (all multiple choice, 3-6 answer options) grouped into lectures
  • sampling out of the pool (e.g., 5 questions out of each lecture)
  • random order of questions in each version of the exam (while keeping the lecture order, which I think is useful to give students more structure to work from)
  • random order of the answers for each question
  • exam with the correct answers

There are three parts:

  1. questions[] defining the answers to a question
  2. solutions[] defining the correct answers
  3. in LaTeX the actual question

All of this information goes into an .Rmd file.
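As a rough sketch of what such an exercise file can look like (the question content, answer options, and exercise name below are made up; the structure follows the exams package’s R/Markdown exercise format):

````markdown
```{r, echo = FALSE, results = "hide"}
# answer options and their truth values (made-up content)
questions <- c("Transitivity", "Loss aversion", "Anchoring", "Framing")
solutions <- c(TRUE, FALSE, FALSE, FALSE)
```

Question
========
Which of the following is an axiom of expected utility theory?

```{r, echo = FALSE, results = "asis"}
exams::answerlist(questions, markup = "markdown")
```

Meta-information
================
extype: mchoice
exsolution: 1000
exname: eu_axiom
```` 

The `exsolution` string encodes which answer options are correct (here: only the first of four), which is what the package uses to build the solution exam.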

Once this is done, one has to define the questions to be included (the pool) and set the details for the selection process:

sol <- exams2pdf(myexam,
  n = 2,                                # number of exam versions to generate
  nsamp = 5,                            # questions sampled from each block
  dir = odir,                           # output directory
  template = c("my_exam", "solution"),  # one PDF per template (exam + solutions)
  encoding = "UTF-8",
  header = list(Date = "10.06.2016"))

This code would give me 2 versions of the exam, each with a sample of 5 questions out of each block of questions.
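For reference, the exercise pool (`myexam` above) is simply a list with one vector of exercise files per lecture block; the file names below are hypothetical:

```r
# hypothetical exercise file names, grouped by lecture block;
# nsamp = 5 then draws 5 exercises from each block
myexam <- list(
  c("heuristics_1.Rmd", "heuristics_2.Rmd", "heuristics_3.Rmd",
    "heuristics_4.Rmd", "heuristics_5.Rmd", "heuristics_6.Rmd"),
  c("risk_1.Rmd", "risk_2.Rmd", "risk_3.Rmd",
    "risk_4.Rmd", "risk_5.Rmd", "risk_6.Rmd")
)
```

Each block needs at least `nsamp` exercises, otherwise the sampling step has nothing to draw from.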

Pretty awesome (after some setup work).

Thanks Achim et al. !!


Three weeks without email

I spend a lot of time writing and answering email. According to Timing, email is the third most time-consuming activity on my computer (although I am using three computers and can check this on only one of them – #timing, please let us link computers for an overall analysis) … anyway – back to no email – as a holiday treat I decided to shut down all my email accounts 5 days before Dec. 24th and promised myself not to touch them until Jan. 11th. It turns out that I will fall one day short of this plan. Nevertheless, I am quite happy with the result and the positive effects of this email absence. Needless to say, reading email during vacation brings you back into a working mood (or never lets you out of it); not reading email has had positive side effects before (I did this twice in the last 20 years of ‘doing’ email). Many issues that come up during such a break solve themselves without intervention or can be dealt with quickly within a few hours of being back in the email world.

Well, I will turn on my email accounts now and see how much work has piled up … BRB.

So, 380 emails later – a paper submitted by a co-author, a rejection for a previous submission, a talk accepted, a chapter revised by a co-author – the best part of all this is that dealing with a ton of emails at once is a very quick thing, with a relatively low threshold for simply deleting out-of-date emails or replying quickly to urgent matters. What’s left are some longer replies, which I will write now …

Happy New Year!