Often, when we run process-tracing studies (e.g., eye tracking, mouse tracking, thinking aloud) we talk about cognitive processes (things we cannot observe) as if they were actually and directly observable. This is pretty weird – which becomes obvious when looking at the data from the paper below. In this paper we simply instruct participants to follow a strategy when making choices between risky gambles. Taking the example of fixation duration, we see that there is surprisingly little difference between calculating an expected value, using a heuristic (priority heuristic), and just making decisions without instructions (no instruction) … maybe we should rethink our mapping from observations to cognitive processes a bit?
This is one of the fastest papers I have ever written. It was a great collaboration with Tomás Lejarraga from the Universitat de les Illes Balears. Why was it great? Because it is one of the rare cases (at least in my academic life) where all people involved in a project contribute equally and quickly. Often, the weight of a contribution lies with one person, which slows things down – with Tomás this was different: we were often sitting in front of a computer writing together (I had never done this before and thought it would not work).
Before there was R, there was S. R was modeled on S, a language developed at AT&T Bell Labs starting in 1976 by Rick Becker and John Chambers (and, later, Alan Wilks), along with Doug Dunn, Jean McRae, and Judy Schilling. Here is a talk by Rick Becker telling the story of S. Good stuff!
I gave the R package exams a shot for my decision-making lecture. Here is what it does:

“Automatic generation of exams based on exercises in Sweave (R/LaTeX) or R/Markdown format, including multiple-choice questions and arithmetic problems. Exams can be produced in various formats, including PDF, HTML, Moodle XML, QTI 1.2 (for OLAT/OpenOLAT), QTI 2.1, ARSnova, and TCExam. In addition to fully customizable PDF exams, a standardized PDF format is provided that can be printed, scanned, and automatically evaluated.”
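In practice, the workflow looks roughly like this – a sketch, not a full recipe: the exercise file names below are hypothetical placeholders (you would point to your own Rmd/Rnw exercises), while `exams2pdf()` and `exams2moodle()` are the package's actual export functions. It needs the exams package and a LaTeX installation to actually run.

```r
library("exams")  # provides exams2pdf(), exams2moodle(), exams2html(), ...

## two hypothetical exercise files from my lecture (placeholders)
myexam <- c("question1.Rmd", "question2.Rmd")

## three randomized copies of the exam as PDFs, written to ./exam-out
exams2pdf(myexam, n = 3, dir = "exam-out")

## the same exercises exported as Moodle XML for the e-learning platform
exams2moodle(myexam, n = 3, dir = "exam-out")
```

Because every copy is generated from R code, numbers and answer orders can be randomized per student, which is what makes the scan-and-evaluate pipeline possible.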
The friendly people from RStudio recently started a webinar series with talks on the following topics (among others):
Data wrangling with R and RStudio
The Grammar and Graphics of Data Science (both talks full of dplyr happiness)
RStudio and Shiny
… and many more.
Our friend Dr. Nathaniel D. Phillips also started a cool R course with videos, Shiny apps, and many other goodies.
Here is an excellent Stack Overflow post on how *apply in all its variations can be used.
One of the follow-up answers points to plyr (from demi-R-god Hadley Wickham), which provides a consistent naming convention for all the *apply variations. I like plyr a lot because, like ggplot, it is easy to grasp and it is relatively intuitive to find an answer to even tricky problems.
Here is the translation from *apply to plyr …
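As a rough sketch of that translation (the data here are made up for illustration): plyr names encode input and output type in the first two letters – a = array, l = list, d = data frame – so the right plyr function follows from what goes in and what should come out. The base R calls below run as written; their plyr counterparts are noted in the comments.

```r
## Base R *apply calls with their plyr counterparts alongside.
## plyr naming: first letter = input type, second = output type
## (a = array, l = list, d = data frame).

x <- list(a = 1:3, b = 4:6)
lapply(x, sum)            # plyr: llply(x, sum)    -- list in, list out
sapply(x, sum)            # plyr: laply(x, sum)    -- list in, array out

m <- matrix(1:6, nrow = 2)
apply(m, 1, max)          # plyr: aaply(m, 1, max) -- array in, array out

d <- data.frame(g = c("a", "a", "b"), v = 1:3)
tapply(d$v, d$g, mean)    # plyr: ddply(d, "g", summarise, m = mean(v))
```

The nice part is that, unlike the base functions, the plyr versions behave consistently (e.g., no surprise simplification as with sapply), which is exactly the point of the consistent naming scheme.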
This is mainly a note to self:
There are several style guides for R out there. I particularly like the one from Google and the somewhat lighter version by Hadley (ggplot god).
All of that style-guide thinking started after a question about R workflow … how do we organize large R projects? Hadley (again) favors a Load-Clean-Func-Do approach, which looks something like this:
load.R   # load data
clean.R  # clean data
func.R   # function definitions
do.R     # do the analysis
I had a discussion the other day on the recurring topic of why one should learn R …
I took the list below from R-Bloggers, which argues why grad students should learn R:
R is free, and lets grad students escape the burdens of commercial license costs.
R has really good online documentation, and the community is unparalleled.
The command-line interface is perfect for learning by doing.
R is on the cutting edge, and expanding rapidly.