eCite Digital Repository

The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models

Citation

Dutilh, G and Annis, J and Brown, SD and Cassey, P and Evans, NJ and Grasman, RPPP and Hawkins, GE and Heathcote, A and Holmes, WR and Krypotos, A-M and Kuptiz, CN and Leite, FP and Lerche, V and Lin, Y and Logan, GD and Palmeri, TJ and Starns, JJ and Trueblood, JS and van Maanen, L and van Ravenzwaaij, D and Vandekerckhove, J and Visser, I and Voss, A and White, CN and Wiecki, TV and Rieskamp, J and Donkin, C, The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models, Psychological Review, pp. 1-19. ISSN 0033-295X (2018) [Refereed Article]



Copyright Statement

© 2018 The Authors. The final published version is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) licence: https://creativecommons.org/licenses/by/4.0/. The author manuscript is © American Psychological Association, 2018. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without the authors' permission. The final article is available, upon publication, at: 10.3758/s13423-017-1417-2

DOI: 10.3758/s13423-017-1417-2

Abstract

Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables, including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations, and had to infer what aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect their inferences. Interestingly, many of the simpler approaches yielded inferences as robust and accurate as the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and discuss potential pitfalls when interpreting the output from response time models.
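To illustrate the kind of "simpler approach" the abstract refers to, the sketch below implements the closed-form EZ-diffusion equations (Wagenmakers, van der Maas, & Grasman, 2007), which map three summary statistics of a condition (accuracy, and the mean and variance of correct response times) onto drift rate (ease of processing), boundary separation (response caution), and non-decision time. The function name and the example numbers are illustrative only; this is not necessarily the exact procedure any contributing team used.

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion parameter estimates.

    pc  : proportion correct (must not be exactly 0, 0.5, or 1;
          in practice an edge correction is applied at those values)
    vrt : variance of correct response times (in seconds squared)
    mrt : mean of correct response times (in seconds)
    s   : within-trial noise scaling parameter (0.1 by convention)
    Returns (drift rate v, boundary separation a, non-decision time ter).
    """
    l = math.log(pc / (1.0 - pc))                      # logit of accuracy
    x = l * (l * pc**2 - l * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25     # ease of processing
    a = s**2 * l / v                                   # response caution
    y = -v * a / s**2
    # Mean decision time, subtracted from mean RT to give non-decision time.
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    ter = mrt - mdt
    return v, a, ter

# Hypothetical summary statistics for one condition:
v, a, ter = ez_diffusion(pc=0.8, vrt=0.025, mrt=0.4)
```

Comparing estimates across the two conditions of a data set (e.g., a higher drift rate in one condition) is then the basis for inferring which latent aspect of behavior was manipulated.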

Item Details

Item Type: Refereed Article
Keywords: validity, cognitive modeling, response times, diffusion model, LBA
Research Division: Psychology
Research Group: Cognitive and computational psychology
Research Field: Decision making
Objective Division: Expanding Knowledge
Objective Group: Expanding knowledge
Objective Field: Expanding knowledge in psychology
UTAS Author: Heathcote, A (Professor Andrew Heathcote)
UTAS Author: Lin, Y (Dr Yingru Lin)
ID Code: 124374
Year Published: 2018
Web of Science® Times Cited: 67
Deposited By: Psychology
Deposited On: 2018-02-20
Last Modified: 2018-12-12
Downloads: 34
