University of Tasmania
124374_The Quality of Response Time Data Inference.pdf (294.66 kB)

The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models

journal contribution
posted on 2023-05-19, 16:11, authored by Dutilh, G, Annis, J, Brown, SD, Cassey, P, Evans, NJ, Grasman, RPPP, Hawkins, GE, Heathcote, A, Holmes, WR, Krypotos, A-M, Kupitz, CN, Leite, FP, Lerche, V, Lin, Y, Logan, GD, Palmeri, TJ, Starns, JJ, Trueblood, JS, van Maanen, L, van Ravenzwaaij, D, Vandekerckhove, J, Visser, I, Voss, A, White, CN, Wiecki, TV, Rieskamp, J, Donkin, C
Most data analyses rely on models. To complement statistical models, psychologists have developed cognitive models, which translate observed variables into psychologically interesting constructs. Response time models, in particular, assume that response time and accuracy are the observed expression of latent variables, including (1) ease of processing, (2) response caution, (3) response bias, and (4) non-decision time. Inferences about these psychological factors hinge upon the validity of the models' parameters. Here, we use a blinded, collaborative approach to assess the validity of such model-based inferences. Seventeen teams of researchers analyzed the same 14 data sets. In each of these two-condition data sets, we manipulated properties of participants' behavior in a two-alternative forced choice task. The contributing teams were blind to the manipulations and had to infer which aspect of behavior was changed using their method of choice. The contributors chose to employ a variety of models, estimation methods, and inference procedures. Our results show that, although conclusions were similar across different methods, these "modeler's degrees of freedom" did affect their inferences. Interestingly, many of the simpler approaches yielded inferences that were as robust and accurate as those from the more complex methods. We recommend that, in general, cognitive models become a typical analysis tool for response time data. In particular, we argue that the simpler models and procedures are sufficient for standard experimental designs. We finish by outlining situations in which more complicated models and methods may be necessary, and we discuss potential pitfalls when interpreting the output of response time models.
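To make the four latent variables concrete, below is a minimal, illustrative Python sketch of a drift-diffusion-style random walk; it is not taken from the paper or from any contributing team's code, and all function and parameter names are placeholders chosen for this example. It assumes the common mapping in which ease of processing corresponds to the drift rate, response caution to the boundary separation, response bias to the starting point, and non-decision time to a constant added to the decision time.

    import numpy as np

    def simulate_diffusion(drift, boundary, bias, ndt,
                           n_trials=1000, dt=0.001, noise=1.0, seed=0):
        """Simulate choices and response times from a simple diffusion process.

        drift    : ease of processing (rate of evidence accumulation)
        boundary : response caution (separation between the two decision thresholds)
        bias     : response bias (starting point as a proportion of the boundary, 0-1)
        ndt      : non-decision time in seconds (encoding and motor time)
        """
        rng = np.random.default_rng(seed)
        choices = np.empty(n_trials, dtype=int)
        rts = np.empty(n_trials)
        for i in range(n_trials):
            evidence = bias * boundary           # start between the lower (0) and upper boundary
            t = 0.0
            while 0.0 < evidence < boundary:     # accumulate noisy evidence until a threshold is hit
                evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            choices[i] = int(evidence >= boundary)   # 1 = upper response (e.g., correct), 0 = lower
            rts[i] = t + ndt                         # observed RT = decision time + non-decision time
        return choices, rts

    # Example: one condition with moderate drift, neutral bias, and 0.3 s non-decision time.
    choices, rts = simulate_diffusion(drift=1.0, boundary=1.5, bias=0.5, ndt=0.3, n_trials=200)
    print(f"accuracy = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s")

Fitting such a model inverts this process: from observed choices and response times, one estimates which of the four latent variables differ between conditions, which is exactly the inference the contributing teams were asked to make blind to the manipulations.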

History

Publication title

Psychological Review

Pagination

1-19

ISSN

0033-295X

Department/School

School of Psychological Sciences

Publisher

American Psychological Association

Place of publication

United States

Rights statement

© 2018 The Authors. The final published version is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/. The author manuscript is © American Psychological Association, 2018. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without the authors' permission. The final article is available, upon publication, at: https://doi.org/10.3758/s13423-017-1417-2

Repository Status

  • Open

Socio-economic Objectives

Expanding knowledge in psychology
