Examining Word List Selection and Performance: An Explanatory Item Analysis of the CERAD Word List Learning Test

Publication Type: Journal Article
Year of Publication: Forthcoming
Authors: Goette, W.
Journal: PsyArXiv Preprint
Keywords: CERAD List Learning Test, HCAP, memory performance
Abstract

Objective: To develop and test an explanatory item response theory (IRT) model that examines the effects of properties of both the test (e.g., word order, learning over trials) and the items (e.g., word frequency in English) on the CERAD List Learning Test immediate recall trials.
Methods: Item-level response data from 1050 participants (Mage = 73.74 [SD = 6.89] years; Medu = 13.77 [SD = 2.41] years) in the Harmonized Cognitive Assessment Protocol were used to construct various IRT models. A Bayesian generalized (non-)linear multilevel modeling framework was used to specify Rasch and two-parameter logistic (2PL) IRT models. Leave-one-out cross-validation information criteria and pseudo-Bayesian model averaging were used to compare models. Posterior predictive checks helped validate model performance in predicting observed data. Fixed effects for learning over trials, serial position of words, and nine word properties (obtained from the English Lexicon Project) were modeled for their effects on item properties.
Results: A random-person, random-item 2PL model with an item-specific inter-trial learning effect (i.e., a local dependency effect) provided the best fit of any of the models examined. Of the nine word traits examined, only four had highly probable effects on item difficulty: words became harder to learn with increasing frequency in English, age of acquisition, and concreteness, and with lower body-object integration.
Conclusions: Results support that memory performance depends on more than repetition of words across trials. The finding that word traits affect difficulty and predict learning raises interesting possibilities for test translation, for equating word lists, and for extending test interpretation to more nuanced semantic deficits.
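For readers unfamiliar with the models named in the abstract, the 2PL response function (of which the Rasch model is a special case) can be sketched as below. This is a minimal illustration of the standard 2PL formula, not the paper's Bayesian multilevel implementation; the function name and parameter values are hypothetical.

```python
import math

def p_correct(theta, difficulty, discrimination=1.0):
    """2PL IRT response probability for one person on one item.

    theta: person ability; difficulty: item difficulty (b);
    discrimination: item discrimination (a). With discrimination=1.0
    for every item, the model reduces to the Rasch model.
    """
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A person whose ability equals the item's difficulty recalls it with
# probability 0.5; higher ability raises the recall probability.
p_lo = p_correct(theta=-1.0, difficulty=0.0, discrimination=1.5)
p_hi = p_correct(theta=1.0, difficulty=0.0, discrimination=1.5)
```

In an explanatory IRT model such as the one described above, the item difficulty would itself be regressed on word-level predictors (e.g., frequency, age of acquisition) rather than estimated freely per item.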

DOI: 10.31234/osf.io/y5urf
Citation Key: 11669