Comparing the Relative Fit of Categorical and Dimensional Latent Variable Models Using Consistency Tests
A number of recent studies have used Meehl's (1995) taxometric method to determine empirically whether one should model assessment-related constructs as categories or dimensions. The taxometric method includes multiple data-analytic procedures designed to check the consistency of results. The goal is to differentiate between strong evidence of categorical structure, strong evidence of dimensional structure, and ambiguous evidence that suggests withholding judgment. Many taxometric consistency tests have been proposed, but their use has not been operationalized and studied rigorously. What tests should be performed, how should results be combined, and what thresholds should be applied? We present an approach to consistency testing that builds on prior work demonstrating that parallel analyses of categorical and dimensional comparison data provide an accurate index of the relative fit of competing structural models. Using a large simulation study spanning a wide range of data conditions, we examine many critical elements of this approach. The results provide empirical support for what marks the first rigorous operationalization of consistency testing. We discuss and empirically illustrate guidelines for implementing this approach and suggest avenues for future research to extend the practice of consistency testing to other techniques for modeling latent variables in the realm of psychological assessment.
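The core idea described above — simulating comparison data under each competing structural model and indexing which model's simulations reproduce the observed data more closely — can be sketched in a few lines of Python. This is an illustrative toy version only, not the authors' taxometric implementation: the function `relative_fit_index`, the sorted-score RMSE fit measure, and the assumed categorical generating model (a 50/50 two-group mixture with a 2-SD group separation) are all simplifying assumptions chosen for demonstration.

```python
import numpy as np

def fit_rmse(observed, simulated):
    """RMSE between sorted score distributions (a crude distributional
    fit measure; real taxometric work compares full curve shapes)."""
    return np.sqrt(np.mean((np.sort(observed) - np.sort(simulated)) ** 2))

def relative_fit_index(observed, n_sims=50, seed=0):
    """Toy analogue of a comparison-data relative-fit index in [0, 1].

    Values above .5 indicate the categorical comparison data fit the
    observed data relatively better; values below .5 favor the
    dimensional model. Both comparison populations are matched to the
    observed mean and SD.
    """
    rng = np.random.default_rng(seed)
    n = observed.size
    m, s = observed.mean(), observed.std()
    d = 2.0  # assumed between-group separation in within-group SD units
    # Choose within-group SD so the 50/50 mixture has overall SD = s:
    # Var(mixture) = within**2 * (1 + d**2 / 4)
    within = s / np.sqrt(1 + d ** 2 / 4)
    lo, hi = m - d * within / 2, m + d * within / 2
    rmse_cat, rmse_dim = [], []
    for _ in range(n_sims):
        # Dimensional comparison data: a single normal population.
        dim = rng.normal(m, s, n)
        # Categorical comparison data: 50/50 two-group normal mixture.
        grp = rng.integers(0, 2, n)
        cat = rng.normal(np.where(grp == 0, lo, hi), within)
        rmse_dim.append(fit_rmse(observed, dim))
        rmse_cat.append(fit_rmse(observed, cat))
    rc, rd = np.mean(rmse_cat), np.mean(rmse_dim)
    return rd / (rc + rd)  # > .5 => categorical fits relatively better
```

In use, an observed sample drawn from a two-group mixture should yield a higher index than one drawn from a single normal distribution; ambiguous values near .5 would, in the spirit of consistency testing, suggest withholding judgment rather than forcing a classification.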
Walters, G. D., Marcus, D. K., (2010). Comparing the Relative Fit of Categorical and Dimensional Latent Variable Models Using Consistency Tests. Psychological Assessment, 22(1), 5-21.
Available at: https://aquila.usm.edu/fac_pubs/808