A Reevaluation of Assessment Center Construct Related Validity
Cahoon, M., Bowler, M., & Bowler, J. L. (2012). A Reevaluation of Assessment Center Construct Related Validity. International Journal of Business and Management, 7(9), 3-19. https://doi.org/10.5539/ijbm.v7n9p3
Recent Monte Carlo research (Lance, Woehr, & Meade, 2007) has questioned the primary analytical tool used to
assess the construct-related validity of assessment center post-exercise dimension ratings (PEDRs): a
confirmatory factor analysis of a multitrait-multimethod (MTMM) matrix. By utilizing a hybrid of Monte Carlo
data generation and univariate generalizability theory, we examined three primary sources of variance (i.e.,
persons, dimensions, and exercises) and their interactions in 23 previously published assessment center MTMM
matrices. Overall, the person, dimension, and person by dimension effects accounted for a combined 34.06% of
variance in assessment center PEDRs (16.83%, 4.02%, and 13.21%, respectively). However, the largest single
effect came from the person by exercise interaction (21.83%). Implications and suggestions for future
assessment center research and design are discussed.
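The variance decomposition the abstract describes treats each PEDR as an observation in a fully crossed persons x dimensions x exercises random-effects design and estimates each variance component from ANOVA expected mean squares. The sketch below is an illustrative implementation of that univariate generalizability (G-study) logic, not the authors' actual analysis; the function name and the design layout are assumptions, and the three-way interaction is confounded with residual error because there is one observation per cell.

```python
import numpy as np

def g_study(y):
    """Estimate variance components for a fully crossed p x d x e design
    with one observation per cell (so the p x d x e interaction is
    confounded with residual error). Returns a dict of estimates."""
    P, D, E = y.shape
    grand = y.mean()
    m_p = y.mean(axis=(1, 2))   # person means
    m_d = y.mean(axis=(0, 2))   # dimension means
    m_e = y.mean(axis=(0, 1))   # exercise means
    m_pd = y.mean(axis=2)       # person x dimension cell means
    m_pe = y.mean(axis=1)       # person x exercise cell means
    m_de = y.mean(axis=0)       # dimension x exercise cell means

    # Mean squares from the standard three-way ANOVA decomposition.
    ms_p = D * E * np.sum((m_p - grand) ** 2) / (P - 1)
    ms_d = P * E * np.sum((m_d - grand) ** 2) / (D - 1)
    ms_e = P * D * np.sum((m_e - grand) ** 2) / (E - 1)
    ms_pd = E * np.sum((m_pd - m_p[:, None] - m_d[None, :] + grand) ** 2) \
        / ((P - 1) * (D - 1))
    ms_pe = D * np.sum((m_pe - m_p[:, None] - m_e[None, :] + grand) ** 2) \
        / ((P - 1) * (E - 1))
    ms_de = P * np.sum((m_de - m_d[:, None] - m_e[None, :] + grand) ** 2) \
        / ((D - 1) * (E - 1))
    resid = (y - m_pd[:, :, None] - m_pe[:, None, :] - m_de[None, :, :]
             + m_p[:, None, None] + m_d[None, :, None] + m_e[None, None, :]
             - grand)
    ms_res = np.sum(resid ** 2) / ((P - 1) * (D - 1) * (E - 1))

    # Solve the expected-mean-square equations for each component;
    # negative estimates are truncated at zero, a common convention.
    comp = {
        "residual": ms_res,
        "pd": (ms_pd - ms_res) / E,
        "pe": (ms_pe - ms_res) / D,
        "de": (ms_de - ms_res) / P,
        "p": (ms_p - ms_pd - ms_pe + ms_res) / (D * E),
        "d": (ms_d - ms_pd - ms_de + ms_res) / (P * E),
        "e": (ms_e - ms_pe - ms_de + ms_res) / (P * D),
    }
    return {k: max(v, 0.0) for k, v in comp.items()}
```

Dividing each component by the total of all components and multiplying by 100 gives percentage-of-variance figures of the kind the abstract reports; in that reading, the person by exercise component ("pe") is the analogue of the 21.83% effect highlighted above.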