Recent Monte Carlo research (Lance, Woehr, & Meade, 2007) has questioned the primary analytical tool used to assess the construct-related validity of assessment center post-exercise dimension ratings (PEDRs): confirmatory factor analysis of a multitrait-multimethod (MTMM) matrix. By utilizing a hybrid of Monte Carlo data generation and univariate generalizability theory, we examined three primary sources of variance (i.e., persons, dimensions, and exercises) and their interactions in 23 previously published assessment center MTMM matrices. Overall, the person, dimension, and person by dimension effects accounted for a combined 34.06% of variance in assessment center PEDRs (16.83%, 4.02%, and 13.21%, respectively). However, the largest single effect came from the person by exercise interaction (21.83%). Implications and suggestions for future assessment center research and design are discussed.
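The variance decomposition described above can be illustrated with a short sketch. The following is a hypothetical implementation (not the authors' code, and the generating variances are illustrative assumptions, not the reported estimates): it simulates a fully crossed person x dimension x exercise rating design with one rating per cell and estimates the seven G-theory variance components from the standard expected-mean-squares equations, in which the three-way interaction is confounded with residual error.

```python
# Hypothetical G-study sketch for a fully crossed p x d x e design,
# one observation per cell (random-effects expected mean squares).
import numpy as np

def g_study(Y):
    """Estimate variance components from Y[person, dimension, exercise]."""
    n_p, n_d, n_e = Y.shape
    gm = Y.mean()
    mp = Y.mean(axis=(1, 2))           # person means
    md = Y.mean(axis=(0, 2))           # dimension means
    me = Y.mean(axis=(0, 1))           # exercise means
    mpd, mpe, mde = Y.mean(axis=2), Y.mean(axis=1), Y.mean(axis=0)

    # Mean squares for main effects, two-way interactions, and residual.
    ms = {}
    ms['p'] = n_d * n_e * np.sum((mp - gm) ** 2) / (n_p - 1)
    ms['d'] = n_p * n_e * np.sum((md - gm) ** 2) / (n_d - 1)
    ms['e'] = n_p * n_d * np.sum((me - gm) ** 2) / (n_e - 1)
    ms['pd'] = n_e * np.sum((mpd - mp[:, None] - md[None, :] + gm) ** 2) \
        / ((n_p - 1) * (n_d - 1))
    ms['pe'] = n_d * np.sum((mpe - mp[:, None] - me[None, :] + gm) ** 2) \
        / ((n_p - 1) * (n_e - 1))
    ms['de'] = n_p * np.sum((mde - md[:, None] - me[None, :] + gm) ** 2) \
        / ((n_d - 1) * (n_e - 1))
    resid = (Y - mpd[:, :, None] - mpe[:, None, :] - mde[None, :, :]
             + mp[:, None, None] + md[None, :, None] + me[None, None, :] - gm)
    ms['pde,err'] = np.sum(resid ** 2) / ((n_p - 1) * (n_d - 1) * (n_e - 1))

    # Solve the expected-mean-squares equations for the components.
    v = {'pde,err': ms['pde,err']}
    v['pd'] = (ms['pd'] - ms['pde,err']) / n_e
    v['pe'] = (ms['pe'] - ms['pde,err']) / n_d
    v['de'] = (ms['de'] - ms['pde,err']) / n_p
    v['p'] = (ms['p'] - ms['pd'] - ms['pe'] + ms['pde,err']) / (n_d * n_e)
    v['d'] = (ms['d'] - ms['pd'] - ms['de'] + ms['pde,err']) / (n_p * n_e)
    v['e'] = (ms['e'] - ms['pe'] - ms['de'] + ms['pde,err']) / (n_p * n_d)
    return v

# Monte Carlo check with known (assumed, purely illustrative) components.
rng = np.random.default_rng(0)
n_p, n_d, n_e = 300, 5, 6
Y = (rng.normal(0, 1.0, (n_p, 1, 1))         # person: var 1.00
     + rng.normal(0, 0.5, (1, n_d, 1))       # dimension: var 0.25
     + rng.normal(0, 0.5, (1, 1, n_e))       # exercise: var 0.25
     + rng.normal(0, 0.9, (n_p, n_d, 1))     # person x dimension: var 0.81
     + rng.normal(0, 1.2, (n_p, 1, n_e))     # person x exercise: var 1.44
     + rng.normal(0, 1.0, (n_p, n_d, n_e)))  # residual: var 1.00
comps = g_study(Y)
total = sum(comps.values())
shares = {k: 100 * val / total for k, val in comps.items()}
```

Dividing each recovered component by their sum, as in `shares`, yields percentage-of-variance figures of the kind reported in the abstract (e.g., the person by exercise share).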
A Reevaluation of Assessment Center Construct Related Validity