A Common, Mode-Independent, Approach for Evaluating Interview Quality and Interviewer Performance
Speizer, H., Currivan, D. B., Heman-Ackah, R. K., & Kinsey, S. H. (2010, May). A Common, Mode-Independent, Approach for Evaluating Interview Quality and Interviewer Performance. Presented at AAPOR 2010.
RTI has developed a common, mode-independent system for evaluating interview quality. In-person and telephone interviewers are assessed with similar quality metrics, and the feedback and coaching processes have been standardized. Interviewers can be evaluated in real time (live monitoring) or through audio recordings of the survey interview.
The new system (QUEST) replaces a more variable set of interviewer quality monitoring activities that had been designed somewhat differently for each RTI survey project. QUEST's emphasis on recorded interviewer-respondent interactions provides a richer set of data for monitoring quality and assessing individual performance. The audio recordings serve as direct and concrete feedback for interviewers, who can hear in their own voices the behaviors noted by quality monitors. The recorded survey interactions also provide tangible evidence of survey instrument performance for our clients. These benefits represent immediate opportunities for improving interviewer performance. In the first part of this paper, we describe the challenges of introducing this new system and some measurable gains in interviewer performance.
Another, more overarching goal of establishing this common quality assurance system is to use experience gained across a large number and wide variety of projects to focus our quality improvement efforts. The second part of this paper demonstrates how analysis of these data has influenced the techniques and approaches used to train interviewers, including the wider use of example survey interaction recordings in the training process. The paper presents our strategy for analyzing item-level performance data collected across surveys to develop common guidelines for improving the usability of computer-assisted survey instruments. We also report on experiments designed to develop improved interviewer skill profiles and to make better project assignments based on these profiles.