• Journal Article

Implementing multilingual touch-screen audio-CASI applications

Citation

Cooley, P., Ganapathi, L., & Li, S. (2004). Implementing multilingual touch-screen audio-CASI applications. Computers in Human Behavior, 20(3), 345-356. DOI: 10.1016/S0747-5632(03)00059-1

Abstract

Audio computer-assisted self-interviewing (audio-CASI) technologies have become standard tools for collecting information on sensitive measures. However, while the method is commonly applied in English, it is used far less often in non-English data collection efforts. One reason for the scarcity of non-English audio-CASI applications is the relatively slow conversion of audio-CASI software to the nonstandard fonts required to display some languages (e.g., Russian, Chinese). Every written language requires an alphabet and fonts designed to display the special symbols and notations that language uses. Early versions of PC operating systems, however, offered little flexibility in supporting nonstandard fonts. That is no longer the case with more advanced operating systems such as Windows XP. This paper describes a versatile method for implementing audio-CASI technologies using any defined font, and therefore any alphabet for which a font exists. The method relies on the capabilities of Rich Text Format (RTF) for representing the visual, or screen-display, component of the survey. RTF is supported by a variety of text editors and word-processing packages, including Word and WordPerfect, and can be implemented in any software system with a Rich Text Edit Control (RTEC). We describe a general process that exploits the capabilities of the RTEC to implement multilingual audio-CASI applications efficiently, and demonstrate that process using four distinct languages. (C) 2003 Elsevier Ltd. All rights reserved.
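The abstract's core idea, that RTF can carry any alphabet for which a font exists, rests on two standard RTF features: a font table (`\fonttbl`) naming the fonts to use, and `\uN?` escapes that encode non-ASCII characters as signed 16-bit code points with an ASCII fallback. The sketch below illustrates this mechanism; the helper names (`rtf_escape`, `make_rtf`) and the choice of font are assumptions for illustration, not the authors' actual implementation.

```python
# Illustrative sketch (not the paper's code): generating a minimal RTF
# document whose body can hold text in any script, as displayed by a
# Rich Text Edit Control. Non-ASCII characters use RTF's \uN? escape,
# where N is the signed 16-bit code point and '?' is the fallback
# character for readers that cannot render Unicode.

def rtf_escape(text: str) -> str:
    """Encode a Unicode string as RTF body text."""
    out = []
    for ch in text:
        cp = ord(ch)
        if cp < 128:
            # Backslash-escape RTF's control characters; pass other ASCII through.
            out.append("\\" + ch if ch in "\\{}" else ch)
        else:
            if cp > 32767:            # \u takes a signed 16-bit value
                cp -= 65536
            out.append(f"\\u{cp}?")
    return "".join(out)

def make_rtf(text: str, font: str = "SimSun") -> str:
    """Wrap escaped text in a minimal RTF document with a one-entry font table."""
    return ("{\\rtf1\\ansi{\\fonttbl{\\f0 " + font + ";}}"
            "{\\f0 " + rtf_escape(text) + "}}")

# Survey prompts in two scripts, as on a multilingual audio-CASI screen:
print(make_rtf("Да", font="Arial"))   # Cyrillic
print(make_rtf("你好"))                # Chinese
```

The same generated document can be loaded into any RTF-capable control or word processor, which is what makes the approach font- and language-agnostic.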