Multiple series of data were acquired specifically to relate the number of resolvable cycles across an alphanumeric character to a human observer's ability to read it accurately. This has serious implications for the resolution needed for a surveillance camera to present a ‘‘readable’’ image to a human. We present the theory, based both on an extension of the Ratches/Johnson criteria and on Fourier analysis. The theory is supported by empirical data from user identification of random English letters and Arabic numerals. The results strongly indicate that accurate readability (defined as 90% correctness or better) can be achieved with approximately 2.8 cycles across a block letter. This appears to suggest a lower requirement than that generally accepted for unknown-target identification. However, the two are consistent because of the limited character set: there are only 36 alphanumeric characters, and the observer intuitively possesses this a priori knowledge. Moreover, the ability to read an alphanumeric character is a steep function of resolution, with the transition lying between 2 and 3 cycles. The authors define a new level of discrimination, ‘‘read’’, which lies between the classical ‘‘recognize’’ and ‘‘identify’’. The probability of correct reading can be expressed similarly to that of detection, recognition, and identification by using a subscript, e.g. ‘‘Read90’’.
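The cycle criterion above can be made concrete with a short sketch. The target transfer probability function (TTPF) form and the Nyquist pixel conversion below are standard in the Johnson-criteria literature, not taken from this abstract; the `n50` argument and the 2-pixels-per-cycle assumption are illustrative, not the authors' fitted values.

```python
def ttpf(n_cycles: float, n50: float) -> float:
    """Probability of task success given n_cycles resolvable cycles on target.

    Empirical TTPF form used with the Johnson criteria:
        P = (N/N50)^E / (1 + (N/N50)^E),  with  E = 2.7 + 0.7 * (N/N50),
    where N50 is the cycle count giving 50% task success.
    """
    ratio = n_cycles / n50
    e = 2.7 + 0.7 * ratio
    return ratio**e / (1.0 + ratio**e)


def cycles_to_pixels(n_cycles: float) -> float:
    """Minimum pixels spanning the character, assuming Nyquist sampling
    (2 pixels per resolvable cycle)."""
    return 2.0 * n_cycles


if __name__ == "__main__":
    # At N = N50 the TTPF gives exactly 50% by construction.
    print(ttpf(2.0, 2.0))          # 0.5
    # The abstract's ~2.8 cycles (Read90) across a block letter would
    # correspond to roughly 5.6 pixels of character height under Nyquist.
    print(cycles_to_pixels(2.8))   # 5.6
```

The steepness of the TTPF near N50 is consistent with the abstract's observation that readability rises sharply somewhere between 2 and 3 cycles.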