Rating the strength of scientific evidence: relevance for quality improvement programs
Lohr, K. (2004). Rating the strength of scientific evidence: relevance for quality improvement programs. International Journal for Quality in Health Care, 16(1), 9-18.
Objectives. To summarize an extensive review of systems for grading the quality of research articles and rating the strength of bodies of evidence, and to highlight, for health professionals and decision-makers concerned with quality measurement and improvement, the available 'best practices' tools by which these steps can be accomplished.

Design. Drawing on an extensive review of checklists, questionnaires, and other tools in the field of evidence-based practice, this paper discusses clinical, management, and policy rationales for rating strength of evidence in a quality improvement context, and documents best-practices methods for these tasks.

Results. Of 121 systems reviewed for grading the quality of articles, 19 systems, mostly specific to study design, met a priori scientific standards for grading systematic reviews, randomized controlled trials, observational studies, and diagnostic tests; eight systems (of 40 reviewed) met similar standards for rating the overall strength of evidence. All can be used as is or adapted for particular types of evidence reports or systematic reviews.

Conclusions. Formally grading study quality and rating overall strength of evidence, using sound instruments and procedures, can produce reasonable levels of confidence about the science base for parts of quality improvement programs. With such information, health care professionals and administrators concerned with quality improvement can better understand the level of science (as opposed to clinical consensus or opinion alone) that supports practice guidelines, review criteria, and assessments feeding into quality assurance and improvement programs. New systems are appearing, and research is needed to confirm the conceptual and practical underpinnings of these grading and rating systems, but the need for those developing systematic reviews, practice guidelines, and quality or audit criteria to understand and undertake these steps is becoming increasingly clear.