Key Points:
- Generative AI can make public health communication more efficient, audience-centered, and accessible.
- The rise of AI technology has implications for ethics and transparency in public health communication.
- Several ongoing projects at RTI International are focused on helping clients leverage AI to enhance formative communication research.
Generative AI tools are becoming increasingly valuable, enabling public health communicators to complete tasks that were once time-consuming, expensive, and error-prone more quickly and reliably. As AI capabilities grow, public health professionals must explore how these tools can enhance communication strategies while ensuring ethical and responsible use.
Experts at RTI International are actively leveraging AI to improve formative communication research, tailor health messaging, and enhance visual design. This blog explores ways health communicators can leverage AI, along with considerations for AI use in content development.
Using AI to Improve Formative Communication Research
The use of large language models (LLMs) has revolutionized public health communication. LLMs, like ChatGPT, are AI technologies that can quickly analyze text and generate human-like responses. Users can prompt LLMs with tailored instructions, a process known as prompt engineering.
Formative communication research often involves coding and analyzing large volumes of text, a process that is labor-intensive. With this in mind, we evaluated LLMs’ ability to analyze texts in various public health applications, including
- conducting qualitative coding of open-ended responses to surveys, interviews, focus groups, and public comments;
- conducting literature meta-analyses to identify trends and insights from large datasets; and
- monitoring public discourse on social media to assess responses to health messaging.
We found that AI-assisted analyses can match human coders in accuracy while significantly increasing speed. However, human oversight remains crucial. By refining prompt engineering techniques and incorporating systematic reviews, we can mitigate accuracy concerns and maximize the reliability of AI-generated insights.
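To make this workflow concrete, the sketch below shows one way a codebook-style prompt might be applied to open-ended responses. It is a minimal illustration, assuming the OpenAI Python client; the model name, categories, and sample response are placeholders rather than the exact setup we use, and human coders should still review a sample of the assigned codes.

```python
# Minimal sketch: prompting an LLM to apply a predefined codebook to
# open-ended survey responses. The codebook, model name, and sample
# response are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CODEBOOK = ["vaccine access", "side-effect concerns", "trust in providers", "other"]

def code_response(text: str) -> str:
    """Ask the model to assign exactly one codebook category to a response."""
    prompt = (
        "You are assisting with qualitative coding of survey responses.\n"
        f"Categories: {', '.join(CODEBOOK)}\n"
        "Return only the single best-fitting category.\n\n"
        f"Response: {text}"
    )
    result = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output aids intercoder comparison
    )
    return result.choices[0].message.content.strip()

# Human coders should spot-check a sample of AI-assigned codes.
print(code_response("I couldn't find an appointment anywhere near my town."))
```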
LLMs have expedited ongoing work to analyze public comments in response to federal policy changes, extract data and information to support meta-analyses, and monitor social media for responses to public health messaging. Additionally, we have used LLMs to develop internal tools like SmartSearch to answer questions based on document collections and help ensure regulatory compliance.
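SmartSearch itself is an internal tool, but the general retrieval-then-answer pattern behind document Q&A can be sketched simply. The example below uses TF-IDF similarity from scikit-learn to stay self-contained; it is not SmartSearch's implementation, and a production system would typically use embeddings and pass the retrieved passages to an LLM to draft a grounded answer.

```python
# Generic retrieval-then-answer pattern for document Q&A (illustrative only).
# Retrieves the passages most similar to a question; an LLM would then draft
# an answer using those passages as context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Policy A requires plain-language summaries for all public materials.",
    "Policy B covers accessibility standards for alternative text.",
    "Policy C describes social media monitoring procedures.",
]

def top_documents(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(documents)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, doc_matrix).ravel()
    ranked = scores.argsort()[::-1][:k]
    return [documents[i] for i in ranked]

# The retrieved passages would then be supplied to an LLM as answer context.
print(top_documents("What are the rules for alt text?"))
```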
We have identified several areas for further exploration, including applying these LLM-assisted analysis methods to new contexts and document types in formative research.
Using AI to Develop Tailored Public Health Content
Public health messaging must be clear, accessible, and audience-specific. Generative AI streamlines content creation, allowing for efficient adaptation to different audiences and their preferences.
For example, plain language is an integral part of public health communication. Creating clear and accessible content for the general public often takes significant expertise and time. Rewriting content for different audiences requires adapting words, sentence length, and formatting for the intended reader. Crafting short summaries (like the one at the top of this post) is another way to optimize content for readers.
Our multidisciplinary team of AI and plain language experts assessed generative AI’s ability to tailor content for different audiences by prompting an LLM (ChatGPT) to apply plain language principles to a wide variety of content—including web pages, technical reports, and manuscripts—for three primary audiences: people with low literacy, the public, and health care providers.
After scoring AI’s outputs for reading level, accuracy, and tone, we found considerable potential in generative AI’s ability to condense large volumes of content, organize it into more readable sections, and rewrite it in active voice. Across materials, ChatGPT successfully decreased reading level and maintained original meaning. However, findings suggested that plain language edits continue to need human review for consistency, accuracy, and audience-centered language.
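As an illustration of this tailor-then-check loop, the sketch below prompts an LLM for a plain language rewrite and then estimates the reading level of the result before human review. It assumes the OpenAI Python client and the textstat package; the model name and grade-level target are illustrative, not our study settings.

```python
# Sketch: prompt an LLM for a plain-language rewrite, then score the
# result's reading level to flag drafts that need more human editing.
import textstat
from openai import OpenAI

client = OpenAI()

def plain_language_rewrite(text: str, audience: str = "the general public") -> str:
    prompt = (
        f"Rewrite the following text in plain language for {audience}. "
        "Use short sentences, common words, and active voice. "
        "Do not change the meaning.\n\n" + text
    )
    result = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return result.choices[0].message.content

original = "Vaccination confers immunological protection against subsequent infection."
rewrite = plain_language_rewrite(original)

# Flag rewrites that still exceed a target reading level for human revision.
grade = textstat.flesch_kincaid_grade(rewrite)
print(f"Estimated grade level: {grade:.1f}")
if grade > 8:
    print("Above target reading level; send back for human editing.")
```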
Overall, we have found that the use of AI can help communicators more efficiently tailor content for specific audiences when used as a tool, rather than a replacement, for public health professionals.
Using AI to Enhance Visual Design
Public health communicators have a critical need for audience-centered, accessible, and eye-catching design. Stock images have limitations in terms of varying skin tones, ethnic and cultural representation, and body types. Generative AI models like StyleGAN can fill these gaps in representation by creating images of faces that are indistinguishable from photographs of real people.
Additionally, generative AI can recreate public health settings that are otherwise difficult to capture. Settings such as homes, schools, and medical facilities can be generated without the expensive and inefficient logistics of staging and photographing each scenario and location. Other potential depictions include time-lapse imagery and environmental impact visualizations that users can quickly adapt for public health campaigns.
Once this imagery is created, however, accessibility remains essential. AI tools can quickly generate draft alternative text and closed captioning, reducing the time required to deliver high-quality, audience-centered products for all consumers.
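For example, a multimodal model can draft alternative text for an image in a single call, as in the sketch below. It assumes the OpenAI Python client and a model that accepts image input; the image URL is a placeholder, and drafts still require human review for accuracy and tone.

```python
# Minimal sketch: drafting alternative text for an image with a multimodal
# model. The image URL is a placeholder; outputs need human review.
from openai import OpenAI

client = OpenAI()

def draft_alt_text(image_url: str) -> str:
    result = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt text (under 125 characters) describing "
                         "this public health image for screen-reader users."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return result.choices[0].message.content.strip()

print(draft_alt_text("https://example.org/clinic-photo.jpg"))
```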
RTI is actively exploring the ways that generative AI tools (Adobe, DALL-E) can be leveraged to facilitate visual storytelling. Our experiments involve the use of AI to create step-by-step health care training scenarios, benefiting from these tools’ ability to create hyper-realistic imagery and communicate comprehensive health narratives.
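A minimal example of this kind of image generation is sketched below, assuming the OpenAI Python client and DALL-E 3; the prompt and parameters are placeholders, and generated imagery still needs review for accuracy and representation before use in a campaign.

```python
# Illustrative sketch: generating a training-scenario image with an image
# model. The prompt and parameters are placeholders.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt=("A nurse demonstrating proper hand hygiene at a sink in a community "
            "clinic, realistic photographic style, diverse staff in the background"),
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # link to the generated image for review
```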
Another recent application involves our use of predictive eye-tracking tools. These programs scan text and images to make recommendations for improved readability and layouts. AI enables us to maximize the impact of public health communication products with these additional checks.
Finally, generative AI can be used to enhance persona profiles. Personas are a crucial tool in the content development process that depict target audiences for public health campaigns. By combining audience research with AI tools that can generate multiple realistic personas, we can create dynamic digital personas that strengthen health communication materials.
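One illustrative approach is to ask an LLM to turn audience research notes into a structured persona, as in the sketch below. It assumes the OpenAI Python client; the fields and notes are invented for illustration, and real personas should be grounded in actual audience research.

```python
# Sketch: generating a structured audience persona from research notes.
# The notes and persona fields are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()

research_notes = (
    "Rural caregivers aged 45-60; limited broadband; rely on local radio and "
    "Facebook groups; skeptical of jargon; trust family physicians."
)

prompt = (
    "Using these audience research notes, draft one persona as JSON with the "
    "keys: name, age, channels, barriers, motivators, preferred_tone.\n\n"
    f"Notes: {research_notes}"
)

result = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # ask for valid JSON back
)
persona = json.loads(result.choices[0].message.content)
print(persona["motivators"])
```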
AI Considerations
Generative AI tools are changing the landscape of public health communication, but the following must be considered when implementing generative AI:
Intellectual Property and Copyright Infringement
Many AI tools are trained on copyrighted materials (e.g., Stable Diffusion and Midjourney) or do not disclose the sources of their training data (e.g., DALL-E and Shutterstock). Recent rulings have protected certain tools from copyright disputes, but responsible commercial use of generative AI remains imperative.
Several tools are available for commercial use that comply with copyright standards, including subscription-based options from Getty Images and Adobe.
Transparency and Trust
Communicators must also consider consumers’ reactions to AI-generated content. Additional exploration and research are needed to determine the public’s trust in products that employ these tools. It is also important to consider what transparency means for AI use in public health communication and how public health communicators can lead in both innovation and the responsible use of AI.
The Future of AI in Public Health Communication
As AI continues to evolve, public health organizations must establish guidelines for ethical AI use, invest in structured evaluation processes, and prioritize human oversight to maximize benefits while minimizing risks. Given the tremendous opportunities AI tools present, it is important to consider the implications of their use:
- Human review of AI. We know that current generative AI models can produce false information (known as hallucinations) and may lack consistency in their outputs. It is our duty as communicators to review AI-generated products and verify accuracy when using these tools.
- Responsible AI use. Organizations must lead in responsible AI use and prioritize ethics while using these groundbreaking tools.
- Structured evaluation processes. We must develop processes to evaluate the performance of LLMs and other AI tools to inform the use of generative AI. Monitoring outputs will lead to more credible use of tools that have the potential to lower costs and increase efficiency for public health communicators.
By leveraging AI’s transformative capabilities, we empower organizations to create more audience-centered and cost-effective health communication, providing tailored solutions that address unique needs and deliver measurable impact.