Webinar Reflection: “Reimagining Prevention: The Role of Generative Artificial Intelligence”

ISSUP, in partnership with the International Consortium of Universities for Drug Demand Reduction (ICUDDR), recently hosted an engaging and forward-looking webinar titled “Reimagining Prevention: The Role of Generative Artificial Intelligence.” The session explored how rapidly evolving generative AI technologies can support prevention, education, and treatment work within the substance use field.
Our presenter, Brian Klaas, brought both expertise and energy to this important conversation. Brian serves as Assistant Director for Technology at the Johns Hopkins Bloomberg School of Public Health’s Center for Teaching and Learning. He also holds a faculty appointment in the R³ Center for Innovation in Science Education. He teaches graduate-level courses on communications design, data visualisation for non-expert audiences, and the applications of generative AI in public health. Beyond his academic work, Brian leads a team developing AI-enhanced learning tools to support faculty and students, and is a frequent speaker on storytelling and technology at health science conferences across the United States.
The Promise and the Paradox of Generative AI
Brian began by framing the central paradox of generative AI: the delicate balance between its transformative potential and the significant risks it presents. He demonstrated how large language models (LLMs) generate text and images by predicting the next likely word or pixel, learning from vast quantities of human-created data. This process, he explained, enables AI to act as a thinking partner, helping to quickly summarise information, spark new ideas, and support creative problem-solving in prevention work.
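To make the idea of “predicting the next likely word” concrete, here is a deliberately toy sketch in Python. It is not how production LLMs work internally (they use neural networks trained on billions of documents), but it captures the underlying intuition: learn which words tend to follow which from human-written text, then generate by sampling.

```python
# A toy illustration of next-word prediction: real LLMs use neural networks
# over billions of documents; this sketch uses simple bigram counts over a
# tiny invented corpus.
import random
from collections import Counter, defaultdict

corpus = (
    "prevention programmes reduce risk . "
    "prevention programmes build community resilience . "
    "community programmes reduce substance use ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation, one predicted word at a time.
text = ["prevention"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

Scale this idea up by many orders of magnitude, and the same mechanism that completes “prevention programmes…” can draft reports and answer questions, which is also why its output mirrors whatever patterns, good or bad, exist in its training data.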
At the same time, he cautioned that AI outputs are only as sound as the data and intent behind them. AI tools can fabricate information or get facts wrong, reproduce the biases present in the material they were trained on, and consume significant computing power and energy. Generative AI can help people work more efficiently, he noted, but only when it is used thoughtfully, with curiosity and a strong sense of ethics.
Through live examples, Brian showed how AI can quickly summarise research papers, help plan prevention programmes, and suggest ways to communicate with different communities. These demonstrations underscored that AI tools are accessible even to people without a technical background, and that they are meant to support, rather than replace, human knowledge and expertise.
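For readers who want to try this themselves, the snippet below sketches a summarisation request of the kind Brian demonstrated. It assumes the OpenAI Python SDK and an API key; the webinar did not prescribe any particular vendor, model, or prompt, so treat every name here as illustrative.

```python
# A minimal sketch of an AI-assisted summarisation task. Assumptions: the
# OpenAI Python SDK (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; any capable chat model would do.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

abstract = """(paste the abstract of a prevention study here)"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You summarise public-health research for prevention "
                    "practitioners in plain language. Flag any claims that "
                    "need verification against the original paper."},
        {"role": "user",
         "content": f"Summarise this abstract in 3 bullet points:\n{abstract}"},
    ],
)
print(response.choices[0].message.content)
```

As Brian stressed, such output is a starting point: every claim and citation in an AI-generated summary still needs checking against the original paper.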
Questions from the Field: Curiosity Meets Caution
The webinar audience engaged actively, raising thoughtful questions that revealed both enthusiasm and prudence.
One participant asked: “What is the main purpose of generative AI?” Brian described its core purpose as augmenting human cognition, helping us think faster, write better, and explore ideas more widely, while maintaining human judgment at the centre.
Another attendee shared that they use AI to help format journal submissions. Brian affirmed this as a smart use case, encouraging users to harness AI for drafting and editing while, crucially, verifying every citation and claim.
A particularly important question focused on whether these tools draw from evidence-based prevention (EBP) data. Brian clarified that current models do not inherently distinguish between evidence-based and non-evidence-based sources. This makes human oversight critical to ensure interventions remain scientifically sound and ethically safe.
Educators in the audience also asked about having students use AI to generate case studies before verifying their accuracy. Brian endorsed this approach, saying it builds both AI literacy and academic rigour, provided students critically assess accuracy and source integrity. This implies, of course, that students must first be equipped with the skills to do so.
Another question focused on transparency, asking whether social media platforms should clearly label images created by AI. Brian agreed that being open about this is important for helping people understand when content is human-made or AI-generated, which in turn strengthens public trust and information integrity.
Participants also reflected on creativity and independence, asking whether using AI risks diminishing human creativity and critical thinking. Brian responded that AI should be seen as a partner rather than a competitor, a tool that can spark creativity rather than suppress it, provided we stay intentional about maintaining our own critical and creative capacities.
Finally, the discussion touched on perceived productivity. Some attendees cited studies suggesting that users often feel more productive with AI than they truly are. Brian agreed, noting that while AI can speed up tasks, it does not automatically improve quality. He urged users to track and measure actual outcomes to distinguish between productivity gains and illusions of efficiency.
From Theory to Practice: AI in Prevention and Treatment
While the field is still developing, a growing body of research supports the potential for AI and machine learning to enhance prevention and treatment in mental health and substance use care.
The literature (see “Further Reading” below) presents encouraging early evidence that conversational agents can improve access to care and increase engagement, although the quality and consistency of research methods vary. Studies using machine learning also show strong potential for predicting relapse and treatment outcomes but emphasise the importance of transparency and reproducibility. Likewise, digital interventions for alcohol and tobacco reduction demonstrate measurable, though sometimes inconsistent, effects that often depend on user engagement and cultural relevance.
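To ground the relapse-prediction idea, here is an illustrative sketch using synthetic data and scikit-learn. The cited studies used varied algorithms and real clinical datasets; nothing below reproduces their methods, and the features and coefficients are invented purely for demonstration.

```python
# An illustrative sketch of machine-learning relapse prediction, using
# synthetic data and scikit-learn. Features, coefficients, and outcomes are
# all invented; this does not reproduce any cited study's methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=42)
n = 500

# Hypothetical features: days in treatment, prior relapses, support score.
X = np.column_stack([
    rng.integers(1, 365, n),   # days_in_treatment
    rng.integers(0, 5, n),     # prior_relapses
    rng.uniform(0, 10, n),     # social_support_score
])

# Synthetic relapse outcome loosely tied to the features, for demo only.
logit = -0.005 * X[:, 0] + 0.8 * X[:, 1] - 0.3 * X[:, 2]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# As the reviews stress, transparency and external validation matter more
# than raw accuracy; here we report only a simple held-out AUC.
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Even in this toy form, the sketch shows why the literature insists on transparency and reproducibility: the model's predictions are only as trustworthy as the data, validation, and reporting behind them.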
Taken together, current evidence suggests that AI can play a valuable supportive role in prevention science, helping with tasks such as screening, summarising research, tailoring interventions to specific needs, and strengthening programme evaluation. However, the key principle remains the same: technology should enhance and inform evidence-based practice, not replace it.
Moving Forward: Ethics, Literacy, and Collaboration
Drawing from both the presentation and the wider literature, several priorities emerged for prevention professionals and systems leaders:
Prevention (and treatment) professionals can begin by experimenting with AI in low-risk, high-value tasks such as summarising reports, drafting communication materials, or exploring intervention ideas. At the same time, it is critical to invest in AI literacy, training the workforce to recognise bias, verify outputs, and maintain a sceptical but creative stance toward machine-generated information.
Transparency also emerged as a key ethical principle. When AI tools are used in developing communication materials, training resources, or programme plans, disclosure builds trust and allows for accountability. Collaboration between prevention (and treatment) scientists and AI developers was highlighted as essential to ensure that models are trained on relevant, high-quality data, rather than the open internet alone.
Finally, Brian underscored the need for domain-specific datasets, curated repositories of evidence-based interventions and culturally relevant data, to inform future AI models and ensure their outputs reflect prevention science rather than unvetted content. These themes align closely with emerging WHO and CDC guidance on the ethical and responsible use of AI in health systems.
Further Reading
For those interested in exploring the emerging research landscape around artificial intelligence, digital interventions, and public health, the following peer-reviewed studies and policy papers, though by no means exhaustive, provide a foundation. These articles collectively highlight both the opportunities and ethical imperatives of integrating AI into prevention and treatment contexts.
- Lee, S., Yoon, J., Cho, Y., & Chun, J. (2024). A systematic review of chatbot-assisted interventions for substance use. Frontiers in Psychiatry, 15, 1456689. https://doi.org/10.3389/fpsyt.2024.1456689
This PRISMA-based review analysed 28 studies using chatbots for prevention, assessment, and treatment of substance use. Chatbot interventions showed measurable benefits in smoking cessation and substance-use reduction, with personalised, context-sensitive design emerging as a key success factor.
- Johansson, M., Romero, D., Jakobson, M., Heinemans, N., & Lindner, P. (2024). Digital interventions targeting excessive substance use and substance use disorders: A comprehensive and systematic scoping review and bibliometric analysis. Frontiers in Psychiatry, 15, 1233888. https://doi.org/10.3389/fpsyt.2024.1233888
Mapping more than 3,000 studies published between 2015 and 2022, this review identifies rapid growth and promising results for digital and web-based prevention programmes. It also points to key gaps such as limited cost-effectiveness data, inconsistent terminology, and sparse exploration of AI-driven interventions.
- de Mattos, B. P., Mattjie, C., & Ravazio, R. (2024). Craving for a robust methodology: A systematic review of machine learning algorithms on substance-use disorders treatment outcomes. International Journal of Mental Health and Addiction. https://doi.org/10.1007/s11469-024-01403-z
Reviewing 28 studies on machine-learning applications for addiction treatment, the authors found encouraging evidence for AI-assisted relapse prediction and treatment adherence models. However, they emphasise the urgent need for standardised methods, transparency, and external validation to ensure clinical reliability.
- Centers for Disease Control and Prevention. (2025). Vision for the use of artificial intelligence in public health. U.S. Department of Health and Human Services. https://www.cdc.gov/data-modernization/php/ai
The CDC outlines a national strategy to responsibly integrate AI into surveillance, outbreak detection, and program operations. The vision stresses workforce capacity-building, ethical governance, and partnerships to ensure AI strengthens equitable, data-driven public-health action.
- Arora, A., Alderman, J. E., Palmer, J., Ganapathi, S., Laws, E., McCradden, M. D., Oakden-Rayner, L., et al. (2023). The value of standards for health datasets in artificial intelligence-based applications. Nature Medicine, 29(11), 2929–2938. https://doi.org/10.1038/s41591-023-02608-w
This Nature Medicine article analyses global frameworks for ethical dataset curation and transparency in health-related AI. It calls for consensus-based standards to reduce algorithmic bias and promote equitable access to high-quality data, principles that are foundational for trustworthy AI in healthcare and prevention.
Closing Reflections
The “Reimagining Prevention” webinar made one point abundantly clear: generative AI is neither a threat nor a cure-all; it is a tool whose value depends entirely on how thoughtfully we use it. When guided by evidence, ethics, and empathy, AI can extend the reach of prevention professionals, amplify creativity, and strengthen the translation of science into practice.
ISSUP and ICUDDR thank Brian Klaas for his insightful and practical presentation, and all participants for their curiosity, caution, and commitment to learning. We look forward to continuing these conversations through future webinars, podcasts, and collaborative projects, as we collectively reimagine the future of prevention in an AI-enabled world.