NIDA International SPR Poster Session: Using an Adapted Delphi Methodology to Gauge Local Consensus on Standards for Evidence in Preventive Interventions in Latin America

This abstract was presented at the 2018 Society for Prevention Research Annual Meeting, held May 29 – June 1, 2018, in Washington, DC, US.

Raúl Perry, Fundación San Carlos de Maipo

Maria Luisa Correa, Fundación San Carlos de Maipo; Camila Astrain, Fundación San Carlos de Maipo; Nicole Eisenberg, University of Washington

Introduction: There is growing interest in Latin America, among policymakers and the general public, in using social programs and preventive interventions that are evidence-based and have been tested and proven effective. Local researchers and practitioners have identified the need to design, implement, and evaluate effective preventive interventions in order to develop, over time, a database or catalog of programs that can be used in a Latin American context. Useful examples of criteria for evidence-based programs exist in the U.S. and Europe; however, their application in Latin America is limited because the design and evaluation of preventive interventions in the region is still in early development and faces greater funding and resource limitations. In this context, we aimed to develop a local set of criteria to identify interventions that meet adequate standards for design, implementation, and evaluation and that are relevant to the Latin American context.

Method: We used an adaptation of the Delphi methodology to seek consensus, among a group of expert Chilean researchers and practitioners, on standards of evidence for ranking preventive interventions. First, we conducted a literature review of existing national and international standards of evidence used to classify preventive programs. Based on this review, we proposed an initial set of criteria. These criteria were then reviewed by a group of 11 experts, who were asked to individually assess each criterion according to its adequacy and pertinence. Expert assessments were collected, anonymized, synthesized, and presented back to the group during a face-to-face meeting (attended by 7 of the original experts). The expert team then discussed and refined the criteria using feedback from their peers, reaching consensus on a list of indicators.

Results: The preliminary list created after the literature review contained 28 criteria, organized into three categories: program design, implementation readiness, and evaluation quality. During the individual assessment stage, experts rated the criteria as mostly relevant and pertinent, assigning them an average value above 7.5 on a scale from 1 to 9. Only the criteria that scored at or above this average were presented back to the panel during the face-to-face stage. All of the experts' comments were synthesized and presented to the panel as well. With this information, the experts reviewed and reformulated the criteria, adding new indicators and removing others by consensual agreement. The result was an updated set of 13 criteria.
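
As a minimal illustration of the screening step described above, the following Python sketch shows how individual expert ratings on the 1–9 scale could be averaged and compared against the 7.5 cutoff before the face-to-face stage. The criterion names and scores are hypothetical, and this is not the authors' analysis code.

from statistics import mean

THRESHOLD = 7.5  # mean rating required for a criterion to advance to the panel stage

# Hypothetical ratings: criterion -> adequacy/pertinence scores (1-9) from the 11 experts
ratings = {
    "Explicit theory of change": [9, 8, 8, 9, 7, 8, 9, 8, 8, 9, 8],
    "Defined target population": [8, 7, 9, 8, 8, 7, 9, 8, 8, 7, 8],
    "Implementation manual available": [7, 6, 7, 8, 6, 7, 7, 6, 8, 7, 6],
}

def advancing_criteria(ratings, threshold=THRESHOLD):
    """Return criteria whose mean expert rating meets or exceeds the threshold."""
    return {
        name: round(mean(scores), 2)
        for name, scores in ratings.items()
        if mean(scores) >= threshold
    }

for name, score in advancing_criteria(ratings).items():
    print(f"{name}: mean rating {score}")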

Conclusions: The study allowed a group of local experts to agree on a set of culturally relevant and useful indicators that can be used to identify programs that meet high standards in terms of their theoretical design, implementation readiness, and evaluation quality. It is an important first step toward creating a local catalog of effective preventive interventions. The availability of such a catalog could help promote the implementation of preventive interventions in the region and foster the use of data and evidence in policy decisions. It may also raise awareness of the need to gather evidence of program effectiveness and help build a culture of evaluation in the region.
