Innovative Methods and Statistics: Methods for Ensuring Effective and Equitable Predictive Analytics

This abstract was presented at the 2018 Society for Prevention Research Annual Meeting, held May 29–June 1, 2018, in Washington, DC, US.

Chris Sharenbroch, NCCD

Numerous empirical approaches have been introduced to develop predictive analytic tools, but the measures used to evaluate these tools have not always been consistent or applicable. The implications of using predictive analytics in practice clearly support the need for well-defined, consistent criteria and methods for evaluating the effectiveness and equity of the resulting tools. This paper and presentation will briefly review the methods used to date in predictive analytics, including actuarial methods, random forests, classification trees, and neural network approaches. Past research suggesting that simpler approaches with less-specific item weights often yield more valid predictive tools will be reviewed, along with common limitations of administrative system data.
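To make the "simpler weights" claim concrete, the following is a minimal sketch, not drawn from the paper itself, comparing a fitted logistic regression against a Burgess-style unit-weighted score on synthetic data. All variable names, the data-generating process, and the use of scikit-learn are illustrative assumptions; the point is only the shape of the comparison, not any specific result.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, n_items = 1200, 8
X = rng.normal(size=(n, n_items))                 # hypothetical risk items
true_w = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.0, 0.0, 0.0])
y = rng.binomial(1, 1 / (1 + np.exp(-(X @ true_w - 1.0))))

# Small development sample, larger holdout: the regime in which finely
# estimated item weights are most prone to overfitting.
X_dev, X_hold, y_dev, y_hold = train_test_split(
    X, y, train_size=150, random_state=0)

# Complex model: item-specific weights estimated on the development sample.
fitted = LogisticRegression().fit(X_dev, y_dev)
auc_fitted = roc_auc_score(y_hold, fitted.predict_proba(X_hold)[:, 1])

# Simple model: Burgess-style unit weights, each item contributing +1, -1,
# or 0 based only on the direction of its estimated association.
unit_w = np.sign(fitted.coef_.ravel())
auc_unit = roc_auc_score(y_hold, X_hold @ unit_w)

print(f"holdout AUC, fitted weights: {auc_fitted:.3f}")
print(f"holdout AUC, unit weights:   {auc_unit:.3f}")
```

In small or noisy development samples, the unit-weighted score will often match or exceed the fitted model on the holdout set, which is the pattern the past research cited above describes.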

Despite differences in the methodological approaches used to develop predictive analytics, common criteria are needed to ensure that results are reliable, valid, equitable, and useful. Methods for ensuring inter-rater reliability, and the reasons for doing so, will be discussed, including why inter-rater reliability is stressed over inter-item reliability. The importance of predictive validity and methods for testing it will be outlined, followed by a parallel outline of measures and methods for assessing equity in findings. Sensitivity and specificity are ideal measures of validity when the decisions informed by predictive analytics are dichotomous (yes-or-no service decisions), but they have limited application when service decisions are multifaceted, with many levels of service provided. Nuances of equity will be addressed, including the importance of assessing equity across the cultural groups represented in the population, geographic regions, and any other relevant subgroups. Lastly, the paper and presentation will outline how to ensure the utility of predictive analytics for practitioners and how to monitor the use and accuracy of predictive analytic tools in practice settings to guard against unintended consequences. Examples from prevention programs, child protective services, and juvenile justice agencies will be referenced to demonstrate the need for these common evaluation approaches. The final discussion will summarize the need and rationale for common criteria and measures, defined by prevention science theory, for evaluating predictive analytic tools used in practice.
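As a minimal sketch of the evaluation criteria discussed above, the code below computes Cohen's kappa for inter-rater reliability, sensitivity and specificity for a dichotomized score, and then repeats the same validity statistics within subgroups as a basic equity audit. The data, the cutoff, and the group labels are all hypothetical assumptions, and the abstract does not prescribe these particular statistics for the equity check; kappa and sensitivity/specificity are simply standard instances of the measure families it names.

```python
import numpy as np

def cohen_kappa(r1, r2):
    """Agreement between two raters, corrected for chance agreement."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                        # observed
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)   # chance
    return (po - pe) / (1 - pe)

def sens_spec(y_true, y_score, cutoff):
    """Sensitivity and specificity of a score dichotomized at a cutoff."""
    pred = y_score >= cutoff
    tp = np.sum(pred & (y_true == 1)); fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0)); fp = np.sum(pred & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(1)

# Inter-rater reliability: two hypothetical raters scoring 30 cases on a
# 3-level item, with the second rater disagreeing about 20% of the time.
truth = rng.integers(0, 3, size=30)
r2 = np.where(rng.random(30) < 0.8, truth, rng.integers(0, 3, size=30))
print(f"kappa: {cohen_kappa(truth, r2):.2f}")

# Validity and equity: the same sensitivity/specificity statistics,
# computed overall and then within each hypothetical subgroup.
n = 1000
group = rng.choice(["A", "B"], size=n)
score = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-score)))

se, sp = sens_spec(y, score, cutoff=0.0)
print(f"overall: sensitivity={se:.2f}, specificity={sp:.2f}")
for g in np.unique(group):
    m = group == g
    se, sp = sens_spec(y[m], score[m], cutoff=0.0)
    print(f"group {g}: sensitivity={se:.2f}, specificity={sp:.2f}")
```

Large gaps between subgroup rows in output like this are the kind of signal the equity assessment described above is meant to surface; with real agency data, the subgroup loop would run over cultural groups, geographic regions, and any other relevant subgroups.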
