Articles on the limitations of Experimental Program Evaluations:
- Berk, R. (2005). Randomized experiments as the "bronze standard." Journal of Experimental Criminology, 1, 416-433.
- Angrist, J. (2005). Instrumental variables methods in experimental criminological research: What, why, and how? Journal of Experimental Criminology, 1, 23-44.
- Goldkamp, J. (2008). Missing the target and missing the point: “Successful” random assignment but misleading results. Journal of Experimental Criminology, 4, 83-115.
- Durlak, J., & DuPre, E. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.
- Matt, G., & Navarro, A. (1997). What meta-analyses have and have not taught us about psychotherapy effects: A review and future directions. Clinical Psychology Review, 17, 1-32.
- Dobash, R. E., & Dobash, R. P. (2000). Evaluating criminal justice interventions for domestic violence. Crime and Delinquency, 46, 252-271.
- Gondolf, E. (2001). Limitations of experimental evaluations of batterer programs. Trauma, Violence, and Abuse, 2, 79-88.
The book "The Future of Batterer Programs" (http://www.upne.com/1555537692.html) has a chapter devoted to the effectiveness debate and the oversimplifications and misinterpretations that have come out of it.
A paragraph that directly offers the counterpoint: A 2007 meta-analysis from the Cochrane Collaboration questions any "doesn't work" interpretation of the previous meta-analyses more explicitly (Smedslund, Dalsbø, Steiro, Winsvold, & Clench-Aas, 2007): "The methodological quality of the included (experimental) studies was generally low . . . The research evidence is insufficient to draw conclusions about the effectiveness of cognitive behavioral interventions for spouse abusers . . . We simply do not know whether the interventions help, whether they have no effect, or whether they are harmful" (p. 18). An earlier analysis funded by the Centers for Disease Control and Prevention used broader inclusion criteria covering fifty intervention and prevention programs and reached a conclusion similar to the more selective Cochrane Collaboration: "The diversity of data, coupled with the relatively small number of (experimental) studies that met the inclusion criteria for the evidence-based review, precluded a rigorous, quantitative synthesis of the findings" (Morrison, Lindquist, Hawkins, O'Neil, Nesius, & Mathew, 2003, p. 4).
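To make concrete what the "rigorous, quantitative synthesis" the CDC-funded review found infeasible would involve, here is a minimal sketch of textbook DerSimonian-Laird random-effects pooling. Every effect size and variance below is a hypothetical placeholder, not data from any cited study; the point is only the mechanics of pooling and the heterogeneity term that weakens conclusions when studies disagree.

```python
# Minimal sketch of random-effects meta-analysis (DerSimonian-Laird).
# All numbers are hypothetical placeholders, not data from the cited reviews.
import math

# Hypothetical per-study effect sizes (e.g., log odds ratios of re-assault,
# treatment vs. control) and their within-study variances.
effects = [-0.60, 0.10, 0.30, -0.40]
variances = [0.04, 0.09, 0.12, 0.06]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance, so the pooled
# confidence interval widens when the studies conflict with one another.
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled log OR = {pooled:.3f}, 95% CI = "
      f"({pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")
```

With few, conflicting, low-quality trials, tau^2 is large relative to the within-study variances and the pooled interval straddles zero, which is one formal way a review arrives at the "we simply do not know" conclusion quoted above.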
National Institute of Justice, Compendium of Research on Violence Against Women: https://www.ojp.gov/pdffiles1/nij/301583.pdf