Event

October 27, 2016

3:00 p.m. – 4:30 p.m.

We are living in a time when identifying “What Works” carries unprecedented value. While the growing recognition of evaluation’s value is to be celebrated, obstacles remain in identifying optimal approaches. One is defining what constitutes “good” evidence. “Good” evidence, for example, is not always synonymous with the outcomes of a randomized controlled trial (RCT). Moreover, many youth-serving providers operate in contexts that are not well aligned with the requirements of an RCT. Mismatches between evaluation context and methodology are problematic: they can produce erroneous findings when a design cannot sensitively assess program implementation and effectiveness in its setting. This panel will present case studies illustrating the importance of designing (and re-designing) methodology in alignment with evaluation contexts. To underscore how widespread this issue is, the panel will describe evaluations across multiple sectors: child welfare systems, school systems, and early childhood education programs. Broader implications of the lessons learned will be discussed with the audience.

Presentation 1 Author and Title: Jessica Dym Bartlett; Evaluation is what happens while you’re making other plans: Turning barriers into opportunities in the Massachusetts Child Trauma Project

Presentation 2 Author and Title: Kelly Murphy; One size doesn’t fit all: The importance of fit between design and context in evaluations of system-wide initiatives

Presentation 3 Author and Title: Joy Thompson; Making design work in the face of constraints, challenges, and knowledge gaps: Lessons from an evaluation of school start time change

Presentation 4 Author and Title: Danielle Hegseth; Validating a preschool curriculum approach: Maintaining rigor in a complex research context