Using implementation science to build better programs for children and families
In airplanes, “black boxes” are devices that collect critical information about each flight, including altitude, airspeed, and acceleration. In the case of a plane crash—a decidedly negative outcome for a flight—black box data are examined to figure out how and why the crash happened. In a similar way, a type of research called implementation evaluation can shed light on what is inside the black box of programs and systems that work (or don’t work) for children and families. In other words, while an outcome evaluation tells us whether a program produced the desired results, an implementation evaluation can help explain how and why those results were achieved.
Implementation studies set out to answer key questions about program execution. These questions, which directly or indirectly relate to the success of implementation, include the following:
- What factors have contributed to successful implementation? Which factors pose challenges?
- How do stakeholders perceive the program’s value? What is their level of buy-in? How effectively are they engaged in the program?
- What innovative implementation approaches are used by different participants (e.g., teachers, principals, and superintendents)?
- What factors facilitate scale-up of new practice(s) within and across a system?
- How are new practices sustained over time?
The field of implementation science has developed several guiding frameworks to help researchers structure their studies and organize the information they collect. One such framework outlines the stages of implementation, which helps set reasonable expectations for the outcomes that program administrators (and funders) might realistically see during each stage of a program’s development. Another framework identifies the implementation supports needed to create the conditions for success.
Child Trends has conducted many implementation evaluations to help organizations refine program content and delivery to improve outcomes. In one example, we partnered with the K–3 Formative Assessment Consortium, which developed and launched an enhanced formative assessment process to be used in kindergarten through third grade classrooms. The implementation evaluation (1) documented the procedures that key stakeholders used to implement the assessment system; (2) evaluated challenges and successes for the pilot and field test of the assessment; and (3) provided recommendations and lessons learned for future administration and scale-up.
Building implementation evaluation into roll-out plans helps programs understand outcomes, place them in context, and improve their work. Examining how and why a program works can help both researchers and program administrators identify the active ingredients needed to produce the results they desire, and can inform course corrections before investing in a costly outcomes study. In this way, implementation evaluations can unlock the black box of programs and systems and improve our understanding of what elements constitute successful implementation.