A Portfolio Evaluation of Implementation and Outcomes for United Way for Southeastern Michigan’s 2011 Social Innovation Fund Project

Research Brief | Early Childhood | March 5, 2018


The Detroit-area 2011 Social Innovation Fund project (SIF 2011) was a remarkable initiative that brought together highly invested partners who shared a common purpose: to increase the number of young children prepared to enter school (i.e., ready to learn), with a particular emphasis on promoting early literacy. With leadership, coordination, and funding provided by the United Way for Southeastern Michigan (UWSEM), SIF 2011 broke new ground at multiple levels, starting with the nature of federal requirements and assistance, and extending to the local level with untested financing models, rigorous data collection and evaluation components, new avenues for cooperation, consistently high expectations, and hopes for positive outcomes.

SIF 2011, managed federally by the Corporation for National & Community Service, sought to strengthen the use of evidence in social-change interventions while also encouraging innovative practice models. UWSEM was among the 2011 cohort of “intermediary” organizations awarded a multi-year SIF grant. Consistent with the terms of the grant, UWSEM made sub-awards to local programs (subgrantees). Each subgrantee contracted with an independent evaluator and developed a subgrantee evaluation plan (SEP) designed to yield at least “moderate” evidence (according to well-accepted scientific standards) of the effectiveness of its selected intervention(s).

This report was prepared by Child Trends to evaluate, at a portfolio level (that is, across all subgrantees), the five-year implementation and outcomes of the Detroit-area SIF 2011 initiative. The report complements the independent evaluations contracted for by each SIF subgrantee, the programs responsible for providing services and carrying out other activities associated with the project.

An implementation evaluation, such as the one described here, is critical for understanding the outcomes achieved by subgrantees and participants. For example, why were some outcomes achieved but not others? Why were some participants more successful than others? What changes occurred over the life of the intervention that may have accounted for the degree of its success? What specific challenges were encountered in implementing the intervention, and what knowledge and capacities were gained among those implementing it? These questions illustrate the kinds of information that an implementation evaluation can yield.

An outcomes evaluation, in contrast, focuses on the bottom line: was anyone better off because of their participation in one or more intervention activities? A well-designed outcomes evaluation must minimize the possibility that the results reflect influences other than the intervention itself. A portfolio-level outcomes evaluation must further wrestle with the challenge of summarizing results across multiple programs, each of which may have used different outcome measures; one common way to handle this is sketched below.
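To make that aggregation challenge concrete, the sketch below shows one standard meta-analytic technique: converting each program's result to a standardized effect size (Cohen's d) and pooling the estimates with inverse-variance weights. This is a minimal illustration of the general approach, not the method used in this report; the program names and numbers are hypothetical.

```python
# Illustrative sketch only: summarizing results across programs that used
# different outcome measures by standardizing each to Cohen's d and pooling.
# Program names and numbers are hypothetical, not drawn from the report.
import math

def cohens_d(mean_treat, mean_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
        / (n_treat + n_ctrl - 2)
    )
    return (mean_treat - mean_ctrl) / pooled_sd

def d_variance(d, n_treat, n_ctrl):
    """Approximate sampling variance of Cohen's d."""
    return (n_treat + n_ctrl) / (n_treat * n_ctrl) + d**2 / (2 * (n_treat + n_ctrl))

# Hypothetical subgrantee results, each on a different literacy measure.
programs = [
    {"name": "Program A", "mt": 52.0, "mc": 48.0, "st": 10.0, "sc": 9.5, "nt": 120, "nc": 115},
    {"name": "Program B", "mt": 3.4, "mc": 3.1, "st": 0.8, "sc": 0.9, "nt": 80, "nc": 85},
]

# Fixed-effect (inverse-variance weighted) pooled estimate across programs.
effects, weights = [], []
for p in programs:
    d = cohens_d(p["mt"], p["mc"], p["st"], p["sc"], p["nt"], p["nc"])
    w = 1.0 / d_variance(d, p["nt"], p["nc"])
    effects.append(d)
    weights.append(w)
    print(f'{p["name"]}: d = {d:.2f}')

pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
print(f"Pooled effect size: {pooled:.2f}")
```

Standardizing each program's result to an effect size puts outcomes measured on different scales (for example, different literacy assessments) on a common footing before pooling, which is why this family of techniques is often used for portfolio-level summaries.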
