The 2018 Family First Prevention Services Act (Family First Act) provides funding for kinship navigator programs that demonstrate evidence of effectiveness. Many agencies believe their kinship navigator programs (see text box) benefit kinship caregivers and their families; however, to qualify for Family First Act funding, stronger research evidence is needed to understand whether and how families benefit. This brief identifies common challenges agencies face in building this evidence and suggests ways to address these challenges, including defining the program model; selecting a comparison group; ensuring an adequate sample size; selecting appropriate outcomes and reliable and valid measures; and collecting data.


Background

Many children live with kinship caregivers (adults who may or may not be related to them and have a pre-existing relationship with them) because of interaction with the child welfare system. One third of children in foster care live with kin and, according to Generations United, for every child in kinship foster care, about 20 children live with kin outside the foster care system.[1][2] Kinship caregivers may find themselves unexpectedly in this role, often with fewer financial or other resources provided to them than to foster parents, as they navigate multiple systems to meet the needs of the children in their care. Kinship caregivers have reported heightened needs during the COVID-19 pandemic,[3] highlighting the increased importance of connecting kinship families to available resources during the pandemic and in recovery efforts. Kinship navigator programs can serve as a lifeline to kinship caregivers and promote child well-being through maintaining children’s connections to their families and culture.

What are kinship navigator programs?

Kinship navigator programs help kinship caregivers become familiar with, and gain access to, supports and services that meet their families’ needs. There are many different kinship navigator program models, which can be administered by public or private child welfare agencies. To qualify for federal funding, a kinship navigator program must:

  • Coordinate with other state/local agencies, including 2-1-1 and 3-1-1 information systems where available
  • Consult with kinship caregivers and organizations, children and youth cared for by kin, and relevant public and private entities
  • Establish information and referral services for kinship caregivers and those who support them
  • Provide outreach to kinship caregivers through websites and/or other materials
  • Promote public-private partnerships to better serve kinship caregivers

Source: 42 U.S.C. 627

Kinship care has deep, historical roots in Black and Indigenous communities. For example, after white slave owners separated families by selling children’s enslaved parents, relatives often cared for the children; in Indigenous communities, many tribal members consider themselves part of an extended family group who support each other in times of need. Currently, relatives continue to provide care when many Black and Indigenous children cannot live with their parents. For example, in one state, Black children in child welfare custody are 1.5 times more likely to be placed with kin compared to children of other races. Indigenous children are more likely than any other racial or ethnic group to live with kinship caregivers within or outside foster care.

Unlicensed, informal kinship care providers often go without the financial supports and oversight that agencies provide to licensed kinship care providers. Thus, any racial/ethnic disparities in the use of unlicensed or informal kinship care[4] could result in inequitable access to supports and, in turn, to inequitable child and family outcomes. However, because kinship navigator programs can serve not only formal, licensed kinship caregivers but also informal, unlicensed caregivers, they have the potential to help engage communities who may traditionally not seek services and to promote increased access to services regardless of the circumstances of kinship care.

While the federal government has demonstrated support for kinship navigator programs through the Family First Act, as of February 2021, no kinship navigator program had met the Family First Act’s threshold for evidence as articulated by the Title IV-E Prevention Services Clearinghouse (Clearinghouse). Kinship navigator programs must be rated at least “promising,” meaning that at least one study meets the Clearinghouse’s design standards and demonstrates a positive effect on a relevant outcome. Through the Consolidated Appropriations Act of 2021, the evidence requirements for kinship navigator programs have been waived through September 30, 2021, during the COVID-19 public health emergency, allowing states to draw down federal funding to continue serving kinship families and evaluating their programs.

The limited evidence base and the challenges involved in developing proof of effectiveness for child welfare interventions are not unique to kinship navigator programs. Evaluations of child welfare interventions often fail to identify program impacts. However, limitations in many studies make it difficult to determine if an impact was not detected because of a study’s limitations or because the program was no more effective than services as usual. Below, we identify common challenges agency leaders and program evaluators face in evaluating kinship navigator programs, and we suggest potential solutions that align with the Clearinghouse requirements to address those challenges.


Common Challenges in Evaluating Kinship Navigator Programs

Defining the program models

Kinship navigator programs are usually composed of several components (see text box “What are Kinship Navigator Programs” above). To be considered by the Clearinghouse, the program must have a manual identifying its components and how the components should be implemented. This information helps measure the level of fidelity attained in practice to contextualize evaluation findings. A manual is also important for replicating effective program models in other jurisdictions.

However, manualizing a kinship navigator program can be difficult, since components have different requirements, lengths of service, intensity of service, and ranges of options available to recipients. Implementing each component can also differ depending on the need of the kinship caregiver being served. Some kinship caregivers may need different components of the program—or different doses of a particular component—than others. Further, most kinship navigator programs involve voluntary, open-ended services allowing for ebbs and flows of caregiving and recurring needs, resulting in varying length and intensity of services. Identifying and addressing these variations in caregiver need may require staff time and effort but can increase the effectiveness of the program overall. In addition, tailoring a statewide program to serve rural and urban communities with differing needs and resources requires flexibility. However, the customizability of the program model can make it difficult to design an evaluation, identify appropriate outcomes and metrics, and assess if the program model is implemented with fidelity.

Given the voluntary nature of services, enrollment and engagement among eligible families may be a problem. Families may face a language barrier when information and referral services are offered only in English, and/or when there are no kinship navigators who speak languages other than English. Families of color, including Black and Indigenous families, may be particularly distrustful of health care services. Some caregivers may be reluctant to engage with social services, especially if the service is offered by the public child welfare agency, which they may distrust. There is also a long, well-documented history of racial bias in child welfare services that can lead to distrust, particularly among families of color. This distrust can hinder equitable access to, and engagement with, services such as a kinship navigator program. If many families are unwilling to engage with services, then there may be no impact for an intervention to detect, or the intervention may improve outcomes for white families, but not for families of color. Indeed, in the domain of mental health care, research indicates that families of color typically have less access to and are less likely to obtain the care they need.

Potential solutions:

  • Form an advisory group composed of kinship navigators, kinship caregivers, youth and young adults formerly in kinship care, and other key stakeholders to assist in creating a well-defined theory of change, a logic model, and a clear delineation of services (see the Resources section for related tools).
    • Assess the existing array of services for kinship caregivers. Incorporate program components that have been proven successful or show anecdotal evidence of success.
    • Ensure that your program includes all the required components of a kinship navigator program to qualify for federal funding (see text box “What are Kinship Navigator Programs” above).
    • Articulate your theory of change. The theory of change provides the description and rationale for how and why you expect to reach your desired outcome(s) under the circumstances (see the Annie E. Casey Foundation toolkit in the list of Resources). Identify program outcomes of interest. Develop a logic model as a representation of the theory of change that outlines the pathways from program activities to expected outcome(s).
    • Refine the program model based on the theory of change and logic model to maximize effectiveness of services and to ensure the appropriate doses of services for each caregiver depending on needs. Evaluating a program that has a strong theoretical base for the activities it implements is more likely to yield evidence of effectiveness.
  • Develop a manual using the theory of change and logic model. A manual is essential for determining if the program was implemented with fidelity and for supporting replication. The manual should be specific enough to clearly identify what is and what is not necessary to replicate the program model.
  • Identify appropriate variations in the kinship navigator model (e.g., length of the service period, optional versus required services, intensity of services) that may be necessary for various subpopulations. Identifying these program variations should make the program more effective for individual subpopulations, thereby increasing overall program effectiveness and guiding the selection of outcome measures that correspond to the services provided (see the section titled “Selecting appropriate reliable and valid measures” below).
  • Work with the kinship advisory group to develop strategies for increasing service enrollment and participation by:
    • Identifying barriers to enrollment and engagement in services and ways to overcome these barriers, such as recruiting kinship navigators with personal experience who understand and can overcome the mistrust and reluctance to seek help, while normalizing help-seeking behavior
    • Creating pathways for equitable access to services by locating services in high-needs neighborhoods, using innovative technologies, and training families how to use these technologies; these approaches can help increase access for rural or geographically disparate families and ensure programs address systemic inequities and biases
  • Develop promotional materials and strategies to advertise services, using local news outlets, social media, and personal testimonials, among other strategies.

Selecting a comparison group

For kinship navigator programs to reach the level of evidence needed to be eligible for ongoing federal reimbursement under the Family First Act,[5] an evaluation must compare two similar groups, one of which (the treatment group) receives the service, and the other of which (the comparison or control group[6]) does not. However, it can be challenging to ensure that the characteristics of the treatment and comparison groups do not differ systematically. Differences in the study groups weaken the level of evidence generated, because it can be difficult to determine if outcomes are a result of the program or are due to differences in the characteristics or contexts of the caregiver at the outset that may also influence the outcomes of interest. The easiest way to ensure comparability of the groups is to randomly assign participants to the study groups. If each study participant has an equal likelihood of receiving or not receiving intervention services, then, with a sufficiently large sample size, the two groups will be statistically similar.

Potential solutions:

  • If possible, use a randomized design to help ensure comparability of the intervention and comparison groups at the outset of the study (baseline equivalence, as required by the Clearinghouse). Randomization helps you feel more confident that any differences in outcomes are due to the program and not some unobserved difference between the two groups. It can be more complicated but still feasible to establish baseline equivalence with other types of study designs. Below are items to consider when deciding if randomization is feasible:
    • Determine if randomization is feasible and ethical. Randomization may be appropriate if resources (whether funding or staff capacity) are limited and not everyone who is eligible can be served. When providers lack capacity to serve all eligible families, using a lottery for assignment can help alleviate providers’ discomfort at actively choosing whom to serve and whom to turn away from services. Further, if a provider is uncertain if a particular approach to kinship navigation is beneficial (and therefore withholding the service should not be harmful), a randomized design may be appropriate.
    • Communicate with all stakeholders, including kinship caregivers and youth and young adults formerly in kinship care, about why randomization is appropriate. Address their concerns and incorporate their feedback into the evaluation design. Share with stakeholders the potential long-term value of the study (e.g., having evidence of effectiveness of the service can lead to increased knowledge of what helps families and to additional funding for more services).
    • If possible, randomly assign caregivers to receive either a basic referral service (e.g., handouts, referrals to available services) or an enhanced kinship navigator program to avoid denying services to some families. To be eligible for Clearinghouse review, the services received by the comparison group would need to be considered treatment as usual or minimal treatment.[7]
  • If randomization is not possible, consider alternative approaches to creating comparison groups. The Clearinghouse describes acceptable quasi-experimental designs.
    • Ensure that the intervention and comparison groups are similar at the start of the study (also known as baseline equivalence). Minimizing group differences strengthens your ability to attribute any observed differences in outcomes at the end of the study to the intervention being evaluated.
    • Employ statistical approaches to control for differences between the groups to the degree possible. This requires obtaining data on factors that might be related to the outcomes of interest, so you can try to isolate the effect of the program. The Clearinghouse requires that the study groups be similar at baseline in terms of needs relevant to the intervention. Baseline equivalence related to needs can be examined through a pre-test using the same or similar measures that will be used to examine program outcomes. Alternatively, according to Clearinghouse standards, evaluators can establish baseline equivalence on race/ethnicity and socioeconomic status. Beyond the Clearinghouse requirements, consider if other differences between the two groups might play a role in relevant outcomes, and, if possible, control for those differences.
  • Quasi-experimental designs may include one of the following options: propensity score matching (e.g., selecting from a group of families receiving Temporary Assistance for Needy Families [TANF] child-only grants[8] to identify a matched sample); phased implementation across geographic regions with families served during the pre-implementation phase in one region serving as the comparison group for early implementers in another region; and families on an agency waitlist serving as the comparison group.[9]
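Whether groups are formed by randomization or by matching, the Clearinghouse’s baseline equivalence requirement comes down to comparing the two groups on relevant characteristics before services begin. A common summary statistic for this comparison is the standardized mean difference. The sketch below illustrates the calculation with hypothetical pre-test scores; it is not a Clearinghouse tool, and the specific thresholds an evaluation should apply come from the Clearinghouse standards, not from this example.

```python
from statistics import mean, pstdev

def standardized_mean_diff(treatment_values, comparison_values):
    """Standardized difference in group means at baseline.

    Values near zero suggest the groups are similar on this
    characteristic; larger absolute values flag a difference that
    may need statistical adjustment or a different matching approach.
    """
    pooled_sd = ((pstdev(treatment_values) ** 2 +
                  pstdev(comparison_values) ** 2) / 2) ** 0.5
    return (mean(treatment_values) - mean(comparison_values)) / pooled_sd

# Hypothetical baseline pre-test scores for two small study groups
treatment = [12, 15, 14, 13, 16, 14]
comparison = [13, 14, 15, 12, 15, 14]
print(round(standardized_mean_diff(treatment, comparison), 3))
```

In practice, an evaluation team would run this check for each baseline characteristic required by the Clearinghouse (e.g., pre-test scores on outcome measures, race/ethnicity, socioeconomic status) and report the results alongside the impact estimates.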

Determining sample size

To determine if any observed differences in outcomes between the kinship navigator program group and a comparison group can be attributed to the program rather than to chance (assuming other factors related to outcomes have been controlled), a sufficient number of families must participate in the study.[10] Further, the smaller the anticipated program impact, the larger the sample size that is needed to detect differences due to the program. Kinship caregivers may make up a small share of the overall population, which can make it difficult to recruit enough participants. This is especially challenging in small jurisdictions. It may take time to advertise the program and become known as a useful resource.

Potential solutions:

  • Whenever possible, select a large enough catchment area to help alleviate issues with sample size. Conducting a power analysis can help determine the sample size needed to detect any program effects.
    • Consider partnerships among multiple agencies to implement and evaluate the same program model to increase sample size.
    • States can focus on urban areas with larger populations, while recognizing that there could be inherent differences among urban, rural, and suburban areas (such as the availability of services, transportation to appointments, or employment opportunities).
  • To increase the sample size, consider extending the period of time over which you enroll families into the study. A possible drawback to this approach is that program implementation or other factors (the characteristics of the families or contextual factors) may change over time, and these other factors—in addition to the intervention itself—may separately affect outcomes, and/or may play a role in program effectiveness.
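The power analysis mentioned above can be approximated with a short calculation. The sketch below uses the standard normal approximation for a two-group comparison of means; the effect sizes shown are illustrative, and a real evaluation should base its assumed effect size on prior research or pilot data, and account for attrition and the specific analytic model.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per study group for a two-sided
    comparison of means (normal approximation).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_power = z.inv_cdf(power)           # value needed to reach target power
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A small expected impact (d = 0.2) requires far more families per
# group than a moderate one (d = 0.5)
print(n_per_group(0.2), n_per_group(0.5))
```

The comparison makes the brief’s point concrete: halving the expected effect size roughly quadruples the number of families needed, which is why small jurisdictions may need multi-agency partnerships or longer enrollment periods.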

Selecting appropriate reliable and valid measures

The Clearinghouse considers a wide range of outcomes for kinship navigator programs, including child safety, permanency, and well-being; adult well-being; and access to, referral to, and/or satisfaction with services. Implementers of kinship navigator programs may be unsure which outcomes they can realistically expect to impact.

The availability of data (e.g., administrative data from the child welfare system) sometimes drives the outcomes and measures used in a study. While using available data can help minimize the burden for staff and families participating in an evaluation, available data may not align closely with the outcomes targeted by the program. Further, when kinship caregivers and other stakeholders are not included in the design of a program’s evaluation plan, the outcomes selected may be less salient to the families targeted by these services.

Another challenge is that existing measures may not have been validated for the populations in the study. In that case, the outcomes may not accurately reflect families’ experiences, making the reliability of evaluation results suspect. For example, some outcome measures have proven reliable and valid for parents generally but not for kinship caregivers, or measures may not have been developed with individuals or families that reflect the racial/ethnic background of families served by kinship navigator programs. Prior research has often tested measures with white families, but many kinship caregivers are Black, Indigenous, or persons of color.

Potential solutions:

  • Use the program’s logic model to guide selecting the outcomes to measure in the short-, intermediate-, and long-term, rather than being guided primarily by available outcome data. For example, measuring a caregiver’s knowledge of available services and supports after receiving kinship navigator program services may be a common short-term outcome of interest.
    • Select outcomes that reasonably occur within the period during which study participants will be assessed.
    • Select outcome measures that are appropriate for the population served (e.g., permanency measures may be of particular interest for kinship families involved with the child welfare system).
    • If the study does not follow participants over an extended time, the study will be unlikely to demonstrate an impact on long-term outcomes such as permanency.
  • Ensure outcomes are specific, measurable, achievable, realistic/relevant, and timely (“SMART”).
  • Whenever possible, use existing measures previously tested in studies with a population similar to the population served by your program to ensure that your findings accurately represent the experiences of families served. Engage kinship caregivers to test measures (e.g., through cognitive interviewing) prior to use in the study.
    • Look to related fields like patient health navigator programs to identify relevant measures and to distinguish between effects on the child, parent, and caregiver.[11]
    • When adapting measures for use with kinship families, identify opportunities to test reliability and validity and refine the measures as needed.
  • Select data sources that are available for the treatment and the comparison groups, such as child welfare administrative data or surveys or assessments that can be used with both groups.
  • If possible, include formal and informal caregivers in your analysis. If you are including both groups, use data sources that are available for kinship caregivers with and without child welfare involvement. Conduct subgroup analyses to examine if and how impacts differ for these two groups; the outcomes that are relevant may differ between the two groups, and/or the program’s impacts may vary.
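One common reliability check when piloting or adapting a survey measure is internal consistency (Cronbach’s alpha), which asks whether the items on a scale hang together for the population being served. The sketch below shows the calculation with hypothetical pilot data; the scale name and ratings are invented for illustration, and a full psychometric evaluation would also examine validity, not just reliability.

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Internal-consistency reliability of a multi-item scale.

    `responses` holds one list per respondent, with one score per
    item. Values near 1.0 indicate the items hang together; low
    values suggest the measure may need refinement for this
    population before use in an evaluation.
    """
    k = len(responses[0])                      # number of items
    item_columns = list(zip(*responses))       # per-item score columns
    totals = [sum(row) for row in responses]   # per-respondent totals
    item_variance = sum(pvariance(col) for col in item_columns)
    return k / (k - 1) * (1 - item_variance / pvariance(totals))

# Hypothetical pilot responses from six kinship caregivers on a
# three-item "knowledge of services" scale (1-5 ratings)
pilot = [[4, 4, 5], [2, 3, 2], [5, 4, 5], [3, 3, 3], [1, 2, 1], [4, 5, 4]]
print(round(cronbach_alpha(pilot), 2))
```

Pairing a check like this with cognitive interviewing, as suggested above, gives both statistical and qualitative evidence that a measure works for the families the program serves.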

Collecting data

Once outcome measures are identified for the evaluation, plans must be developed regarding when and how to gather this information. As discussed earlier, many services delivered by kinship navigator programs are open-ended, making it difficult to determine the “end” of treatment. This affects when outcomes should be assessed, especially if families can continue to access services as needed. According to the Clearinghouse, the study must measure outcomes at the end of treatment but does not need to follow participants after the end of services to earn a rating of “promising.”

A common challenge is engaging participants in the comparison group in data collection due to a program’s lack of or limited contact with them. Similarly, some participants in the treatment group may also need and receive less direct engagement with program staff (e.g., when receiving a one-time service), which may make it more difficult to collect data on their outcomes.

Evaluators often turn to administrative data from the child welfare and other family-serving systems to ensure data availability for the treatment and the comparison groups. However, as described above, these systems may not include data that addresses all outcomes of interest or data for both study groups. In addition, evaluators may face challenges accessing administrative data, including negotiating data sharing agreements and partnering with child welfare agency staff, TANF staff, or aging services staff, who face competing priorities for their time, to help with obtaining and contextualizing the data.

With administrative or other forms of data, such as survey data, evaluators may face data quality challenges when they begin analysis. Common challenges can include missing data, data entry errors, and different interpretations of the meaning of data elements among those entering the information.
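A simple first step in confronting these data quality challenges is to profile missingness before analysis begins. The sketch below is a minimal, hypothetical example; the field names, missing-value markers, and records are invented, and real administrative extracts will have their own conventions that should be documented with the data provider.

```python
def missingness_report(records, fields, missing_markers=(None, "", "NA")):
    """Share of records missing each field of interest.

    High missingness on an outcome field may mean the field cannot
    support the planned analysis, or that imputation or follow-up
    data collection is needed.
    """
    n = len(records)
    return {
        field: sum(1 for r in records if r.get(field) in missing_markers) / n
        for field in fields
    }

# Hypothetical intake records with uneven completeness
records = [
    {"caregiver_id": 1, "service_date": "2021-01-04", "wellbeing_score": 62},
    {"caregiver_id": 2, "service_date": "", "wellbeing_score": None},
    {"caregiver_id": 3, "service_date": "2021-02-11", "wellbeing_score": "NA"},
    {"caregiver_id": 4, "service_date": "2021-02-18", "wellbeing_score": 71},
]
print(missingness_report(records, ["service_date", "wellbeing_score"]))
```

Running a report like this separately for the treatment and comparison groups can also reveal differential missingness, which can bias impact estimates if left unaddressed.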

Potential solutions:

  • If needed and in consultation with kinship caregivers and other stakeholders, anticipate the timing for assessing outcomes by determining the time period within which nearly all participants will have completed services. Consider if you would expect to see change in your outcome(s) of interest at this time point.
  • Partner with kinship caregivers to identify how and when to collect data and strategies for engaging study participants, particularly in the comparison group.
  • Consider study designs that may promote greater engagement with comparison group members, such as:
    • Providing the comparison group with some minimum level of services, keeping in mind the Clearinghouse’s definitions of minimal intervention and treatment as usual (see “Selecting a comparison group”); participants may be more likely to provide contact information, to feel invested in the program, and to participate in an evaluation if they receive some benefit from participating
    • Using a waiting list design in which families in the comparison group receive services after outcome data have been collected; to help keep families in the comparison group engaged in the study, if possible, assure them that they can receive services after the study concludes
  • Assess the feasibility of using administrative data from the child welfare, TANF, or aging services agencies (and other family-service systems if available) to measure outcomes without requiring ongoing data collection from the treatment or comparison groups.
  • Partner with data providers, who can offer useful context for cleaning, analyzing, and interpreting the data.

Conclusion

Kinship caregivers and their families deserve services that effectively meet their needs. For states seeking to continue and expand kinship navigator programs, federal Title IV-E funding is a valuable resource, particularly as states face pandemic-related budget shortfalls. However, after the kinship navigator provisions from the Consolidated Appropriations Act of 2021 end, federal funding will depend on demonstrating that a kinship program is at least “promising” when it comes to improving outcomes. Kinship navigator program evaluators will need to address the challenges described above with program administrators before determining rigorous, feasible designs for evaluating their programs.

Several kinship navigator program evaluations are currently underway, and many stakeholders are involved in strengthening the delivery and evaluation of kinship navigator programs. This ongoing work highlights the need for continued conversations across stakeholders and jurisdictions to identify and respond to evidence-building challenges as the field strives to best serve kinship families. Engaging the community, kinship navigators, kinship caregivers, data providers, and youth with lived experience in these conversations and throughout the process of designing and conducting evaluative studies will help ensure the designs yield findings that are relevant, actionable, and promote equitable outcomes for children and families. Open and transparent communication throughout the evaluation process will also keep stakeholders informed about how evaluation findings are used to inform next steps and refine programming. Evaluation partners, such as Child Trends, can help facilitate this process.


Acknowledgements

We are grateful to our colleagues at Child Trends and in the child welfare field who provided insights and feedback during the development of this brief, including Elizabeth Jordan and Karin Malm, at Child Trends; Ana Beltran and Jaia Lent at Generations United; and Lynn Urvina, Kinship Program Manager at Family Education and Support Services, Tumwater WA.


References

[1] The term “kin” refers to relatives and to fictive kin; that is, non-related adults who have pre-existing relationships with children. The official data on children in formal foster care who are living with kin exclude fictive kin.

[2] Kinship care typically falls into one of two types: formal kinship care (also known as relative foster care), which occurs when children are in custody of a public child welfare agency (typically because of abuse or neglect); and informal kinship care, which is arranged between parent and caregiver and may or may not involve child welfare but does not include state custody. Family Court orders that assign custody to a caregiver without child welfare agency involvement are considered informal placements.

[3] In a survey of kinship caregivers during the COVID-19 pandemic, 38 percent reported being unable to pay or concerned about paying their mortgage or rent; 43 percent expressed fear over leaving their home to obtain food; and 30 percent had no caregiving plan for the children in their care if they died.

[4] Practices around informal and formal kinship care vary across jurisdictions, and data are not always available for informal kinship care placements. Child Trends analyzed administrative data from two states and found that Black children in one state were more likely to be placed in formal kinship care compared to informal kinship care (15%, compared to 7%). Referral data were not available in the second state, precluding analysis of racial/ethnic disparities in placement in formal versus informal kinship care.

[5] The Consolidated Appropriations Act of 2021 temporarily waived the evidence requirements for kinship navigator programs during the COVID-19 public health emergency.

[6] When referring to the group not receiving the intervention, we use “comparison group” throughout this brief for simplicity.

[7] The Title IV-E Clearinghouse defines “no or minimal intervention” as receiving handouts, referrals to available services, or similar nominal interventions and defines “treatment as usual” as the usual or typical services available for that population in the study.

[8] TANF child-only grants were designed to consider only the needs and income of the child. A child’s income might include child support payments or a public benefit like Supplemental Security Income (SSI). Because most children have limited income, most relative caregivers can receive a child-only grant on behalf of the children in their care.

[9] This waitlist approach could be used with an RCT if families are randomly assigned to receive services right away or be waitlisted. This works only if there is not enough capacity to serve all eligible families right away.

[10] This is referred to as the power to tell if the findings are an actual effect of the intervention or just occurring by chance.

[11] For example, one patient health navigator study used a child self-rated health survey at 6-months after completing the intervention. Other outcomes included quality of life, caregiver self-rated health, and satisfaction with healthcare.