
Recommendations for Regulating Artificial Intelligence to Minimize Risks to Children and Their Families

Blog | Early Childhood Data | April 3, 2024

From January 2023 to March 2024, multiple entities published guidance on artificial intelligence (AI),[1] underscoring growing public concern about AI governance. Yet as federal and state legislators weigh the need for AI regulations to safeguard the public from various risks, recent discourse about AI risk has largely overlooked the use of AI by children and their families or caregivers. This gap is widening as students increasingly turn to AI for homework assistance and interact with AI-generated content (including images and videos), and as caregivers (including both parents and educators) use AI to foster child engagement. Drawing on lessons from a recent Child Trends study on the capabilities of AI systems,[2] we propose stronger guidance and regulations to ensure rigorous assessment of potential harm by AI systems in contexts involving children and families.

For our study, we created two AI systems—both based on two prominent Large Language Models (LLMs; see Methods note below this blog)—and found that these two models showed strong agreement on simple tasks (e.g., identifying articles on compensating the early childhood workforce) but diverged when handling complex subjects (e.g., analyzing articles on change frameworks). This divergence illustrates a potential risk within AI systems: AI’s interpretation (or misinterpretation) of complex human ideas and values could expose children and caregivers to incorrect information or harmful content.

We propose that federal and state regulators mandate proper assessment of three aspects of AI systems to minimize the potential risks of AI to children and families. First, regulators should mandate AI assessments that are capable of distinguishing between AI systems that can reliably handle both simple and complex subjects and those that cannot. Our experience with the AI systems created for our study illustrates the need for safeguards so that AI tools and systems meant for young people—from chatbots to virtual reality devices—can be trusted to not generate images and suggestions that are harmful, dangerous, or unethical.

Second, we propose that regulators underscore that it’s critical for AI assessment to consider the developmental appropriateness and safety of AI-generated content for different age groups in specific contexts. As explained above, the capability of AI systems to handle complex subjects should be part of the determination of developmental appropriateness and safety. The European Union’s new Artificial Intelligence Act proposes a risk classification system for AI, raising the question of whether AI systems should follow an age rating system akin to motion picture ratings, the Entertainment Software Rating Board ratings used for video games, and the Pan European Game Information ratings (used in 39 countries). In the U.S. context, the California Age-Appropriate Design Code Act would mandate specific design and operation standards for digital services likely to be accessed by children, ensuring that these services are appropriate and safe for various age groups. The need to help the public distinguish age-appropriate AI content has become increasingly pertinent as AI extends beyond text generation to the creation of images, audio, and video.

Third, we propose that regulators mandate continuous quality improvement for AI systems—using such processes as regular license renewals—because training data evolves over time and can make existing versions of AI systems outdated. During Child Trends’ assessment of the two AI systems in our study, we found that one model couldn’t recognize “Latinx” due to its outdated training data. This limitation has significant real-world consequences for children trying to understand their world. Cultural norms and language are constantly emerging and changing—consider terms such as “woke,” “cisgender,” and “equity”—as communities work to create a more inclusive society for children and families, and as they engage in fierce debates over issues of race, gender, ideology, and religion. Changes in cultural norms and framings underscore the need to monitor and refine AI systems on an ongoing basis to ensure that they remain relevant, accurate, and inclusive of evolving societal and linguistic dynamics. Regulators should consider the authorization of AI systems for market entry as the beginning—not the end—of the oversight process and should mandate regular follow-up evaluations similar to an annual license renewal process.

As we navigate an increasingly AI-integrated world, it becomes ever more imperative to develop robust regulations and guidelines for AI development—especially in contexts impacting children and families. Congress and state legislatures play a pivotal role in shaping the legal framework to ensure that AI systems are deployed responsibly and ethically. This would involve not just a superficial adoption of technology, but a deeper, more informed understanding of how AI impacts child development and well-being that can pave the way for an AI-empowered future that is safe for our younger generations.

Methods note

For the study cited in this blog, Child Trends constructed two AI systems based on two prominent Large Language Models (LLMs) to screen and extract relevant information from a vast repository of over 10,000 articles, factsheets, and reports. We developed and implemented a set of performance metrics to measure the reliability, validity, and accuracy of information extraction.
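The blog does not specify which performance metrics Child Trends implemented. As a hedged illustration only, the sketch below shows one common way to quantify agreement between two LLM-based screeners—simple percent agreement and Cohen’s kappa (agreement corrected for chance)—over hypothetical relevance labels; the label values and data are invented for the example.

```python
# Illustrative sketch (not Child Trends' actual metrics): measuring
# agreement between two LLM-based article screeners.
from collections import Counter

def percent_agreement(labels_a, labels_b):
    """Share of articles on which the two systems gave the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Agreement corrected for chance; 1.0 = perfect, 0.0 = chance-level."""
    n = len(labels_a)
    po = percent_agreement(labels_a, labels_b)       # observed agreement
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    # Expected chance agreement from each system's label distribution
    pe = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

# Hypothetical relevance labels from two screeners over six articles
model_1 = ["relevant", "relevant", "not", "not", "relevant", "not"]
model_2 = ["relevant", "relevant", "not", "relevant", "relevant", "not"]

print(percent_agreement(model_1, model_2))  # 0.833... (5 of 6 match)
print(cohens_kappa(model_1, model_2))       # 0.666... (substantial agreement)
```

A chance-corrected metric such as kappa matters here because, on simple tasks where most articles are clearly irrelevant, two systems can show high raw agreement purely by both labeling almost everything “not relevant.”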


[1] See, for example, guidance from the National Institute of Standards and Technology, the White House, and the European Union.

[2] We adopt the Organization for Economic Cooperation and Development’s definition of an “AI System” as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Suggested citation

Li, W., & Harper, K. (2024). Recommendations for regulating artificial intelligence to minimize risks to children and their families. Child Trends. DOI: 10.56417/4276f5392x