There is a large body of research pointing to the importance of continuity and coordination in the early years of a child’s education. While high-quality pre-K has been shown to positively impact children’s long-term success, pre-K is not an inoculation. Alignment across the birth-through-age-8 years ensures that children are supported early on and that gains are sustained over time. Kristie Kauerz of the University of Washington and Julia Coffman of the Center for Evaluation Innovation created a framework to guide early learning programs, schools, and districts in their pre-K through third grade alignment. And after the Great Recession, the Beyond Subprime Learning report published by New America described the importance of “bridging the continuum” between programs for very young children and programs for elementary school students to ensure quality and avoid duplication.
LaRue Allen and Bridget B. Kelly, eds., Transforming the Workforce for Children Birth Through Age 8: A Unifying Foundation (Washington, DC: National Academies Press, 2015).
Kristie Kauerz and Julia Coffman, Framework for Planning, Implementing, and Evaluating PreK–3rd Grade Approaches (Seattle: College of Education, University of Washington, 2013).
Laura Bornfreund, Clare McCann, Conor P. Williams, and Lisa Guernsey, Beyond Subprime Learning: Accelerating Progress in Early Education (Washington, DC: New America, 2014).
At the end of 2017, the Ounce of Prevention Fund released An Unofficial Guide to the Why and How of State Early Childhood Data Systems to explain “the importance of state early childhood data systems and why they matter to state policy improvement.” The guide (written with good humor, given the dryness of the topic) not only provides an overview of the different ways that early childhood data systems can help improve pre-K and other early learning programs; it also describes how “unified state data systems could be useful at shining light on what now goes on in the mystery years of kindergarten through 2nd grade.” Other resources for data-systems policy recommendations include a 2016 brief from the Data Quality Campaign and the Early Childhood Data Collaborative (ECDC). That brief makes a case for strengthening the connection between early childhood and K–12 data systems and focuses on seven key areas: state capacity; data governance; privacy, security, and transparency; linking, matching, and sharing; data quality; data access and use; and stakeholder engagement. A 2010 issue brief by New America sheds light on why these improvements are needed, noting the scarcity of available data in the early childhood education field. Limited data make it difficult for state and local policymakers to make informed decisions and leave teachers and leaders in the dark about student progress.
Elliot Regenstein, An Unofficial Guide to the Why and How of Early Childhood Data Systems (Chicago, IL: Ounce of Prevention Fund, 2017). Available at https://www.theounce.org/wp-content/uploads/2017/08/PolicyPaper_UnofficialGuide.pdf
Roadmap for Early Childhood and K–12 Data Linkages: Key Focus Areas to Ensure Quality Implementation (Washington, DC: Data Quality Campaign, 2016).
Laura Bornfreund and Maggie Severns, Many Missing Pieces: The Difficult Task of Linking Early Childhood Data and School Based Data Systems (Washington, DC: New America, 2010).
At the program level, data capturing instructional quality and child outcomes can be used both to inform educator practice and to identify larger programmatic trends. Data can inform policies and ensure that programs are adapting to meet children’s needs. Data on children’s outcomes from multiple studies of Head Start programs, for example, have shown how participation in Head Start has led to both short-term and long-term gains for children, up through adulthood, when certain quality ingredients are in place. However, there are challenges associated with gathering data—particularly assessing young children—that policymakers need to be aware of. Not only can developmentally appropriate assessments be time-intensive and expensive, but the data from those assessments can be misused and misinterpreted if not handled with care. In 2007, the Taking Stock report from a task force of early childhood experts explained the need for caution in using data for high-stakes decisions, such as the funding or closure of early learning centers. That report includes guidance and differing opinions on how best to use data on children’s outcomes when determining accountability measures at the local and state levels. The National Education Goals Panel, an initiative of the 1990s, recommended that child assessments not be applied as accountability measures for students, schools, or educators until the end of third or fourth grade.
The Early Childhood Accountability Task Force, Taking Stock: Assessing and Improving Early Childhood Learning and Program Quality (Philadelphia: Pew Charitable Trusts, 2007).
See also chapter 10 of Transforming the Workforce for Children from Birth Through Age 8: A Unifying Foundation, published in 2015 by the National Academies Press.
The Head Start Advantage: A Research Compendium (Washington, DC: National Head Start Association, June 2017). Available at https://www.nhsa.org/files/hsa_compendium.pdf
There is a growing interest in continuous quality improvement (CQI) in the early childhood field. However, CQI depends on sufficient and relevant data that accurately measure quality. Programs not only need to collect the right data, but the workforce (at multiple levels) also needs to be able to interpret and use the data. In 2016, the National Institute for Early Education Research (NIEER) updated the quality benchmarks for state pre-K programs used in its annual State of Preschool yearbook to better align with the current research. NIEER replaced its monitoring benchmark with a benchmark around CQI. According to the report, “to meet this new benchmark, programs must complete structured observations of classroom quality (using a valid and reliable measure) and use this information to inform an improvement plan with teacher feedback.” Twenty states met this benchmark in 2016. Another potential driver for the use of CQI is a state’s Quality Rating and Improvement System.
W. Steven Barnett, Allison H. Friedman-Krauss, G. G. Weisenfeld, Michelle Horowitz, and Richard Kasmin, State Preschool Yearbook (New Brunswick, NJ: National Institute for Early Education Research at Rutgers University, 2017).
Teresa Derrick-Mills, Understanding Data Use for Continuous Quality Improvement in Head Start: Preliminary Findings (Washington, DC: Office of Planning, Research and Evaluation, 2015).
Billie Young, Continuous Quality Improvement in Early Childhood and School Age Programs: An Update from the Field (Boston, MA: BUILD Initiative, 2017).
According to a 2016 research report by RAND that used Cincinnati as a site for investigating quality in pre-K, “effective programs achieve and sustain process and structural quality through ongoing systematic measurement tied to quality improvement.” Other studies, including syntheses of research by academic researchers as well as the Bill & Melinda Gates Foundation, show that implementing quality programs requires feedback loops among teachers and administrators, grounded in high-quality data on what is working and what is not.
Lynn A. Karoly and Anamarie Auger, Informing Investments in Preschool Quality and Access in Cincinnati: Evidence of Impacts and Economic Returns from National, State and Local Preschool Programs (Santa Monica, CA: RAND Corporation, 2016).
Jim Minervino, Lessons from Research and the Classroom: Implementing High-Quality Pre-K that Makes a Difference for Young Children (Seattle: Bill & Melinda Gates Foundation, 2014).
Robert C. Pianta, W. Steven Barnett, Margaret Burchinal, and Kathy R. Thornburg, “The Effects of Preschool Education: What We Know, How Public Policy Is or Is Not Aligned with the Evidence Base, and What We Need to Know,” Psychological Science in the Public Interest 10 (August 2009): 49–88.