When student data is siloed within disparate systems and technology tools, just gathering the data takes huge amounts of time and resources, never mind actually analyzing and using the information. What should be a simple act—given the wealth of data-analysis and visualization technology available in other sectors—instead requires a year-round hamster-wheel churn for schools and school districts.
Your IT team is forced to continuously field questions, pull reports, maintain systems, and ensure data quality. This is why, at every meeting and conference our team attends, we hear people saying their district or agency is “data rich and information poor.” They aren’t connecting the data dots.
Principals, instructional coaches, and IT teams can and should be working together, asking "why?" repeatedly to get down to the roots of their problems. Data interoperability makes it possible to follow Sakichi Toyoda's renowned "5 Whys" problem-solving method, and that shared line of questioning builds the connection between the IT team, administrators, and teachers.
Imagine a funnel where you ask the broad, obvious questions first. As you drill down, you narrow in on the sources of the challenge at hand. Just as importantly, when something is working well to improve student outcomes, you can follow this “why” method to uncover the sources of your success. In either case, you’ve got to have connected, high-quality data to do so.
The following questions are a great starting point for any principal or instructional coach to gain deeper insight into student performance and the effectiveness of their technology tools. Our vision is that every administrator and educator in the country will have access to the rich, real-time data they need to ask each of these questions (and many more) until they hit the causal roots of their schools' greatest challenges and wins.
Question #1—How do our students’ formative and summative assessment scores compare, and why?
You might start this question thread with, "I'd like to gather and compare all of our 6th-grade math classes' end-of-year summative assessment scores." And with Certica Solutions' Academic Benchmarks available through the Ed-Fi ODS/API, you can see how these scores compare to state benchmarks—referencing the largest repository of learning standards in the country.
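If you're curious what that first data pull might look like under the hood, here's a minimal sketch of requesting summative scores from an Ed-Fi ODS/API in Python. The base URL, credentials, assessment identifier, and query parameter are placeholders for illustration only; your own ODS/API version's Swagger documentation is the source of truth for the exact resource paths and field names.

```python
# A minimal sketch of pulling end-of-year summative math scores from an
# Ed-Fi ODS/API. The host, credentials, assessment identifier, and query
# parameter below are illustrative assumptions -- check your deployment's
# Swagger documentation for the exact contract.
import requests

BASE_URL = "https://your-district.example.org/EdFiOdsApi"  # hypothetical host

# 1. Authenticate (the ODS/API issues OAuth2 client-credentials tokens).
token_resp = requests.post(
    f"{BASE_URL}/oauth/token",
    data={"grant_type": "client_credentials"},
    auth=("YOUR_CLIENT_KEY", "YOUR_CLIENT_SECRET"),
)
token = token_resp.json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# 2. Pull student assessment results; the assessmentIdentifier value is a
#    made-up example for a 6th-grade end-of-year math test.
resp = requests.get(
    f"{BASE_URL}/data/v3/ed-fi/studentAssessments",
    headers=headers,
    params={"assessmentIdentifier": "GR6-MATH-EOY-2024", "limit": 500},
)
resp.raise_for_status()

# 3. Flatten the scores so they can be compared against your benchmarks.
for record in resp.json():
    student_id = record["studentReference"]["studentUniqueId"]
    for score in record.get("scoreResults", []):
        print(student_id, score.get("result"))
```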
Then (because your student data is fully connected), you can dig a layer deeper by comparing your 6th-graders' summative and formative scores by class. Are certain classes performing better in the classroom than on formal tests? Or is one class scoring higher in both areas? Why?
Maybe you find that students in Mr. Jones' math class during second period are performing far better than students in Ms. Meyers' class at the end of the day, and that the attendance rate in Mr. Jones' class is significantly higher. Why? From there, you can keep digging.
At the individual student level, educators can be equipped with visualization tools that raise a red flag when a student performs far better on summative than on formative assessments. These students may require testing accommodations they aren't receiving. Then educators, coaches, and families can work together to close these gaps.
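To make that comparison concrete, here's a rough sketch in Python (using pandas) of the two views described above: class-level formative-versus-summative averages with attendance alongside, and a per-student flag when summative scores far outpace formative ones. The column names and the 15-point threshold are assumptions for illustration, not a prescribed model.

```python
# A rough sketch of the class-level comparison and the per-student red flag.
# The columns (student_id, class_id, formative_avg, summative_score,
# attendance_rate) are assumptions about how connected data might be
# flattened for analysis, not an Ed-Fi schema.
import pandas as pd

scores = pd.DataFrame({
    "student_id":      ["s1", "s2", "s3", "s4"],
    "class_id":        ["jones_p2", "jones_p2", "meyers_p7", "meyers_p7"],
    "formative_avg":   [78, 85, 62, 70],
    "summative_score": [82, 88, 80, 86],
    "attendance_rate": [0.97, 0.95, 0.84, 0.88],
})

# Class-level view: is a class stronger on formative work, summative tests,
# or both? Attendance rides along as one possible "why".
by_class = scores.groupby("class_id")[
    ["formative_avg", "summative_score", "attendance_rate"]
].mean()
print(by_class)

# Student-level red flag: summative far outpaces formative. The 15-point
# threshold is an arbitrary placeholder -- tune it to your own assessments.
scores["accommodation_flag"] = (
    scores["summative_score"] - scores["formative_avg"]
) >= 15
print(scores[scores["accommodation_flag"]])
```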
Question #2—Which of our students are at risk of dropping out of school, and why?
As more districts work toward data interoperability, early warning systems are becoming more advanced. For example, the state of Delaware has partnered with us on an early warning system (EWS) plugin, modeled on Pennsylvania's comprehensive approach, that proved surprisingly cost-effective to implement. The underlying research, conducted for the Pennsylvania Department of Education, identified the ABCs of student dropout risk: attendance, behavior, and course performance.
Without collecting any new data, Delaware can now analyze these ABCs both individually and in combination. Students are automatically red-flagged when the system deems them at risk, which is an extremely powerful, low-cost win for schools and districts. These systems not only answer the question, "Which of our students are at risk of dropping out?" but also answer the "why?" in a way educators can easily interpret and act on.
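To show why these flags are so easy to interpret, here's a deliberately simplified Python sketch of ABC-style flagging. The thresholds and field names are illustrative assumptions only; Delaware's actual EWS plugin, built on the Pennsylvania research, is far more sophisticated than a few fixed cutoffs.

```python
# A deliberately simplified sketch of the "ABC" flagging idea: attendance,
# behavior, and course performance checked against thresholds. Cutoffs and
# field names are illustrative assumptions, not the real EWS model.
from dataclasses import dataclass

@dataclass
class StudentIndicators:
    student_id: str
    attendance_rate: float      # fraction of days attended, year to date
    behavior_incidents: int     # office referrals / incidents, year to date
    failing_courses: int        # courses currently below passing

def abc_flags(s: StudentIndicators) -> list[str]:
    """Return which of the ABC indicators are raising a red flag."""
    flags = []
    if s.attendance_rate < 0.90:   # assumed chronic-absence cutoff
        flags.append("attendance")
    if s.behavior_incidents >= 3:  # assumed cutoff
        flags.append("behavior")
    if s.failing_courses >= 1:     # assumed cutoff
        flags.append("course performance")
    return flags

student = StudentIndicators("s42", attendance_rate=0.86,
                            behavior_incidents=1, failing_courses=2)
print(abc_flags(student))  # ['attendance', 'course performance']
```

Because each flag names the indicator that tripped it, an educator sees not just that a student is at risk but which of the ABCs to address first.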
Question #3—Are we getting the most out of our technology tools? Why or why not?
When you understand how your educators and students are using apps and other technology tools in the classroom, you can correlate the use of those tools with students’ performance in the subject areas the tools should be strengthening. Then, you can assess which technologies are paying off in terms of student growth.
LearnPlatform is an example of a tool that partners with Ed-Fi to help educators evaluate their technology tools. LearnPlatform captures students' time-on-task by app or tool to measure which technologies are being used effectively and determine how their use correlates to assessment scores. The beautiful thing about interoperability is that you can support the best-of-breed technology tools educators prefer with a plug-and-play approach, without ever disrupting the flow or security of student data.
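At its simplest, that correlation question can be sketched in a few lines of Python. The columns and single-tool framing below are assumptions for illustration; a platform like LearnPlatform runs this kind of analysis across many tools and with far more rigor.

```python
# A back-of-the-envelope sketch of the correlation question: does time-on-task
# in a given tool track with growth in the subject it targets? Column names
# and values are made up for illustration.
import pandas as pd

usage = pd.DataFrame({
    "student_id":        ["s1", "s2", "s3", "s4", "s5"],
    "math_app_minutes":  [320, 45, 210, 600, 90],   # time-on-task this term
    "math_score_growth": [12,  2,   8,   15,  3],   # pre/post score change
})

# Pearson correlation between usage and growth; a value near 1.0 suggests the
# tool's use and improvement move together (correlation, not causation).
print(usage["math_app_minutes"].corr(usage["math_score_growth"]))
```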
These questions are only the beginning.
The questions outlined here can be answered fairly quickly, depending on the quality of your data and the number of data sources your school or district is working with. The more advanced your systems become—with more mapping-tool and benchmark integrations—the more you can ask "why?" and get to the roots of your schools' most pressing challenges. Here's to leveraging data to answer the hard questions, and to better supporting our educators and students with the answers.