Connecting EdTech Product Use to Student Outcomes: Understanding How Research Studies Apply to Your District

August 12, 2021

Welcome to Student Outcomes: Connect the Dots With Usage Data. This post is the third in a three-part series designed to support education leaders in making better decisions about digital resources. The series focuses on using data to ensure equitable digital learning access and on using research to increase the instructional impact of digital resources. Here’s where you can read part one and part two of this series.

In part two, we offered districts guidance on finding research-backed edtech products using the Evidence for ESSA and What Works Clearinghouse (WWC) websites.

Here, we dive into tools you can use to interpret the significance of studies and judge their applicability for your district. For interpreting significance, we’ll use the Evidence for ESSA Rating, Evidence for ESSA Average Effect Size, and the What Works Clearinghouse Improvement Index.

For applying study results to your district, we’ll look at how to compare sample size, student and school characteristics, and implementation.

Interpreting the Study Results: How Effective Is the Product?

Evidence for ESSA Rating

You can use the Evidence Provisions of the Every Student Succeeds Act (ESSA) to measure a resource’s effectiveness in improving student outcomes. Here is a brief overview of the provisions.

  • Strong: At least 1 well-designed and well-implemented experimental study (i.e., randomized).
  • Moderate: At least 1 well-designed and well-implemented quasi-experimental study (i.e., matched).
  • Promising: At least 1 well-designed and well-implemented correlational study with statistical controls for selection bias.

Results from a study with a higher rating indicate more rigorous evidence of the resource’s effectiveness. A similar population of students in a similar educational setting, using the resource at similar dosages, is likely to see results similar to those found in the study.

Evidence for ESSA Average Effect Size

To compare the effectiveness of two applications, you can use Evidence for ESSA’s Average Effect Size measurement. If the average effect size of application A is 0.1 and application B is 0.2, then application B improved student outcomes by twice as much as application A (in standard-deviation units).

This metric can be found on an application’s page through the Evidence for ESSA website. (For a step-by-step guide to finding applications and related studies on the Evidence for ESSA website, head over to the second post in this series.)

  • Effect sizes quantify the relationship between a program or application and student learning. A larger effect size indicates more learning gains associated with the usage of a program/application (see the short sketch after this list).
  • An effect size of 0.2 or higher is generally considered substantial for an edtech intervention in an experimental or randomized study. Non-experimental studies such as matched-pairs and correlational designs require a higher effect size benchmark to be considered substantial. The type of application also matters: tutoring-based platforms typically yield higher effect sizes than class-based platforms.
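To make the effect size idea concrete, here is a minimal Python sketch that computes a standardized mean difference (Cohen's d, one common effect-size measure) from treatment-group and control-group test scores. The scores below are made-up illustration data, not figures from any study, and published studies may use related but slightly different formulas (such as Hedges' g).

```python
import statistics

def cohens_d(treatment_scores, control_scores):
    """Standardized mean difference (Cohen's d) between two groups."""
    mean_t = statistics.mean(treatment_scores)
    mean_c = statistics.mean(control_scores)
    sd_t = statistics.stdev(treatment_scores)
    sd_c = statistics.stdev(control_scores)
    n_t, n_c = len(treatment_scores), len(control_scores)
    # Pool the two groups' standard deviations, weighting by degrees of freedom.
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical test scores, for illustration only.
treatment = [78, 85, 82, 90, 76, 88, 84, 80]
control = [76, 80, 77, 83, 79, 81, 82, 78]
print(round(cohens_d(treatment, control), 2))
```

In words: the effect size is the difference between the two groups’ average scores, expressed in standard-deviation units, which is what makes results comparable across different tests and studies.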

What Works Clearinghouse Improvement Index

Like the average effect size, this measure lets you compare the effectiveness of applications: a higher number indicates that the application improved student outcomes by a larger amount.

You can find this metric on an application’s page through the What Works Clearinghouse website. (For a step-by-step guide to finding applications and related studies on the WWC website, head over to the second post in this series.)

  • This value represents the expected percentile improvement for the average student in the control group had they used the edtech product. A score of 10 means the average control-group student would have scored 10 percentile points higher if they had used the product.
  • An effect size of 0.2 corresponds to an improvement index of about 8 (the conversion is sketched below).
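Under the hood, the improvement index is a re-expression of the effect size through the normal distribution: convert the effect size to a percentile with the standard normal CDF, then subtract the control-group baseline of the 50th percentile. Here is a minimal sketch of that conversion (assuming the standard WWC-style formula); it reproduces the rule of thumb above, where an effect size of 0.2 maps to an index of about 8.

```python
from statistics import NormalDist

def improvement_index(effect_size):
    """Percentile-point gain expected for the average control-group student,
    derived from the effect size via the standard normal CDF."""
    return 100 * NormalDist().cdf(effect_size) - 50

print(round(improvement_index(0.2), 1))  # ~7.9, i.e., about 8 percentile points
print(round(improvement_index(0.1), 1))  # ~4.0
```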

Applying Results to Students in Your District

Relying on the effect size or improvement index of a product alone can be misleading. It is essential to consider how closely the studies model the environment of your district, including the number of students and the characteristics of schools and students.

Sample Size

  • Most edtech studies have sample sizes ranging from a few dozen to a few hundred, while school districts have thousands of students. Where possible, focus on studies with large sample sizes.
  • The number of participants is displayed as “No. Students” on Evidence for ESSA and as “Students” on What Works Clearinghouse. Notably, sample size influences effect size: it is generally more difficult to register a large effect size in studies with large sample sizes.

Student and School Characteristics

  • On the Evidence for ESSA website, take a look at the “Grades Studied” and “Groups Studied” sections on the right side of a product page to see if the students who participated in the study represent those in your district.
  • To find similar information on the What Works Clearinghouse website, click on a study and navigate to the “Sample Characteristics” tab.

Implementation

  • Studies typically meet the vendor’s recommended product use, which can be as high as several hours per week; in practice, students rarely approach that benchmark. Studies also often include several hours of professional development training for teachers, which can substantially impact the efficacy of use. Evidence for ESSA does not display product usage information, but you can still find it by opening a study and searching for those details (Ctrl+F for “minute” or “hour” is a good shortcut).
  • What Works Clearinghouse provides information on average usage and other implementation specifics under a study’s “Study Details” tab.

Next Steps: Once You Choose an App, How Do You Implement the Product Successfully?

Evidence for ESSA developed a framework that guides districts through the selection, implementation, and evaluation processes.

How Can ClassLink Help?

With the recent release of Analytics+, ClassLink districts using this new product can now view all edtech usage data, whether an application is opened through LaunchPad or accessed directly through a weblink or mobile application. Analyzing this data is crucial to “evaluation,” the last step of Evidence for ESSA’s framework.

Administrators at ClassLink districts can pinpoint how often apps are being used at the district, school, or classroom level and evaluate whether they are hitting the benchmarks seen in studies and recommended by vendors. The findings can inform changes, including additional professional development, modifying the number of licenses, or purchasing another supplemental app.
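As a rough illustration of that kind of benchmark check, the sketch below computes average weekly minutes per student for one app from an exported usage file and compares it to a vendor-recommended dosage. The CSV layout, column names, app name, and 60-minute benchmark are all hypothetical examples for this sketch, not Analytics+ field names or a real vendor recommendation.

```python
import csv
from collections import defaultdict

# Assumed benchmark for illustration; use the dosage from the vendor or the study you rely on.
BENCHMARK_MINUTES_PER_WEEK = 60

def average_weekly_minutes(path, app_name):
    """Average minutes per student per week for one app, from a hypothetical
    usage export with columns: student_id, week, app_name, minutes_used."""
    totals = defaultdict(float)  # (student_id, week) -> total minutes
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["app_name"] == app_name:
                totals[(row["student_id"], row["week"])] += float(row["minutes_used"])
    # Note: weeks with no recorded use are not counted, so this is an average over active student-weeks.
    return sum(totals.values()) / len(totals) if totals else 0.0

avg = average_weekly_minutes("usage_export.csv", "Example Math App")
status = "meets" if avg >= BENCHMARK_MINUTES_PER_WEEK else "falls short of"
print(f"Average use: {avg:.0f} minutes/week ({status} the assumed {BENCHMARK_MINUTES_PER_WEEK}-minute benchmark)")
```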

Visit the Analytics+ product page to learn more.

Additional Resources

  1. The Evidence Provisions of the Every Student Succeeds Act (ESSA)
  2. Interpreting Effect Sizes of Education Interventions
  3. Translating the Statistical Representation of the Effects of Education Interventions Into More Readily Interpretable Forms
  4. How Features of Educational Technology Applications Affect Student Reading Outcomes: A Meta-Analysis
  5. K–12 Professional Development Is Critical, So Make It Count


About the Authors

Zach Friedman, Analytics Intern, ClassLink

Mary Batiwalla, Director of Evaluation Analytics, ClassLink

For over a decade, Mary has dedicated her career to education, serving as a practitioner, researcher, and executive leader. In her most recent role as Assistant Commissioner at the Tennessee Department of Education, she led assessments, accountability, and data governance.