
Project: Rethinking Hattie's list of student achievement

Hattie's meta-study of influences on student achievement has been impactful but also heavily critiqued. In this project we will assess the quality of Hattie's list and launch a new, interactive platform for both researchers and school professionals.

Project information

Project name: Rethinking Hattie's list of student achievement: Meaningful effects from standardized meta-analyses
Project manager: Rickard Carlsson
Other project members: André Kalmendal, Thomas Nordström, Viktor Kaldo and Idor Svensson, Linnaeus University; Henrik Danielsson, Linköping University, Sweden
Participating organizations: Linnaeus University; Linköping University, Sweden
Financier: The Swedish Research Council (Vetenskapsrådet)
Timeline: 1 Jan 2021–31 Dec 2024
Subject: Psychology (Department of Psychology, Faculty of Health and Life Sciences)

More about the project

Hattie (2009) synthesized 800 meta-analyses covering 136 influences on student achievement and ranked them by effect size. The potential of this work cannot be overstated: finally, educators could learn in a simple way what improves achievement. For example, some things have strong effects on learning (e.g., response to intervention), others are roughly neutral (e.g., background music), and some are even negative (e.g., detention).

Unfortunately, Hattie’s approach is severely flawed. It combines and ranks existing meta-analyses that were never intended to be compared. They differ in several ways (e.g., in how effects are calculated), which makes the ranking invalid. Further, because the effect sizes are calculated from different outcomes, the same standardized effect size may correspond to a practically meaningful improvement (a year of schooling) or a trivial one (a few points on a test).
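The point about different outcomes can be made concrete with a small sketch. A standardized mean difference (Cohen's d) is the raw mean difference divided by the outcome's standard deviation, so converting it back to raw units depends entirely on which outcome was measured. The numbers below are hypothetical, chosen only to illustrate the scale problem:

```python
# Illustrative sketch with made-up numbers: the same Cohen's d
# maps to very different raw gains depending on the outcome's scale.

def raw_gain(d: float, outcome_sd: float) -> float:
    """Convert a standardized mean difference back to raw outcome units."""
    return d * outcome_sd

d = 0.40  # identical standardized effect reported by two meta-analyses

# Outcome A: a broad standardized test, SD = 100 points
# Outcome B: a short classroom quiz, SD = 2.5 points
gain_a = raw_gain(d, 100)  # about 40 raw points on the broad test
gain_b = raw_gain(d, 2.5)  # about 1 raw point on the quiz

print(gain_a, gain_b)
```

Ranking the two meta-analyses side by side would treat these as "the same effect", even though one corresponds to a substantial improvement and the other to a single quiz point.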

In this project, we will review Hattie’s list and assess it for quality and relevance. We will also launch an alternative platform with two interfaces – one for researchers and one for school professionals. On this platform, it will be possible to view and compare evidence in meaningful ways. For example, two different types of reading interventions can be compared with each other, but not with an intervention in mathematics.

Platform users can contribute to the reviews through a crowdsourcing model, similar to how Wikipedia works. Open science is an important foundation for the project: we will make sure that everything is open, transparent, reproducible, and reusable.

The end goal of the project is to provide school professionals with the tools to make informed, evidence-based decisions on how to improve learning and didactics. Our platform will have fewer effects than Hattie’s list; in return, we promise the highest level of quality, relevance, transparency, and openness.

The project is part of the research in the Methods and Meta-science research group.