
Most conversations about improving maths instruction begin with assumptions. We assume that certain routines are in place, that modelling is clear, or that students are getting enough practice. In reality, improvement starts when we stop guessing and describe what is actually happening in classrooms. Without a shared way to measure instruction, planning quickly becomes opinion-driven rather than evidence-informed.
For leaders, this distinction matters. Strategic improvement depends less on ambition and more on diagnostic clarity.
Why description matters before decision-making
One of the most persistent problems in school improvement is that decisions are often made before the problem is properly described. Professional learning is selected, programs are adopted and new practices are introduced without a clear understanding of what is actually happening in classrooms.
This pattern is not new. Decades of research into teacher effectiveness and professional learning show that traditional professional development often starts at the wrong point. It begins with the delivery of new knowledge or strategies identified by someone external to the classroom, rather than with an analysis of what students and teachers actually need to improve learning (Muijs et al., 2014).

When the need is externally defined, goals belong to others. Teachers are asked to engage with ideas without first understanding why they matter or how they connect to their own practice. Engagement becomes optional, resistance becomes understandable and change remains shallow.
A second issue is lack of specificity. Much professional learning is too general to be useful. It is often divorced from the realities of teaching particular students in particular contexts (Muijs et al., 2014). Research consistently shows that professional learning has the greatest impact when it is tightly focused on specific challenges, grounded in classroom evidence and linked directly to student outcomes rather than abstract principles (Antoniou et al., 2015; Timperley et al., 2008).
There is also a deeper inconsistency at play. We know a great deal about how people learn, yet we often ignore these principles when designing learning for teachers. Effective instruction for students relies on clarity, modelling, feedback and deliberate practice, but professional learning frequently fails to mirror these same conditions. As a result, teachers are expected to adopt practices they have never experienced as learners themselves.
Finally, schools are regularly encouraged to adopt new approaches or innovations without robust evidence of impact. Whether driven by policy, technology, or trends, this cycle of adoption without evaluation leads to wasted time, wasted resources and initiative fatigue. What is missing is disciplined diagnosis and ongoing evaluation, rather than enthusiasm for the next solution.
All of this points to the same conclusion. Improvement does not start with choosing what to do next. It starts with accurately describing what is happening now. Without that clarity, even well-intentioned decisions are unlikely to lead to meaningful change.
The purpose of the Effective Maths Instruction Scorecard

The Effective Maths Instruction Scorecard was designed to give leaders a clear and shared language for discussing maths instruction.
In less than fifteen minutes, leadership teams can rate nine core elements of instruction, including retrieval practice, modelling, guided practice, independent practice, opportunities to respond, lesson structure, and alignment to the stage of learning. Each element is described across four levels, from emerging through to highly effective.
What matters is not the total score itself.
What matters is what the score makes visible.
The descriptors provide a common reference point. Instead of debating whether modelling is “good enough”, leaders can point to specific indicators that describe what modelling actually looks like at different levels of quality.
This moves conversations away from opinion and toward evidence.
Linking the scorecard to explicit instruction
I recently released The Essential Guide to Explicit Instruction in Mathematics and if you’ve read that, the scorecard will feel immediately familiar.
Clear modelling.
Purposeful guided practice.
Delayed release to independent work.

The guide focuses on building understanding. It explains why explicit instruction matters, how it reduces cognitive load and what it looks like during the acquisition stage of learning.
The scorecard serves a complementary leadership function.
It answers a different question.
Are these practices visible, consistent and reliable across classrooms?
For leaders, this distinction is important. Knowing what good instruction looks like does not guarantee that it is happening consistently. Measurement bridges that gap.
Why the end of the year is the ideal time to use it
The end of the year is rarely the right moment to introduce new initiatives.
It is, however, the ideal time to take stock.
Completing the scorecard now allows leadership teams to step back from day-to-day pressures and look for patterns. Where are expectations clear? Where does practice vary? Which routines are secure, and which are fragile?
Most schools find that their results cluster in predictable ways. Modelling may be relatively strong, while guided practice varies. Retrieval might be present, but inconsistent. Students may be released for independent practice too early in some classrooms and too late in others.
These patterns are far more useful than a long list of unrelated improvement goals.
Turning measurement into focused action
The final section of the scorecard provides suggested next steps based on whether a school sits in the emerging, developing, proficient or exemplary range.
Importantly, these recommendations do not call for new programs. They focus on tightening routines, clarifying expectations and aligning teaching more closely with the stage of learning.
For leaders, this creates a clear bridge between diagnosis and action.
The scorecard can be used to:
- guide walkthrough focus
- prioritise coaching conversations
- shape professional learning plans
- monitor growth across terms
Rather than asking “What should we work on next year?”, leaders can ask a more productive question.
“What does the scorecard tell us matters most right now?”
When reflection needs support
For some leadership teams, diagnosing the problem is the easy part. The harder work is supporting teachers to change practice in a sustainable way.
That is where Implementing Effective Primary Maths Instruction sits.
This two day online course is designed for school leaders and middle leaders who are responsible for turning reflection into consistent classroom practice. It focuses on curriculum and lesson design, assessment aligned to the stage of learning and the leadership moves that support teacher growth without overload.
Leaders learn how to use tools like the scorecard not as accountability instruments, but as guides for coaching, modelling and shared improvement.
A disciplined place to start
If you are planning for next year, start with restraint.
Measure before you modify.
Describe before you decide.
Choose one priority that genuinely matters.
Improvement does not begin with new initiatives. It begins with clarity.
References
Antoniou, P., Kyriakides, L., & Creemers, B. P. (2015). The dynamic integrated approach to teacher professional development: Rationale and main characteristics. Teacher Development, 19(4), 535–552.
Muijs, D., Kyriakides, L., Van der Werf, G., Creemers, B., Timperley, H., & Earl, L. (2014). State of the art – teacher effectiveness and professional learning. School Effectiveness and School Improvement, 25(2), 231–256.
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2008). Teacher professional learning and development (Vol. 18). International Academy of Education.
