- What is data
- Where we go wrong with data
- Before collecting data
- We don’t plan for what we’re going to do with the data once we’ve got it
- We don’t know what teachers are currently doing, so how can we know what to assess?
- We know what the assessment is, so we just teach to it
- We design the summative assessment after we have taught, to ensure it only includes what has been taught
- Our data needs have us wanting a certain outcome
- We don’t set up the way data is collected
- We don’t use an assessment that is accessible for the student/s
- After the collection of data
- 5 Rules For Using Data Effectively
- 1. Know the purpose
- 2. Be painstakingly meticulous with what data is going to be collected
- 3. Designate data entry, review and action points when designing the curriculum
- 4. Ask ourselves, “Are we collecting data in the most effective and efficient way?”
- 5. Look at data to gain knowledge, not to support a belief
- References
If you’ve ever been in a professional learning session on data, you’ll be familiar with the collective groans commonly let out by staff. Fancy people in suits like data because it allows them to make predictions, draw conclusions, spot trends and design pretty graphs. They are then able to set targets and budgets and allocate resources accordingly. The problem in education is that we have tried to replicate this focus on numbers without truly understanding how to look at them properly.
So much is invested into schools based on data, yet, as I will argue in this article, a lot of school data isn’t reliable or valid. I will also analyse some of the mistakes many schools make when looking at data. Finally, I will offer 5 rules to follow in order to use data effectively.
Recently, Chris Minnich argued in an article in the Hechinger Report that “there is no strong evidence that analysing data leads to improvements in teaching.” In Intelligent Accountability, David Didau writes about how, in schools, too many decisions are bets made with little or no understanding of the odds: we estimate the best outcome and ignore the worst.
We clearly have a data dilemma!
What is data
Firstly, data is just information that has been collected. It can be anything from observational, to the results of a test. Secondly, it’s important to understand some key terms when talking about data.
- Reliability: Can it be repeated? If we’re collecting data from an assessment, would multiple teachers record the same results from the same test?
- Validity: Does the assessment actually measure what it was intended to measure?
- Accuracy: How close is the recorded data to the true value?
- Quantitative: anything number related, such as test scores and attendance rates.
- Qualitative: non-numerical, such as observations and interviews.
- Proximity: In Tools for Teachers, Ollie Lovell describes this as “how targeted the test is to the content taught.”
- Distal: data from a more general assessment, e.g. a standardised test.
Where we go wrong with data
Before collecting data
We don’t plan for what we’re going to do with the data once we’ve got it
Many schools perform summative assessment tasks (performed at the end of a course to indicate student achievement against outcomes) at various stages throughout the year without any particular purpose for them. They might be standardised ones like the National Assessment Program — Literacy and Numeracy (NAPLAN) or ACER’s Progressive Achievement Tests (PAT), or an end-of-year English exam.
Firstly, if no time has been structured in to look at these results, then what is the point of them? Secondly, if we are just going to look at the number at face value, how can we ensure that the results are reliable?
In “What every teacher needs to know about assessment”, Dylan Wiliam talks about how change scores are inherently unreliable due to random variation in things like whether a student happened to study the particular areas that were in an exam. Due to this variance, summative assessment scores need to be looked at from the view that a student’s actual score is plus or minus 10 of what they received.
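Wiliam’s point can be illustrated with a quick simulation (a hypothetical sketch; the noise figures are invented for illustration, not taken from his talk): even when every student genuinely improves by the same amount, the difference between two noisy test scores is noisier than either score on its own.

```python
import random
import statistics

random.seed(42)

# Hypothetical model: each observed score = true ability + random noise
# (e.g. which topics happened to appear on that exam). Numbers are invented.
NOISE_SD = 5       # measurement error on a single test, in marks
TRUE_GROWTH = 3    # every student genuinely improves by 3 marks

changes = []
for _ in range(10_000):
    true_score = 60
    test1 = true_score + random.gauss(0, NOISE_SD)
    test2 = true_score + TRUE_GROWTH + random.gauss(0, NOISE_SD)
    changes.append(test2 - test1)

# The error in a change score combines the errors of BOTH tests:
# sd(change) = sqrt(5^2 + 5^2), larger than the error of either test alone.
print(statistics.mean(changes))   # close to 3 (the true growth)
print(statistics.stdev(changes))  # close to 7.1, not 5
```

Because the errors of the two sittings add, an average growth of +3 marks can show up as anything from roughly –11 to +17 for an individual student, which is why change scores for individuals tell us so little.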
Another example: a student might record a low score for algebra. However, when we look into it, the two questions they got wrong had not yet been covered in class.
Also, if we have structured in time to analyse the data, do we actually have the resources and systems to support students who are displaying characteristics of being below their grade level expectations? If you’re using a Response to Intervention model, who will provide support for those Tier 2 and 3 students?
Another mistake is when we set pre-tests with the intention of finding out students’ prior knowledge, but then don’t actually adjust our teaching programs based on the information.
If the intention of the data collection is to look at growth and trends, then who is going to analyse this and what will be done afterwards?
We don’t know what teachers are currently doing, so how can we know what to assess?
We’ve all been in a meeting where teachers are trying to work out what outcomes to assess before report writing time. This is the first mistake. We should know what outcomes are being assessed throughout the year, before the year begins!
The second mistake is deciding on the outcomes despite not knowing what teachers are currently doing. There is a belief amongst some educators that teachers should be allowed to use their expert judgement to work out what to teach their students. While this might work for those lucky enough to be taught by teachers with that expertise, it only widens the gap for those who aren’t. It also increases the workload for all teachers, rather than sharing it, and leads to many teachers producing poor lessons.
Coming back to the need for valid and reliable data, how can we get this if teachers are teaching very differently? The data we’re collecting isn’t necessarily a depiction of student learning; rather, it shows us which teachers’ methods are more effective (and that’s only if they’ve actually covered the same topics).
We know what the assessment is, so we just teach to it
When assessments have already been pre-organised, we can still run into problems because teachers know exactly what the task is, so they target their teaching towards it. There are two ways to address this. First, develop a culture where learning is prioritised over results; this needs to be ingrained within staff, students and the community. Second, look at the overall design of the assessment. Does it give students an opportunity to show what they have really learnt? For example, are we only asking students to show us their knowledge of facts and procedures, or can they elaborate and showcase their deeper knowledge?
We design the summative assessment after we have taught, to ensure it only includes what has been taught
In an “each teacher is the expert” curriculum, this is particularly dangerous. We trick ourselves into thinking that we have created a fair task because it keeps all the teachers happy: they all get to have their say and can add or remove questions as they see fit. However, what if we miss out on assessing major concepts just because not everyone has taught them?
Our data needs have us wanting a certain outcome
Ethically and morally this isn’t right, but it can happen even subconsciously. We might “need” certain results for funding purposes, or we face pressure to perform (something which might increase if teacher pay is determined by results), and so we focus our teaching on attaining certain results rather than on the cumulative effect of learning.
We don’t set up the way data is collected
If one class performs a test where everyone enters silently and sits in their individual spots, while another class enters wildly and students are allowed to sit wherever they want – in chairs, on the floor, next to someone else – how can we say that the data collected from this test is valid and reliable?
Or we ask students to participate in a survey (e.g. Tell Them From Me) and then allow them to chat to each other. They might then decide to make up a story about bullying, or even worse, target a particular teacher.
We don’t use an assessment that is accessible for the student/s
This might be having students perform a Mathematics task where they don’t have the vocabulary or background knowledge to understand what they are reading. We then assume that they lack Mathematics knowledge, when actually it’s their reading and comprehension skills that have let them down.
Or we ask them to perform an online test when they don’t have the computer skills to enter their responses properly. With the trend towards more online testing, a lack of computer skills could greatly affect results.
After the collection of data
Not enough student work samples are taken into consideration
We might have a class of 20 students where the majority appear to be reading fluently, and so confirmation bias (we search for ways to link information to our prior beliefs) leads us to believe that our teaching is effective. However, upon closer inspection we can see that the four students who aren’t reading fluently should be able to, and it’s probably our ineffective instruction that is letting them down.
The assessment doesn’t link to what has been taught
This happens when we purchase externally developed exams or look at data from a standardised test. It is okay if we take this into account, but if we have students sit the test just for the sake of it, is that a good use of time? A further consequence is a decrease in students’ self-efficacy and motivation, as they are left feeling inadequate.
We look at data too long after the assessment was made
For example, the NAPLAN test:
- Students sit the test in May
- Results are then released in August
- School leaders might not analyse them until September
- Present the findings to staff in October
This means there could be a five-month gap between when the test was sat and when the results are in front of classroom teachers. If teachers wanted to use the results diagnostically, the information would no longer be reliable.
However, that’s not to say there isn’t a place for standardised tests like NAPLAN. We just need to understand that they are a snapshot in time and can be used as a way of seeing trends in groups of students. Hopefully, we are not relying on them to provide us with new information on individual students; if we are, the information won’t be accurate or reliable, because too much time has passed since students completed the assessment.
Not seeing the stories behind the numbers
“Teaching takes place in time, but learning takes place over time.”
John Mason (Griffin, 1989)
How do we know that the results we have collected are the direct result of effective teaching from the child’s current teacher? It could have been a previous teacher, their parents or a book they read! The results from a test might also show that a student performed very poorly, without taking into account that they have been dealing with their parents going through a breakdown in their marriage.
Or we compare current data to previous years without taking into account the impact that the COVID-19 pandemic would have had on our students’ learning.
If we pull out the right data, we can get it to tell us anything we want
When entering discussions from a certain viewpoint we can be influenced to present information in a certain way. For example, we may have instigated a new program to be implemented and have a meeting with senior leaders who want to see evidence of progress. We might feel a need to only show favourable data, so we don’t disclose the full story.
* Disclaimer: I’ve probably made all of these mistakes at some stage in my career.
5 Rules For Using Data Effectively
Now, there’s no doubt that we need data. Without it, how can we measure whether things are working or not? However, there’s also no doubt that most of us can use data better. Here are 5 ways to do that:
1. Know the purpose
If the main goal of education is to maximise student learning, then what data will actually help us improve student learning outcomes? In Back on Track, Mary Myatt wants us to ask ourselves, “How much difference have I made to the children I’ve been teaching?”
Know what data you’re going to collect, why it is important and how the new information will guide your teaching.
Do we want to find out if students are learning what has been taught or where individuals sit when compared to others? Do we want to know how they compare across their grade at their school or how they are doing from a systems-wide level?
Not all assessments have to do everything. In fact, they just need to do what you wanted them to do – give you the information that you need.
2. Be painstakingly meticulous with what data is going to be collected
Building on from the previous point, we know that data is just information, so we could gather data about everything if we really wanted to (especially with all our spare time). Questions to consider:
- Is some data more accurate than others?
- What information would cause you to change your mind/practice?
- What do we want to know, and will this data give us the answer?
- If you didn’t have the data, how much of a difference would it make to your teaching and students’ learning?
3. Designate data entry, review and action points when designing the curriculum
Time needs to be allocated for when the data is going to be analysed and who will be doing it BEFORE it has been collected. Any professional learning that is needed around the implementation also needs to be factored in.
This is another reason why having a low variance, coherent, sequential curriculum is so important. If there is a whole school curriculum, then this also gives us the opportunity to be intentional with planning in this time to look at data.
4. Ask ourselves, “Are we collecting data in the most effective and efficient way?”
Make the way data is going to be collected clear and explicit. Do you need another end of topic exam or can you gather enough information about the students through formative assessment?
Does your assessment directly link to what you want to find out? If you’re using a systematic synthetic phonics program and want to measure students’ decoding skills, why are you still assessing through running records (which don’t align with how you are teaching)?
5. Look at data to gain knowledge, not to support a belief
We need to look at data from the perspective of wanting information that increases our understanding of how to improve the learning outcomes of our students. Anything less means we are approaching it from a biased perspective, or it’s not giving us any new information.
Caveat on using data
As Nick Hart writes about in The tyranny of school metrics, not everything important in schools is measurable. He uses the example of wellbeing and how often we rely on staff surveys – in which staff may be reluctant to tell the full story. Wellbeing is also something which can fluctuate from day-to-day. So, we can’t just rely on data for all of our decision-making.
References
Didau, David. (2020) Intelligent Accountability: Creating the Conditions for Teachers to Thrive. John Catt Ed. Ltd
Griffin, P. (1989). Mathematics Teaching, 126, 12–13.
Lovell, O. (2022). Tools for Teachers: How to teach, lead and learn like the world’s best educators. John Catt Ed. Ltd
Minnich, C. (2022). OPINION: Data matters, but only if it leads to effective teaching action. The Hechinger Report.
Myatt, M. (2020). Back on Track: Fewer things, greater depth. John Catt Ed. Ltd