Teach Like a Researcher

Hello everyone!  Recently, I was invited to be a guest on the November panel of the Education Assessment Group at the Institute of Education, part of University College London.  It was a fantastic event and the other speakers were fascinating.  The whole thing was filmed, so I'll update this post with a link to the YouTube video as soon as it's available (you can see how terrible I am without a live audience giving me non-verbal feedback cues!).

In the meantime, this is the original transcript of my presentation - I had to edit a few bits out on the day due to time constraints.  

Also, the wonderful folks over at Did Teach have recently been voted 'Top Education Blog' by New York City Tutoring.  You can check out my contribution to their blog, concerning teacher workload, here.  Massive congratulations to them, I am so proud to be even a little part of such a fantastic bunch of teachers and former teachers!

Anyway, here's what I had to say to a bunch of academics and Education  Masters students on Monday, November 8th, 2020...

Teach like a Researcher
(How an MA in Assessment Changed my Mind)

It is my belief that a research degree can and does turn you from a teacher - someone who delivers the curriculum - into an educationalist - someone who understands why the curriculum exists and uses that knowledge to help everybody progress.  Here’s why:

As a teacher, the progress of the children is taken as a direct reflection of your ability as a professional educator, especially in Primary school, where teachers see the same children all day, every day.  

This progress is usually measured through large-scale, national testing (the SATs), and the resulting statistics are made public.  But many professional teachers (I was one of them) resent or distrust the tests, especially when end-of-year papers are used for mid-year judgements.  This resentment possibly stems from a lack of understanding of the purpose of assessment. 

Recently, a tweet caught my attention:

(Downing, A, 2021)

This was part of a thread about administering ‘mock’ SATs in the first half term of the year as a means to assess progress so far, and many teachers were not happy with this.  The complaints boiled down to three main points:

  • Terminal tests, such as the SATs, are summative assessment tools and therefore generate data that results in unfair judgements of children and teachers alike (Pells, 2017)
  • They create a ‘blame’ culture among school staff (Addressing Blame Culture in Your School Is Vital…for Pupils and Teachers, n.d.)
  • They reduce children to statistics (Evidence - More Than a Score, 2018).

It is easy to agree with this viewpoint when you’re looking at it from a class teacher’s point of view.  It’s what I used to think.  But this argument only holds up if you consider summative assessments to be unyielding, unmalleable things.

Teacher training can be quite myopic, being mostly micro-focused on the world of the classroom.  Rarely do newly qualified teachers have the time or opportunity to extend their gaze beyond their own class.  Research forces you to step back and see the bigger picture of the full education process.  It can help connect the dots between teaching children up to their KS2 SATs and preparing them for the challenges of secondary school, further education and beyond. 

Let’s look at each of those three arguments against assessments in turn.

Summative assessment generates data that results in unfair judgements of children and teachers alike

Assessment can be split into two strands: formative and summative.  Formative assessment is assessment over time.  It is assessment for learning.  Summative assessment is more of a snapshot judgement based on a stand-alone test.  Essentially, it is assessment of learning (Black & Wiliam, 2018).  The assessment at the end of Key Stage 2 (the SATs) is summative.  The data they create are intended to inform schools, parents and the government about the progression and attainment of Primary school children at the end of that stage of their education.  In this sense, I would have to agree with @DowningAndy and the rest of the Twitterati.  These SATs papers should certainly not be used summatively for an end-of-term assessment, especially when there are still at least six months of education to go.  It’s absurd.  

But if you approach the papers more formatively, there is a wealth of information to be found within the data they create.  This is where thinking like a researcher comes in.  Instead of seeing a high proportion of children scoring below a pass mark as a summative ‘waste of time’ (Who, L, 2021), you start to see it as a formative signpost of where to focus your efforts moving forwards.  You begin to use the results as the basis for the next term’s learning.  Instead of resentfully ploughing on with tests because that’s what you’ve been told to do (Chloe, 2021), you start to recognise skews in the results and use them to open discussions with the entire teaching staff.  Progress meetings cease to feel like court hearings and start to be open dialogues about specific educational needs.  Your teaching world starts to expand beyond the boundaries of your immediate control.  You start to see beyond the children who just ‘don’t test well’ (Chloe, 2021) to the construct-irrelevant variance that might be the cause, and, more importantly, you can take steps to remedy it, essentially allowing for better equity during the actual tests.  But you can’t do any of this if you don’t know how.

Summative assessments create a ‘blame’ culture among school staff

Before I did my MA, I was one of those teachers who would immediately baseline my class to assess gaps.  During the pupil progress meetings that followed, I would be asked why x% of the class hadn’t scored above 70% on the test.  The blame for the children’s past learning was placed at my feet, so, naturally, I would point out that, since I had only had the class for a few weeks, it must be the fault of the previous teachers.  And this pathway was not unique to me or my school (Addressing Blame Culture in Your School Is Vital…for Pupils and Teachers, n.d.; Black & Wiliam, 2018).

It doesn’t have to be this way though.  If assessments were approached more like research into how each child is progressing (which is, after all, the aim of assessment (Black & Wiliam, 2018)), then the anomalies highlighted by the data would be cause to investigate and fix, not blame and avoid.  Cooperative matrices of support could be established and better, earlier success engendered.  However, this research-led thinking, like everything, takes training.

Summative assessments reduce children to statistics

Once the children have completed an assessment, it is marked and the results are often compiled into a spreadsheet, from which comparative judgements can be made (Cronbach, L. J., 1971).  This is where the concern about children becoming mere statistics arises.  It stems from exam creators needing to know if certain questions are unintentionally biased (established through differential item functioning analyses (Martinková et al., 2017)); local authorities needing to know if key targets are being met; and governments needing to evaluate curricula (About Us, 2013).  Except, again, this is only the case if they are treated as terminal, summative assessments that are externally marked.

In truth, when used part way through the academic year, those spreadsheets don’t leave the school.  Instead, they are used, or should be used, to identify particular weaknesses at a school, class and even individual level.  If an entire school has an issue with, say, fractions or inferential thinking, then some intervention needs to take place!  

If a single class is anomalous in its inability to accurately perform long division, then perhaps the teacher needs support in that area, or the maths lead for the school needs to amend the calculation policy to ensure better success in the future.  As for the individual level, children absolutely should know where their strengths and weaknesses lie!  The end of the first term is exactly the time to point out these areas and a summative test, used formatively, is a useful tool to do it.  But again, I think that you need training in assessment to fully appreciate this.

Research education, especially for teachers, allows you to look behind the curtain.  Through understanding why and how assessment is planned, reviewed, interpreted and used, you move from what I’m calling ‘impotent knowledge’ - knowing what you have to do but not knowing (or not understanding) why - to more empowered knowledge.  You stop merely administering tests and start using them.  You move from being a teacher with no agency, who complains on Twitter, to being a positive force for change.  

Having a deeper knowledge of education research, particularly in assessment, allows you to stop taking the results of tests personally and start using them professionally.  Ultimately, it won’t matter if you are asked to administer terminal assessments after only seven weeks because you will have the knowledge of how to use whatever data they provide in the most formative and helpful way possible; moreover, you will have the vocabulary to eloquently and pointedly state your case for structured support to your school managers, which will only serve to help the children make progress.  And that, after all, is the whole point of teaching.

Thanks for reading! Like I said, I will update with a YouTube link as soon as it has been made available to me.  Until next time, help each other out, be kind, and stay healthy!

Carl Headley-Morris

tweet me!          email me!          visit my website!         book me!

References for this post:

About us. (2013, August 22). GOV.UK. https://www.gov.uk/government/organisations/standards-and-testing-agency/about
Addressing Blame Culture in your School is Vital…for Pupils and Teachers. (n.d.). Retrieved October 22, 2021, from https://www.teachwire.net/news/addressing-blame-culture-in-your-school-is-vital...for-pupils-and-teachers
Bagana, E., Raciu, A., & Lupu, L. (2011). Self esteem, optimism and exams’ anxiety among high school students. In Procedia - Social and Behavioral Sciences (Vol. 30, pp. 1331–1338). https://doi.org/10.1016/j.sbspro.2011.10.258
Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25(6), 551–575.
Capewell, D., Kranat, J., & Mullarkey, P. (2004). Framework Maths: National Curriculum. Revision. Oxford University Press, USA.
Chloe (2021), 19 October. Available at https://twitter.com/Mr_M_Musings/status/1448886192997556231 (accessed: 19 October 2021)
Cronbach, L. J. (1971). Test validation. In R. L. Thorndike (Ed.), Educational measurement (2nd ed., pp. 443–507). Washington, DC: American Council on Education.
Downing, A. (2021), 19 October. Available at https://twitter.com/DowningAndy (accessed: 19 October 2021)
Evidence - More Than a Score. (2018, August 31). https://www.morethanascore.org.uk/evidence/
Great Britain. Department for Children, Schools and Families, & Great Britain. (2008). Education and Training Statistics for the United Kingdom.
Jerrim, J. (2021). National tests and the wellbeing of primary school pupils: new evidence from the UK. In Assessment in Education: Principles, Policy & Practice (pp. 1–38). https://doi.org/10.1080/0969594x.2021.1929829
Kelly, R. (2017). Fixed-term Parliaments Act 2011.
Kitty C (2021), 19 October. Available at https://twitter.com/Mr_M_Musings/status/1448889216818110493 (accessed: 19 October 2021)
Martinková, P., Drabinová, A., Liaw, Y.-L., Sanders, E. A., McFarland, J. L., & Price, R. M. (2017). Checking Equity: Why Differential Item Functioning Analysis Should Be a Routine Part of Developing Conceptual Assessments. CBE Life Sciences Education, 16(2). https://doi.org/10.1187/cbe.16-10-0307
Ofsted. (2019). Education inspection framework (EIF). https://www.gov.uk/government/publications/education-inspection-framework
Pells, R. (2017, June 26). SATs having “damaging consequences” for both children and schools, teachers warn. The Independent. https://www.independent.co.uk/news/education/education-news/sats-having-damaging-consequences-children-schools-teachers-nut-survey-a7806571.html
Standards and Testing Agency. (2014). Key stage 2: mathematics test framework. https://www.gov.uk/government/publications/key-stage-2-mathematics-test-framework
Who, L. (2021), 19 October. Available at https://twitter.com/Mr_M_Musings/status/1448893374824648706 (accessed: 19 October 2021)