Why a lady named Sharon still haunts me...

 Hello everyone!


Before we get going with this week's discussion, I have an update to my report-writing software post. I had a really nice chat with Ross from Twinkl.com, who gave me some information on why the only gender options available on their report writers are male or female.


Apparently, there was a non-binary option a while back, but it completely broke the syntax of the comment banks. I'm guessing that the comments had been written as strings. For those who don't know, a string is a chain of text characters, and computers are stupid, so they don't know that those words have meanings based on context. Why should they? Computers don't speak in words; they speak in zeros and ones.



Anyway, the gender will have been saved as a variable. You can think of this as an 'empty box' into which something is placed. Anything. When told to, the computer will go and get that thing out of the box. So, in the case of the report writer, the thing placed in the box is either the word 'male' or the word 'female'. Again, computers, being stupid, have no concept of what these words mean; they just know that they have to grab this particular group of letters and put them in the appropriate place when they're told to do so.


If the variable in the box is 'female', then the string needs to be taken from the bank of comments that contain female pronouns (she, her, hers, etc.). Because of this, it's not a simple case of adding 'they' to the list of available pronouns in the box; you'll end up with sentences like:


"They have tried really hard with his work and they is always asking intriguing questions."


Ross told me that the coders over at Twinkl Towers are now beavering away to fix the issue, which may well involve breaking up strings, adding yet more variables and ensuring that everything makes sense when the comment is returned to the user.
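Based on Ross's description, I'd guess the fix looks something like the sketch below: break each comment into a template, store every pronoun form - together with the verb forms that have to agree with it - in a lookup table, and stitch the pieces back together per pupil. Again, this is the general technique, not Twinkl's actual code:

```python
# One row per gender option: every pronoun form, plus the verb
# conjugations that must agree with it.
PRONOUNS = {
    "male":       {"subj": "he",   "poss": "his",   "has": "has",  "is": "is"},
    "female":     {"subj": "she",  "poss": "her",   "has": "has",  "is": "is"},
    "non-binary": {"subj": "they", "poss": "their", "has": "have", "is": "are"},
}

# The comment is no longer one hard-coded string but a template
# with a placeholder wherever a pronoun or agreeing verb belongs.
TEMPLATE = ("{subj} {has} tried really hard with {poss} work "
            "and {subj} {is} always asking intriguing questions.")

def build_comment(gender: str) -> str:
    comment = TEMPLATE.format(**PRONOUNS[gender])
    return comment[0].upper() + comment[1:]  # capitalise the first word

print(build_comment("non-binary"))
# -> "They have tried really hard with their work and they are
#    always asking intriguing questions."
```

The cost, of course, is that every comment in the bank has to be rewritten as a template, which is presumably exactly what the coders are beavering away at.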


So they are aware of the flaw and are looking to correct it.


Again, Linden (Lindon? I forgot to check) over at schoolreportwriter.com already has this covered. 


Right, on with this week's exploration of education...

The SATs results have (finally) been released this week and Primary schools up and down the country seem to be chanting the same Randle McMurphy quote from One Flew Over the Cuckoo's Nest:

"But I tried, didn't I? Goddammit, at least I did that."



The SATs (Statutory Assessment Tests) are the exams that 10- and 11-year-old children sit in England at the end of their Primary education. They are a summative assessment of the previous four years of maths reasoning, arithmetic, reading, spelling, punctuation and grammar. They generally take place over four mornings and have a mixed fan-base. Some teachers love them, some hate them, some could take them or leave them. The same goes for the children who sit them.


They are designed to assess the teaching and learning capability of the school, not the child[1]. In fact, as far as government policy goes, the children need not be concerned with the results at all; it's all about accountability and comparison of schools and teaching staff. It's actually the media that have made it the competitive environment it is (yes, the government holds average test results for reasons of comparison, but the schools are not ranked).


During my MA, I had the opportunity to dig pretty deeply into several aspects of the SATs from a policy point of view. As a classroom teacher, I never had this luxury because I was so concerned with getting at least 89.6% of the children through with a passing grade (here's the kicker: there is no passing grade. There is 'ready for Secondary school' and 'not ready for Secondary school'. We'll look at these categories a little later). It turns out the SATs are actually a pretty reasonable way to assess children. They are not the best way, nor the only way, nor indeed the optimal way, but I do think they are reasonable. Again, there is a lot more on this theme here.


The company responsible for the SATs, Capita, is (in their own words) "a consulting, transformation and digital services business"[2]. Capita have been responsible for the printing, distribution, collation and marking of over 4 million Key Stage 2 test papers since 2019. They're also responsible for the marker training. It is fair to say, judging by the teacher and media response, that they didn't do a very good job.




I actually took part in the training, and I have to be a bit careful about what I say because I signed various NDAs, so I can't talk too much about the pretty abysmal way they handled things (missing test scripts; poor training; contradictory answer materials... I can't mention any of that. It wouldn't be right, so make sure you didn't read that bit). But I've gone through said contracts and I'm pretty sure I'm within my rights to mention my personal experience with the papers themselves and the philosophy behind the mark schemes, especially which answers were and were not acceptable.


I have to say, while I personally wasn't locked out of any sessions, I know people who were. Also, for some reason, despite Capita knowing the exact number of expected participants, there was a gateway problem and people could only be admitted to the video-link software a few at a time, making me 30 minutes late for one session even though I was ready fifteen minutes before it was due to start. It's okay though, I was told that I could easily catch up and hadn't really missed anything.


It turns out that this was more true than I realised. On more than one occasion, the live training sessions involved a supervisor presenting a video of a Capita employee reading the provided material word for word. 


It was a bit of a mess, if I'm honest. Not that it affected anyone's ability to do the marking (although that, at times, was a challenge in itself; more on that later), but it was stretched over two four-hour afternoon slots when the whole thing could easily have been accomplished in one day. I do want to make it perfectly clear that this is in no way the fault of the supervisors themselves. Presenting remotely is a horrible and thankless task, and I think they did a good job. It's just that the material itself was flawed.


But what did I learn?

The first surprise was that there were many, many more answer sheets for official markers beyond the published set that accompanies past papers. We were given the official mark scheme, a specialist guidance scheme and, for certain questions, a 'Themed Response Table', which contained actual answers received from pilot tests, tabulated into 0-, 1-, and 2- or 3-mark sections.


That was very helpful. At least, it was until one of the answers in the 0-mark themed response table turned out to be exactly the same as one of the answers in the 1-mark section of the specialist guidance. But I couldn't possibly mention that.


What was perhaps not more helpful, but certainly more enlightening, was the advice we were given that we, as markers, were to 'mark positively'. This meant that if an answer could be interpreted to fit the mark scheme, then it got the mark - even if the answer clearly suggested that the child had not understood the meaning of the text or the question. In short, even if they had not comprehended the English in the English Comprehension test. In many cases, provided they had used one key word in their answer, they got the mark.


It was great for us markers, as it meant you could get through literally hundreds of sections (you're not sent full scripts, or even complete pages, just sections) in a matter of minutes. For example, let's say that the answer to a question was 'things had to be perfect'; if you saw the word 'perfect', then it got the mark. The rest of the sentence didn't matter.
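If you wanted to express that marking rule as code - and this is purely my own illustration of the rule, not anything Capita gave us - it would be little more than a keyword check:

```python
def positively_mark(answer: str, keywords: list[str]) -> int:
    """Award the mark if any acceptable keyword appears in the answer,
    regardless of what the rest of the sentence says."""
    answer = answer.lower()
    return 1 if any(word in answer for word in keywords) else 0

# 'things had to be perfect' is the expected answer; 'perfect' is the keyword.
print(positively_mark("Everything had to be perfect", ["perfect"]))    # 1
print(positively_mark("It was far from perfect", ["perfect"]))         # 1 - misses the point, still scores
print(positively_mark("Things had to be exactly right", ["perfect"]))  # 0
```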


This meant that, sometimes, a child could write an answer that completely missed the point, but use a keyword within that answer and still get the mark.



So that's positive marking. 


The second interesting thing I took away with me from marking this year - and something I really wish someone had told me when I was teaching - is going to sound so obvious when I write it down.


The answer is assumed to directly relate to, and be continuous with, the question. What do I mean by this? I mean that all pronouns in the answer are assumed to refer to the final noun mentioned in the question. So, let's imagine the text describes a ghost smiling at a little girl named Lizzie as it passes. If the related question asked (a made-up question):

"How do we know that the ghost saw Lizzie?"

And the child answered:

"She smiled" 

They would get zero marks, because 'she' is taken to refer to Lizzie, and Lizzie didn't smile at the ghost. However, if they had written "She smiled at Lizzie", the mark would have been awarded, because the child has made it clear that 'she' in the answer refers to the ghost.
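Put as code - again, my own illustration of the convention, not an official algorithm - the rule amounts to: substitute the last noun of the question for any bare pronoun in the answer, then check whether the resulting statement is true of the text:

```python
# Marking convention: a bare pronoun in the answer is assumed to refer
# to the final noun mentioned in the question.
question = "How do we know that the ghost saw Lizzie?"
last_noun = "Lizzie"  # the final noun in the question

def resolve(answer: str) -> str:
    """Substitute the assumed referent for the pronoun 'she'."""
    return answer.replace("She", last_noun).replace("she", last_noun)

print(resolve("She smiled"))
# -> "Lizzie smiled": not true in the text, so 0 marks

print(resolve("She smiled at Lizzie"))
# -> "Lizzie smiled at Lizzie": nonsense, so 'she' can't be Lizzie;
#    the marker reads 'she' as the ghost instead, and the mark is awarded
```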


That seems to go against 'positive marking', in my opinion, but what do I know? I'm not a company that was awarded £165 million to run these things. I'm sure it makes sense to them.


Okay, but where does Sharon come into it?

Having been a Year 6 teacher (Year 6 is the final year of Primary education in England) for over a decade, I have had my fair share of SATs experience and, initially, I thought they were cruel things designed to catch children out and make them feel bad about themselves. This opinion stemmed mainly from a single question in the 2005 English Comprehension paper. It was the very first question, based on the first text, and concerned a certain lorry driver named Sharon:




Now, based on the information provided, what would your answer be? A lot of the children I was teaching (I wasn't a Year 6 teacher at the time, so I was running an after-school revision session - more on the success of these can be found here) wrote 'Lorry Driver'. Now, the ones who just wrote 'driver', I had no issue refusing a mark; Sharon very clearly wanted to drive a lorry (for American readers, that's a truck; the difference is all to do with whether you think of the vehicle as being pulled along by a motor - 'lorry', from the verb 'lurry', to pull - or as simply a wheeled vehicle in its own right, pulling itself along - 'truck', from the Greek 'trochos', a wheel. So now you know!). Anyway, Sharon wanted to be a lorry driver specifically, so I was happy to accept any form of that.

Now take a look at the official answer sheet; pay particular attention to the DO NOT ACCEPT part at the bottom:



If you are confused as to why 'lorry driving' is acceptable but 'lorry driver' is not, so was I. So were the children, many of whom had written 'lorry driver' in a bid to save time and move on.


For years and years this question stuck with me. This year, 17 years later, I think I get it. I don't necessarily agree with it, but I get it.



How come 'lorry driving' is okay but 'lorry driver' is a clear DO NOT ACCEPT? Well, I've gone over it and over it, and I think the reason is that the acceptable answers all contain a verb: to be a lorry driver; lorry driving; drive a lorry. 'Lorry driver' on its own, meanwhile, is all noun. Since 'ambition' implies 'what do you want to be', the question is seeking a verb for an answer.


That's the only way I can make sense of it. Is it frustrating that there is clearly an implied infinitive opener when a child writes simply 'lorry driver', as in '[to be] a lorry driver'? You bet it is. Would it be helpful if more teachers knew about the depth of specificity required for some answers? Absolutely. Do I think the rubric of the SATs should be more widely available, and that the full answer packages should be made public after the papers are released for revision? Yes, I do.


And I'll tell you why...

As I mentioned at the beginning of this post, the SATs are primarily intended to judge a school's ability to deliver the curriculum. This delivery, however, is subject to hidden rules and some very niche decisions. And I'm not ignorant of the possibility of schools teaching even more closely to the test if the full rationale were released, but many schools teach to the test anyway - it's one of the main reasons that formal exams are considered a terrible way to assess in the first place.


A couple of years ago, I wrote a piece for the Times Educational Supplement suggesting a 'fix' for the reading SATs. My suggestion was to keep everything the same but send out the reading booklets one month early to allow schools to use them as guided reading materials. Knowing what I know now, I stand by my fix, but I would add to it that SATs marker training should be made available to all teaching staff, for free (they can pay for it out of the £165 million). It wouldn't be difficult - part of the training I received involved a live video chat with someone who then introduced a pre-recorded training segment! Talk about an unnecessary middle-man (sorry, middle-person)! So these resources exist. Give them to schools. They can even be a year old; the intention behind the answers is the important thing to learn. Teachers, and children, need to be made aware that:


  • full sentences are not required (at all)
  • bullet point lists are fine provided they answer the question
  • a table with 'point' and 'evidence' as columns is the easiest way to get full marks
  • positive marking is a thing
  • check if the question is seeking a verb (I'm not letting it go)


If they did that, maybe the news stories about children experiencing ridiculous levels of stress would be a thing of the past. Maybe schools would stop teaching to the test, because there would be no benefit to doing so. Maybe, just maybe, we would have a more generally accepted way to assess children's learning at Primary school.


That's what I think, anyway. I'd love to hear your thoughts on the SATs this year, or any other year for that matter. Do you think they're fair? Do you have an alternative solution? Do you even think we should be testing children at all, if it is the effectiveness of the schools and staff that is being assessed? Let me know in the comments below, or send me an email or tweet.


In the meantime, remember, you can do this: you're awesome!



Carl Headley-Morris


Email me!          Tweet me!              Visit my website!          Listen to the Podcast!



References for this Post

[1] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/628816/2016_KS2_Englishreading_framework_PDFA_V3.pdf

[2] https://www.capita.com/our-company/about-capita