Hands up if you’ve ever had a train of thought like the following when doing medium-term planning: “Okay, middle set year 9, fractions… They can probably simplify fractions, so I won’t bother with that too much, but I know, just know, that a fair few will be a bit ropey with equivalent fractions, so we’ll do some practice on that for sure. If they can already do it then they can overlearn it, that’s fine. Surely they can shade fractions of an amount, not going to bother with it. Argh, how is it already the weekend before term? How hard would it be to invent a time machine? Wait, maybe I could!*”
Well, even if it’s just me, the point is that I really never used to take much stock of prior knowledge when planning lessons. I pretty much just assumed that most of my students came to me as tiny baby mathematicians and then planned to ramp them up once I got going with them. Of course, I changed my baseline depending on their set (egregiously, I effectively used to assume that bottom sets had made little to no progress in their secondary careers to date, and thus planned very conservatively), but these were generally sweeping statements with relatively little evidence to back them up.
On the topic of evidence, one could argue that, especially for Y8 onwards, there is actually plenty of evidence about prior competence. It may be through internal test data or Hegarty Maths-like systems, but students are incredibly likely to have some sort of paper trail a few months into Y7. Once you get going with a class, it may be that you just know your class very well. You don’t need to spend time grappling with old spreadsheets because you know Jamal and Hurain are at the top end and will smash through everything, and that Jasmine and Bob are likely to look at you with sad, confused eyes at every turn.
However, there are problems with both these lines of logic. Regarding prior data, it is probably true that the data exists, but there is a massive challenge in making any sense of it: data collection across classes or years may be fragmented, and students are likely to have moved sets between years, making collating the data a nightmare. Even if you have all the data, actually analysing it (assuming it’s broken down in the detail you would want) is hugely time-consuming and likely beyond a reasonable ask of a teacher’s preparation time. As for knowing your class, you may well know that James’ favourite band is Enter Shikari and that he can find the perimeter of any rectilinear shape between here and the moon, but that doesn’t necessarily tell you much about James’ relationship with algebra. Even the best teachers aren’t psychic.
All that said, let’s assume you’re in the position where you do, in fact, know what the students know about the upcoming topic. The problem is that the data is likely rather outdated – you don’t know if students crammed for that test, or if they had just broken up with their boyfriend or girlfriend that morning. Further, you don’t know whether that knowledge has any degree of retrieval strength – have they been tested on it recently? Was the last time they saw it a full year ago? This doesn’t mean the information is useless – far from it – but you still need to assess whether or not the content needs to be covered.
When I started out teaching, my department used to give all students a test at the start and end of every topic. The tests would be identical, which helped us see progress across a unit and plan to take account of students’ weaknesses. I’m not sure whether the pre-testing effect was also a rationale for these tests, but it is at least a happy coincidence that they may have spurred that sort of thinking in some students.
As with any approach, this came with its negatives as well. First, we would give the test in the first lesson of a new topic, and given the time involved in marking, analysing and then planning, it was often difficult to use the information to its full extent. Secondly – and this is a sentiment I’ve seen on Twitter recently – it’s no fun for any student to sit an extended test (maybe 10–20 minutes) where you are, by definition, unable to do most of the questions. Lastly, it doesn’t tell you about recall of relevant knowledge outside the topic (for example, if you’re pre-testing percentages but don’t include any decimal or fraction questions, then you’re still going to run into significant difficulties).
During my reading, I stumbled upon the concept of atomisation and, at first, I really did not like it. The general idea is that you break down a new topic into its individual ‘atoms’ of knowledge – for example, Craig Barton identifies being able to read decimal scales as an atom of histograms (you could ask me for a hundred years and I don’t think I would have made that link!). You then focus attention on and assess each atom in turn, and if the class are mixed or totally unable to do it, you teach that atom explicitly before re-assessing and potentially moving on to the next atom.
Why did I not like it? A gut instinct screamed that it would take too long in lessons, that it would take far too long to plan, that it seemed patronising, and that you just need to get to the good stuff.
However, it actually makes a lot of sense. I believe Craig Barton says something along the lines of ‘you’ll have to deal with these atoms anyway’, which makes total sense. How many times have you been modelling or conducting some process questioning and discovered that a student/class can’t do a procedure that is just a single step in the new idea? And then you quickly tell the class what to do and carry on? As if that 20 second half-mumbled and rushed explanation suddenly kicks open the doors of knowledge, rips away the shutters of misconceptions and leaves them truly enlightened?
The idea of atomisation makes sense. One of my big reflections from my reading is that we need to make sure prior knowledge is secure and, if it isn’t, take the time to secure it before building on it. Building prior knowledge alongside novel knowledge will rarely end with the success rate you want to see, and will only get worse over time, like mathematical subsidence. Taking the time to assess prior knowledge is an investment we must make in our teaching.
I really like the model that Craig Barton uses for assessing prior knowledge via diagnostic questions. It seems quick to plan, efficient in use, and it lends itself nicely to relatively well-defined (and therefore plan-able) actions. If the class basically do not understand an atom at all, then you can simply reteach it (for example, using example-problem pairs). If the class are split, then you could reteach it while giving an extension problem to those already secure, or use a peer-teaching system where students pair up – although this depends on class dynamics and your comfort zone as a teacher.
That said, it definitely does take time. In his book Reflect, Expect, Check, Explain, Craig relates an anecdote in which he spent around two lessons just assessing and dealing with atoms ahead of a histograms topic, and the main class teacher was unimpressed with the pace. I think a degree of pragmatism is needed – as much as Mark McCourt hates it, the reality is that most schools in the UK operate some form of conveyor-belt system in which there is a constraint on the time spent on a topic area. While it is possible to be flexible within that system, too much slack risks substantive divergence that the school’s general assessment framework may not be able to withstand (for example, if one class falls significantly behind, two different tests may need to be written, which minimises the chances for comparison). In reality, we need to keep some level of pace, which means we may need to make a sub-optimal compromise somewhere – maybe in the number of atoms we assess, or the depth in which we reteach.
Regardless of the issues, assessing prior understanding and using that information is something I will definitely be doing with my classes going forward. I’ve always known it was important, in a theoretical way, but I was never sure how to actually do it. Thanks to the efforts of a number of maths teachers, I now have that practical roadmap.