Where Do Classroom Tasks Fail? Part Three

Part one looked at the constructivist teaching fallacy and poor proxies for learning.

Part two looked at the twin sins of curriculum design and mathemathantic effects.

Part three will look at challenge-by-choice, anachronistic tasks and tasks that do not match their instructional intentions.


For those unfamiliar, challenge-by-choice refers to task-based differentiation, whereby the learner chooses which task they do from a selection (usually of three or more tasks). Seemingly, the most popular version of this is referred to as a ‘chilli challenge’, whereby the learner picks the difficulty of their task based on how ‘spicy’ it is.

Akin to fun tasks, this method may be used by teachers to secure learner engagement. However, challenge-by-choice presents issues. It often leads to learners not being challenged appropriately because they are not sufficiently aware of their current level of understanding, resulting in them picking a task that is either too easy or too difficult. Furthermore, creating three or more different tasks creates a formative assessment nightmare for us as teachers, making it increasingly difficult to provide instant feedback to the learner.

Anachronistic Tasks

Perhaps most commonly seen in history lessons, anachronistic tasks present another area where classroom tasks can fail. When designing a task, we must ask ourselves what it is that we want the learner to think about when they attempt it. Anachronistic tasks work against this, presenting the opportunity for the learner to be distracted from what should be the focus. Inevitably, this can result in an ineffective task and learners’ knowledge not being secure.

In history, we are concerned with accounts contemporaneous with the events being studied, and this has led to tasks such as writing newspaper reports about the Roman invasion of Britain. The issue here is twofold:

  • First, that if the learner is focusing on the features of a newspaper report and how to write in a journalistic style, they are not thinking as deeply as we would like about the historical content itself. This was something Ofsted noticed in its inspection of history in outstanding primary schools, stating that there were often tasks which “distracted from the history content pupils needed to learn”.
  • Second, such anachronistic tasks could embed misconceptions that things existed outside of the time periods in which they were created (the first newspaper was believed to have been written in 1605 – well after the Romans conquered Britain).

NB: This is not to say anachronism is useless in the study of history. It can be used effectively. For example, the presence of an anachronistic object in a historical photo could tell us a source is not reliable.

Tasks not matching the instructional intention

This links to my first guiding principle of task design, which will be the topic of a future blog. I have also hinted at this issue in a recent blog on KWL grids as an assessment tool, but indulge me as I make a similar point using another common task.

‘Look, say, cover, write, check’ is a common task used to promote spelling in primary schools. It is done in a table format like the one displayed below:

The instructional intention is for pupils to remember the common grapheme (letters representing a sound) used to represent a phoneme (the sound) in a group of spellings – e.g. /ā(r)/ made by air in fair, hair, chair etc.

However, ‘look, say, cover, write, check’ does not get pupils to think about the common graphemes that can be used to make a phoneme. Instead, it gets the learner to focus on the word as an entire unit, rather than breaking it down into its parts and considering the grapheme–phoneme correspondences. It also gets them to store a word in working memory while it is hidden from view and then write it onto the paper in front of them. This presents us as teachers with the illusion that a learner can spell all the words correctly, but does not tell us whether they have understood the learning behind the spellings themselves. In other words, it demonstrates that they can store something in short-term memory, not that they have retrieved knowledge from long-term memory (unless, of course, the child already knows how to spell the word).

This is just one of many tasks that fail to match our instructional intention. This idea will be explored further in the next blog in this series.

Why KWL Grids Are Not Fit For Purpose

If you are not familiar with KWL grids, let me explain. They are an assessment tool of three stages. What the learner already Knows (K), what the learner Wants to know (W) and then finally what the learner has Learnt (L).

So, they usually look something like this:

Teachers give them to pupils at the start of a unit of learning (e.g. Ancient Egypt) and pupils fill in the first column. However, there is no retrieval cue for the learner, just the empty column as you saw above.

So, as a means of finding out prior knowledge and gaps in learning between students, this column is extremely limited in its use. We would be better placed as teachers to ask questions that link specifically to our curriculum:

e.g. “What do you know about the use of the River Nile in Ancient Egypt?”

This allows learners to retrieve specific knowledge related to what they will learn, enabling them to potentially see connections between other units, such as rivers studied in geography or other history units.

The middle column is often wasted time. It gets learners to write down what they would like to know about. This leads to learners writing questions about things you won’t cover (as they’re not relevant) or oddly specific questions you likely do not have the subject knowledge to answer.

The final column suffers the same issue as the first. There is no retrieval cue for learners to respond to. They are met with an empty column and expected to dump all the knowledge learnt into it. This, inevitably, leads to learners not writing down all that they truly remember.

Often, the L part of the grid is completed by students flicking back through books. The issue here is that it does not require the learner to retrieve from long-term memory. The learner is just storing content in working memory momentarily, while they copy it across to the grid.

Consider which is more effective:

– Write down everything you have learnt about Ancient Egypt.

– Tell me what you know about the Ancient Egyptian belief of the ‘afterlife’.

The former is likely to elicit some factual knowledge with perhaps no depth or thought given to connections between the facts – at least for the majority of pupils. The latter requires the learner to think harder, to think of specific facts and then consider the relation between them.

The latter is by no means a perfect assessment question, but serves the purpose of assessment far better than an empty L column. Ideally, a series of questions similar to the ‘afterlife’ one is given – perhaps even facilitating links between them and prior knowledge:

e.g. We learnt about the Norse belief of the afterlife when we studied the Vikings and Anglo-Saxons in Year 3. What similarities and differences do you see in their beliefs and the beliefs of the Ancient Egyptians on the afterlife?

I used KWL grids myself. It is only through using them for a while that I discovered their inadequacy. I fell for the illusion that it was an engaging task because I was using the middle column to engage learners and to let them take control of what they learnt.

But assessment is essential. Essential to teaching, essential to curriculum and essential to sequencing learning over time. We do ourselves and the learners we teach a disservice if we don’t assess as accurately as we possibly can.

I do not claim any one type of assessment is the *best* in foundation subjects. However, there are many that serve the purpose more successfully than KWL grids (such as retrieval quizzes, multiple choice Qs, essays and short paragraphs in response to Qs).

Where Do Classroom Tasks Fail? Part Two

This is a blog in a series on task design. The others can be found here.

Part one looked at the constructivist teaching fallacy and poor proxies for learning. This part will look at the twin sins of curriculum design and mathemathantic effects.

The Twin Sins of Curriculum Design

Wiggins and McTighe posit that curriculum design (and therefore indirectly task design) often falls victim to these twin sins:

  1. Activity-focused teaching

“Here, teachers plan and conduct various activities, worrying only about whether they are engaging and kid-friendly.” – Wiggins and McTighe

Activity-focused teaching results in tasks that have been designed to secure engagement, often at the expense of linking appropriately to what has been taught or, more generally, to curriculum goals. Consequently, these tasks are often designed in isolation, separate from the necessary sequencing of learning throughout a unit or curriculum. Tasks designed within an activity-focused framework struggle to meet the intended instructional purpose and are therefore redundant in any assessment of learning the teacher seeks to pursue. A common example from English primary schools is ‘Biscuit Stonehenge’: after learning about Stonehenge, pupils are given biscuits to build a model of it. The task has been designed to secure pupil engagement, but holds little-to-no educational value beyond that.

Example of Biscuit Stonehenge. Source.

NB: There is absolutely nothing wrong with designing tasks that are fun. Learners, especially young children, should build an enthusiasm for learning through fun tasks when appropriate. Such fun tasks are very common at the end of learning units, and understandably so. However, when fun tasks do not align with curriculum intentions, they are unlikely to build memory and should not be used *if* this is the primary aim. As Wiggins and McTighe put it, “such activities are like cotton candy – pleasant enough in the moment, but lacking long-term substance”. As alluded to in part one with both Coe and Mayer’s thinking, we must not mistake engagement for learning.

  2. Coverage-based teaching

Coverage-based teaching refers to covering large amounts of curriculum content at speed and at the expense of any depth of understanding for the learner.

It therefore results in tasks that only allow the learner to create a shallow understanding of knowledge and prevents the building of automaticity or fluency, as not enough time is devoted to building this up through tasks of regular practice. Coverage-based teaching flies in the face of everything we know about how memory is established and maintained over time (e.g. spacing effect, retrieval practice). By rushing through content with superficial and shallow tasks, we operate under the illusion that pupils have learnt it simply because it has been ‘covered’.

Mathemathantic Effects

Clark (1989) argues that poorly designed tasks can exacerbate ‘mathemathantic effects’ (manthanein = learning + Thanatos = death).

Clark states that “Whenever an instructional treatment encourages students to replace an existing, effective learning strategy with a dissimilar alternative, learning is depressed.”

Mathemathantic effects can occur when a substitution takes place in any of three areas: learning strategies, motivational goals and student control.

I have taken the examples Clark produces and made them specific to task design below:

Examples of mathemathantic effects on learning strategies:

  • Learners have little prior knowledge but task assumes the learner has automated strategies, knowledge and skills available
  • Learners have much prior knowledge but task requires them to use strategies which interfere with their automated strategies, knowledge and skills

Examples of mathemathantic effects on motivational goals:

  • Learners are afraid of failing but tasks provide minimal guidance or structure
  • Learners want to achieve success but are given a task that is highly structured and provides too much support and guidance

Examples of mathemathantic effects on student control:

  • Learners need a lot of support and guidance but are made to do tasks that are open-ended and ask a lot of them
  • Learners need little support and guidance but are made to do tasks that are highly structured and controlled

The third part will look at challenge-by-choice, anachronistic tasks and when tasks fail to match instructional intentions.

Where Do Classroom Tasks Fail? Part One

This blog is part of a series on task design. The previous blogs can be found here.

It seems obvious that to design tasks effectively, we need to know what can make tasks ineffective. By knowing these pitfalls, we can circumvent them and consequently design more effective tasks.

I defined Constructivism in a previous blog as the belief that a learner ‘constructs’ their own understanding. Constructivism therefore supposes that tasks should give the learner the opportunity to generate that understanding. This has led to exploratory learning in the classroom, such as inquiry-based learning, built on the belief that learners must discover knowledge for themselves in order to construct it.

In critique of this theory, Mayer (2004) offers up the ‘Constructivist Teaching Fallacy’, whereby teachers may believe that a learner being ‘cognitively active’ “translates into a constructivist theory of teaching in which the learner is behaviourally active” also.  

Mayer has depicted this through a 2×2 grid below:

This grid outlines that a Constructivist view of teaching believes learning only occurs, or is at the very least most effective, when the bottom-right quadrant is satisfied: learners have to be behaviourally (interpreted to mean physically) active in order to construct knowledge within their minds. We of course know this to be untrue from our daily practice, where learners sit at desks for lengthy periods and still learn quite effectively.

Poor Proxies

When learners are engaging independently with the learning task, we can observe certain behaviours that lead us to believe the task is working effectively. Here we can refer to Rob Coe’s (2014) ‘Poor Proxies for Learning’:

  • Students are busy: lots of work is done (especially written work)
  • Students are engaged, interested, motivated
  • Students are getting attention: feedback, explanations
  • Classroom is ordered, calm, under control (or noisy)
  • Curriculum has been ‘covered’
  • Students have supplied correct answers (even if they have not really understood them, cannot reproduce them independently, will have forgotten them soon, already knew it)
  • Task completion (especially quickly)

*The emboldened parts are my own thinking around poor proxies.

The poor proxies above create an illusion for us as teachers – they lead us to believe learning is happening, when of course we know that learning is invisible (Didau, 2015) and can take place across a series of lessons, not necessarily in just a single lesson. We must be conscious of these proxies as teachers, and as leaders observing tasks in lessons, as they can mislead our assessment of pupils’ learning. If we believe in these poor proxies, then ineffective tasks mask themselves as effective.

Part two will look at the twin sins of curriculum design and mathemathantic effects.


Coe (2014) – What Makes Great Teaching?

Didau (2015) – Slides from London Festival of Education.

Mayer (2004) – Should There Be a Three-Strikes Rule Against Pure Discovery Learning?

Designing Tasks to Support Long-Term Memory

This is blog 6 in a series on Task Design. The other blogs can be found here – Task Design Series.

“Learning is defined as an alteration in long-term memory. If nothing has altered in long-term memory, nothing has been learned.” – Sweller (2011)

This definition of learning as a change in long-term memory (LTM) has become common parlance over the past few years. If we are to take Sweller’s comments as the accepted truth, we must consider how tasks are designed to facilitate the building of LTM.

In order to do that, we have to look at LTM with greater precision. LTM is often divided into two types: declarative memory and procedural (non-declarative) memory.

Declarative memory is characterised as ‘knowing what’ – it is the storage of facts and events. For example, knowing that WW2 lasted from 1939 to 1945. Forming this type of memory can be rapid – possibly even a single instance of attending to the knowledge is enough. As Ullman (2004) intimates, declarative memory “is important for the very rapid learning of arbitrarily-related information – that is, for the associative binding of information”.

Declarative memory is based on recall and retrieval. Because of this, it is also known as ‘explicit’ memory, as we can consciously remember and recall it. Declarative memory is said to have ‘representational flexibility’ – that is, it can be recalled independent of the circumstances in which it was learnt.

Declarative memory is also believed to have the property of compositionality (Cohen et al, 1997) – the ability to represent the whole and its constituent parts simultaneously – e.g. democracy as people having power, but also as elections, voting, government, representation etc. Cohen et al believe it is this compositionality that allows us to manipulate representations and bind information in our heads; therefore, declarative memory is “a fundamentally relational representation system supporting memory for the relationships among perceptually distinct objects”.

In contrast, procedural memory is characterised as ‘knowing how’ – it is the storage of how to do things. For example, performing the steps of long division. Procedural learning aids the performance of a task without conscious involvement and that is why it is also referred to as ‘implicit’ or ‘non-declarative’ memory, as we cannot always articulate these memories, which are formed from habit.

It is also called implicit memory because previous experience of performing a task helps you to perform it better, without conscious or explicit awareness of this. Forming this type of memory happens through slow, incremental learning – as such, one instance is not deemed enough for good performance of the procedure (in contrast to declarative memory). The ability to perform the procedure develops from experience-based tuning, where random or conscious adjustments build your ability to perform it.

Koziol and Budding (2009) summarise the two types of LTM here:

“Declarative learning and memory lends itself to explicit, conscious recollection. Procedural learning and memory are implicit; the actual learning is inferred from an individual’s improvement in performing the task.”

So, we believe that learning is when long-term memory is altered, and that there are two types of long-term memory: declarative and procedural. It would be fitting therefore to consider that there are two types of task also: declarative and procedural tasks.

Declarative tasks seek to build memory around facts and events.

Procedural tasks seek to build memory around skills and procedures.

These two types of tasks are not a dichotomy, but actually closely intertwined. Serving a ball in tennis is a procedural act, but a pupil must first learn the declarative knowledge required to perform the serve (i.e., the height to throw the ball, position of the feet, where to strike the ball on the racquet etc). As Daniel Willingham (2009) posits, “Factual knowledge must precede skill”.

What are the takeaways if we are to pursue these two types of tasks?

Declarative memory tasks:

  • Design tasks to enable the learner to bind information together
  • Design tasks to facilitate spreading activation in the learner’s brain
  • Revisit declarative knowledge in a variety of tasks to facilitate representational flexibility
  • Consider task dependency – how one task builds or relies on tasks that have preceded it

Procedural memory tasks:

  • Design tasks that allow for identical procedural practice until the procedure is learnt


Cohen, Poldrack & Eichenbaum (1997) – Memory for Items and Memory for Relations in the Procedural/Declarative Memory Framework. Memory, 5(1–2), 131–178.

Koziol & Budding (2009) – Subcortical Structures and Cognition: Implications for Neuropsychological Assessment. New York: Springer.

Ullman, M. T. (2004) Contributions of memory circuits to language: The declarative/procedural model.

Willingham, D. (2009) Why Don’t Students Like School?

Leading Teacher Development

Teacher development, and the leadership of it, is a hot topic at the moment.

It is therefore worth pausing to ask ourselves, ‘what is teacher development’? And ‘how should we lead it’?

Teacher Development

The NPQ Framework for Leading Teacher Development states that teacher development, “is likely to involve a lasting change in teachers’ capabilities or understanding”. This is an agreeable definition, but why should we change teacher understanding?

Josh Goodrich puts forth a ‘change sequence’ that shows the knock-on effect that can occur through improving teachers’ understanding.

Teacher knowledge >>> teacher action >>> student knowledge >>> student action

So, improving teacher knowledge can impact on student outcomes, but do we know this to be actually true? Well, yes.

  • Expert teachers can help pupils to learn up to 4x faster (Wiliam, 2016)
  • More experienced teachers help pupils to achieve more than their novice peers (Kraft and Papay, 2014)*
  • The difference between an expert teacher and a ‘bad’ teacher could be as high as a whole year’s learning (Sutton Trust, 2011).

*NB: experience does not equate to expertise.

Expert teachers appear to have a noticeable impact on student outcomes. Therefore, the goal of teacher development should be not only to have a ‘lasting change’ on capability and understanding, but to also support teachers in the journey from novice to expert.

What is an expert teacher?

Again, hard to define. It is an interplay between talent and expertise with the scales heavily tipped in expertise’s favour. So, what does the literature say makes a teacher an expert?

There are many characteristics posited in the literature, yet three recur frequently:

  1. A knowledge bank built up over thousands of hours
  2. The ability to respond to situations based on their familiarity
  3. A degree of automaticity

Consequently, teacher development should focus on developing these three characteristics within every teacher. How do we do that?

Glaser (1993) talks of expertise as a ‘change in agency over time’ in three stages.

  • Stage 1 – ‘Externally supported’

Here, the teacher is a novice. They require a highly structured teacher development programme, highly specific coaching, plenty of deliberate practice and short, regular feedback cycles. (The Early Career Framework will serve to better support novice teachers during this stage.)

  • Stage 2 – ‘Transitional’

The teacher has now gained some experience in the classroom and is starting to gradually build their bank of knowledge, familiarity of situations and their automaticity of response.

They require the same support as in stage 1 but the level of which should be reduced to align with their growing capability and understanding.

  • Stage 3 – ‘Self-regulatory’

Now, the teacher is an expert. They can regulate themselves and take greater ownership of their professional development.

These three stages are complemented by what we can infer from the Expertise Reversal Effect (Kalyuga, 2007).

‘It takes 10 years to become an expert’ is often bandied around, yet there is no set amount of time that this takes – certainly not one that any research has measured or possibly could measure. What we do know, however, is that teacher development can speed up the journey from novice to expert.

How should we lead teacher development?

There are certain conditions that every leader of teacher development should consider: culture, bias, priorities, expectations, systems, to name but a few. Effective teacher development can still occur if one of these conditions is ignored, but it is more likely to be effective when they are all considered in conjunction.

The science of learning has taken the educational world by storm in recent years. It is important to recognise that this should apply to teaching teachers and not just teaching children. Everything we have learnt about cognitive load, working memory and the like, should factor into any course of teacher development.

As such, teacher development should have its own curriculum. It should be sequenced well, build on prior knowledge, and allow for plenty of practice. We should then move away from the traditional staff meeting model pictured on the left below, and move towards the model on the right that Matt Swain and Lloyd Williams-Jones recommend:

It would be foolish of me to think that I could present something better than what the EEF have come up with in their ‘Effective Professional Development’ document. They have outlined four groupings, each with individual mechanisms. These have been condensed down from decades of research and allow us to design courses of effective professional development more easily.

To end, what does leading teacher development look like in practice?

In a previous blog, I wrote about using the EEF mechanisms to implement a behaviour for learning strategy. You can find that blog here.

One-Page Report Formats

Reports are something we often overcomplicate. In the desire to personalise each and every one, we inadvertently add to teacher workload. My argument is that parents are already well aware of how their child has progressed this year because of parents’ evenings, mini reports, informal conversations and however else a school chooses to communicate regularly with parents.

With that in mind, I aimed to reduce our reports to include the bare minimum, while still retaining the personal element we want for every pupil. Apart from Reception, all year groups could be condensed down to fit on a single page, as shown below. All year groups can be downloaded from my resources page for free.

Example of a one-pager for Y4:

Find all of the one-page formats here – morgsedu.uk/resources

Here are the requirements for reporting to parents – https://www.gov.uk/guidance/reporting-to-parents-at-the-end-of-key-stages-1-and-2

My template was inspired in part by the format made by Michael Tidd (https://michaelt1979.wordpress.com/2019/04/24/annual-reporting-to-parents-our-approach/).

FREE maths resources to help prepare for SATs

This blogpost shares the great free maths resources I use to help children prepare for the SATs – some of which you may not be aware of.

PDF booklets split into each maths topic and by content domain (which appears at the top of the file). It also comes with answers that explain the methods needed to solve!


Mr S has created an editable arithmetic test that is up-to-date with the 2018 format. When you download it, it has the 2018 questions, but you can change them very easily like any other Word document.


Made by @filtered_k on Twitter.  It is broken down by topic and year group and each has a hyperlink to a question from the SATs to answer.


Great for quick practice here and there. Can be done each day and comes with answers too!

  • White Rose – old and new – google ‘White Rose resources’ and search for them on TES

The holy grail. Very popular around the country. The older files are still useful, however, the new ones come with a more detailed commentary as well as answers. Broken down into fluency, reasoning and problem solving questions to ensure children are challenged across all areas.


These are superb to send home so that children can practise and revise over Easter. They are differentiated three ways and also come with answers. The idea is that children do them for 10 minutes every day for 10 days.

  • SATs one page mark schemes – search ‘supersophiee’ on TES (@_MissieBee)

These make marking SATs papers from previous years infinitely quicker, as all the answers have been put onto one sheet. At the time of posting, Sophie has made them for all of the previous papers, from the 2016 Sample paper up to last year’s.


Another great resource from @_MissieBee, this document puts all the key information needed for the 2 reasoning papers into one handy document. A fantastic revision resource for children to take home with them and use in class!


@LittleMiss_Reed has made a knowledge organiser just on the knowledge needed around the arithmetic paper and has kindly shared it for free in the link above. It includes a second page that touches on written calculation methods too.


I created my own arithmetic paper that I completed myself, purposefully getting some questions right and some wrong. This is handy as a revision session, where children work their way through the paper and find the errors and discuss the use of fluency. It comes with a children’s copy, a teacher’s copy and guidance on which answers are correct, incorrect, fluent, not fluent etc. Great for class discussion!


This PowerPoint breaks down all the common language that has appeared throughout the past papers. It provides pupils with example questions and then gives them questions to answer once they have learnt what each term wants them to do.

Resource queen Sarah Farrell has shared these simple and clear concept guides for children to refer to during their maths work. For more information on them, read her blog on this topic here – https://mrsfclassroom.wordpress.com/2019/06/18/maths-vocabulary-and-steps-to-success/


Find me on Twitter – @MorgsEd

Why not plan forwards?

In my previous blog, I suggested that we should plan backwards to enable us to design tasks more effectively.

Why not plan forwards?

I think this is best summarised by the thinking of Shirley Clarke and Dylan Wiliam.

Clarke argues that planning forwards can often lead to a conflation of the learning objective and the context in which the objective is being taught.

This argument is laid out in this picture below, taken from Wiliam’s Embedded Formative Assessment book:

What’s the issue with this conflation?

The issue is that such conflation could result in the learner only attaching the learning to a specific context and not being able to apply it in either similar or different contexts.

So, using the last example in the table above, the implication is that the learner may not be able to design fair tests for scientific questions if presented with a different context of learning (i.e., outside of the preferred habitats of pill bugs).

The issue may not present itself immediately, because when we assess learners on their understanding of what has been taught, they are likely to appear successful. This is because we tend to assess them in the same context in which they were taught, hence their success in demonstrating their understanding.

However, when assessing their understanding in a new context, the knowledge may not transfer and learners may not do as well.

As Wiliam posits, “We are not interested in our students’ ability to do what we have taught them to do. We are only interested in their ability to apply their newly acquired knowledge to a similar but different context.”

Wiliam offers a suggestion as to how we can use this idea of transfer to ensure challenge for pupils:

“All students should be able to transfer what they have learned to very similar contexts, while others can be challenged by assessing how far they can transfer what they have learned.”

Planning forwards can therefore not only lead to a confusing conflation for learners, but can potentially prevent us from challenging pupils with the transfer of knowledge, if we rely heavily on singular contexts.

If we are aware of these issues and plan to prevent them, then planning forwards can of course be successful. However, planning backwards helps to circumvent these issues altogether by providing a clearer structure for our thinking.


Clarke (2005) – Formative Assessment in the Secondary Classroom.

Wiliam (2011) – Embedded Formative Assessment.