What Did Cognitive Science Ever Do For Me?: A view from an English Classroom

The EEF report, much like the P4C (Philosophy for Children) report that came out a few months ago, landed in the Twittersphere to much fanfare and lots of conflicting ideas. Starting with a webinar, which many were unable to attend, the information was already framed in terms of what did and did not work. Some were waiting for the opportunity to find that the research was flawed. Others steeled themselves for the ensuing battering that was to come, as they were already invested in promoting the ideas. Still others waited for the inevitable game of tennis that would follow, to see where it all landed.

Unsurprisingly the review said it did not have all the answers. The research base is limited. There are boundary conditions. Application in the classroom is hard and therefore it is difficult to say exactly what does or doesn’t work. This was something Mark and I were keen to emphasise when writing about Generative Learning, wanting to discuss the bridge between what has happened in research, the conditions it was undertaken in and where the boundary conditions lie. We were also keen to give voice to those who had used some of those approaches in their classrooms to good effect and so drew on the experiences of practising teachers in different contexts and phases. This is the whole point of an ‘in action’ book as far as we are concerned, and it is what we do and don’t do with the research that will make the difference to our students.

The same is true of Cognitive Science, and the review provides a perfect opportunity to reflect on that, encouraging a measured approach, where we look at what the evidence might say for our setting and consider what it may add.

This is what I aim to do here, in what has become a mammoth blog post, reflecting on what I have used successfully and less successfully in my own English classroom from cognitive science, as well as drawing on other approaches and evidence to refine this further.

So, following the same structure as the report summary, here is what I’ve found.

Spaced Learning

The EEF review defines it as this:

‘distributing learning and retrieval opportunities over a longer period of time rather than concentrating them in ‘massed’ practice’.

They looked at both spacing throughout the day and spacing across lessons.

It sounds simple, and you might assume everyone has always done something like this in English. After all, way back in 2005 we had units of work which went from the writing skill to the reading skill. However, in practice this was not especially effective: spending six weeks reading a myriad of non-fiction or fiction extracts, and then six weeks writing lots of different text types, unsurprisingly did not suddenly lead to students who could read or write more effectively, despite the best efforts of their teachers.

This was then followed by an intense period of study of a play or a novel, and then, at the end of the year, an assessment on reading and writing. This blocking of ‘skills’ (don’t start me off) was something I found unhelpful to learning. Both students and staff paid little mind to what was actually learnt in those periods, tending to be driven by assessment outcomes, and little attention was given to returning to key concepts. How could you expect students to remember anything from September of Year 8 when they were in Year 10, and did it really matter?

The research in the review found ‘a significant number of studies showing that spacing across days and lessons can have a small positive impact on learning outcomes’. There are a number of caveats to this when looking at the research, including that the interventions were often delivered by researchers (usually to try to isolate spacing from other variables, something hard to do in general classroom practice). Spaced practice is also often coupled with retrieval practice, muddying the research waters; this may be one reason why researcher-delivered interventions had greater success, as the researchers were more adept at this isolation.

However, this was exactly the combination I started to use in my classroom when looking at spacing. I stripped back the curriculum, no longer interested in making things fit into nice six-week blocks to generate data for spreadsheets, and started to focus more on the learning I wanted to revisit and students to retain. This could happen within a single lesson, a week, a term or a year. I planned more consciously for where we would return to a key element and practise it, and how I could check progress. It became more of a curriculum issue than one of individual lessons or pedagogy.

Students increasingly began to see the connections and progress in their learning with this kind of revisiting and layering, gaining confidence not in isolated topics or skills, but in English as a body of knowledge and as a subject in its own right. We might try something at the start of a learning cycle which they had encountered before and revisit it multiple times until they were confident in doing it. This was adapted for different classrooms, and my teaching became more responsive to the needs of the students in front of me, rather than being driven by blocks of learning which may or may not be retained or revisited.

Interleaving

The EEF review states this is:

 ‘switching between different types of problem or different ideas within the same lesson or study session’. This ‘involves sequencing tasks so that learning material is interspersed with slightly (but not completely) different content or activities, as opposed to undertaking tasks through a blocked and consecutive approach. While similar to spaced practice, interleaving involves sequencing tasks or learning content that share some likeness whereas a spaced practice approach uses intervals that are filled with unrelated activities.’

The evidence says, ‘when compared to a blocked or sequential approach, there is moderate positive evidence that interleaving can better support Key Stage 2 and Key Stage 3 pupils to select appropriate solutions when solving mathematical problems.’

This is the one I found worked when we were making comparisons of poems or shorter texts, but it needed more careful consideration in a wider context. It is also the one so many people get confused about, and there are many mutations of interleaving as it gets translated into practice. People often think it is the same as a spaced curriculum, spaced practice, or revisiting topics over time. However, the intricacy here is placing two ideas together in a way that emphasises the learning points of each.

In my experience, in English, this works well with poems, quotes and shorter texts, but careful selection is really important if you want this to have the desired effect as opposed to just more of the same. There need to be enough similarities and differences, and sometimes the balance is hard to find. Equally, I have seen examples where, in the name of interleaving, the pieces are so completely unrelated that the point has been lost for both.

Comparison is often our bread and butter in English, and so I could see how some are using this really well. The evidence base in subjects other than maths is an issue, and shoehorning this in for the sake of it, in subjects where there isn’t an obvious use, will always be problematic.

Retrieval Practice

The focus the EEF took on this was exploring:

  • ‘Short, low-stakes tests or ‘quizzes’ in various formats can be a cheap, easy-to-implement way of recapping material that might strengthen pupils’ long-term ability to remember key concepts or information.
  • Planning test difficulty is particularly important – pupils should be able to retrieve at least some of the content they are tested on.
  • Quizzing or low-stakes testing may also reveal misconceptions. How will you ensure that where these emerge pupils are supported to overcome them?’

This is an area I was particularly interested in, having seen some real empirical evidence in my classroom, as well as looking at Self-Testing as a Generative Learning strategy, which had a strong evidence base in that research.

The EEF found:

  • When compared to no recap activity at all, the evidence for using quizzes is moderate and generally positive.
  • Most studies that compare quizzing to forms of re-study or recap have a positive impact—though there are high levels of variation in the evidence and some negative results.
  • One of the weaknesses of the evidence is that many of the approaches are designed and delivered by researchers rather than classroom teachers. There are examples of teacher-delivered quizzing having a positive impact but given the lack of studies, a firm conclusion is not possible.

Recapping, building links and weaving concepts throughout my curriculum and students’ learning had caused a significant shift, as did considering which concepts I really needed students to understand in order to move on to a range of conceptual thinking.

As suggested above, it is not as simple as just ‘do quizzes’ at the start of the class, and it has little relationship with rote learning. Whilst rote learning has its place in English for some things, retrieval practice is much more about creating fluent retrieval so that students can apply knowledge in an agile and flexible way. This is really important when we think about building to analysis. The research of Karpicke and Roediger around the ‘Testing Effect’ was especially influential, even more so when considering how to use this as a way to empower students to work independently to retrieve information in the longer term.

The metacognitive elements of this (another contentious area for some) seem to relate particularly to my experiences with retrieval. Students being able to see progress, discuss ways they can study more effectively and embed knowledge they can build on is key for me, and really embodies the Generative Learning principle. It was also important to get the conditions for this right, as well as the questions chosen, something I reflected on in this TES piece.

What this looked like in my classroom varied greatly, from low-stakes quizzes to the use of an image or a piece of text, or simply a conversation around a question, always with a sharp focus on key knowledge and on drawing on long-term memory rather than over-relying on prompts or simple familiarity.

This tied in well with my study on the Assessment Lead Programme with the Evidence Based Education team, especially in considering how to ensure the validity of questions and how well-designed multiple-choice questions can reveal a great deal about learning. These questions were helpful in identifying misconceptions and the need to reteach as appropriate. Again, this is something which made my teaching far more responsive than it would have been had I continued to rely on noisy assessments which attempted to cover everything.

Opportunities to refine my teaching are something I was glad I didn’t overlook, especially in this area.

Managing Cognitive Load

This is often the area people assume pretty much sums up all of cognitive science, associating it with the work of John Sweller. The EEF say:

‘A key challenge for educators is that working memory is limited. There are lots of things that can cause it to be overwhelmed.’

The mention of working memory and long-term memory, especially in relation to models which attempt to make a metaphor concrete, is again contested, and there remains a lot of discussion around what this actually means.

Ultimately, what it means for me and my students is ensuring neither of us is overwhelmed: not by the difficulty of the task itself (the intrinsic element), and not distracted or overloaded by extraneous elements. This meant carefully considering what was inherent in the task, focusing again on what prior knowledge students held (imperative to cognitive load, since the more you know, the easier it is to use that knowledge to support the processing of new information) and how I might support them in navigating it. It also led me to explore the work of Bjork and Bjork on desirable difficulties, and how providing both challenge and support to reach the desired outcome matters.

It led me to step back a lot and consider what they were bringing to the learning as well as whether the outcomes would be in the grasp of all of them. It also made me reflect carefully on what stepping-stones or scaffolds would need putting in place to help them all acquire this learning and to be able to make use of it.

It also made me think more deeply about attention and ensuring students were able to attend to what they needed to examine. This meant stripping away some of my, frankly, rather flowery explanations, as well as distractions in displays, PowerPoints and the activities themselves. What was it that students really needed to focus on? How could I help them do this to get the most out of it?

That doesn’t mean lessons became formulaic or rigidly stepped in their approach, and often scaffolds were included on the hoof – a little more explanation here, a prompt there. However, this saw lessons become much more productive.

It also made me consider the role of group work. Whilst a good group or paired task can be effective in supporting working memory, it can equally become a distraction and add to the load. This was an important area of consideration, allowing me again to reflect further on how I could support students to get the most from these learning activities. Collaboration can be important but, as with anything, not having clarity of purpose and support for it is unlikely to end well.

Scaffolds, guidance and schema theory

Everything in relation to cognitive science needs to be explored alongside these areas. As I said above, scaffolds and prompts are key in many ways. A retrieval task where knowledge hasn’t been delivered well in the first place, or where there are no scaffolds to support students to retrieve, is unlikely to have a benefit. Considering how these elements fit within my overall lessons has really changed my approach to teaching, and building schemas, where knowledge sticks and we pay close attention to prior knowledge, made a significant difference to my students, many of whom performed well above their ‘expected’ outcomes.

My areas of concern and where it hasn’t gone well for me

As noted previously, there are a number of boundary conditions to consider. Phase, subject and point of learning are key factors for me, but so are the points where I have had less success. This may be down to my implementation, as in the case of dual coding. Whilst multimodal models of learning have formed part of my teaching for many years, using visuals to add to explanation, it is not something I have seen done especially well in English, nor is it an area I am confident with. It certainly seems to lend itself more readily to subjects such as science and geography, where diagrams are key elements of the input.

Equally, the use of embodied learning and enactment seems to have greater success with students who are finding it difficult to conceptualise the abstract. Again, in maths the use of manipulatives, or the use of gesture with younger students, seems to be successful. It may be that certain students with SEN might benefit from this, but I have struggled to achieve effective outcomes with it in my secondary classroom. That is not to say I am closed to the possibility, but there also needs to be some weighing of cost against outcomes. What would be lost if I focused on this aspect? Am I sure the gains would be significant enough to make that investment?

The same is true for all of these approaches, and such weighing should be a key part of our evaluation before we implement anything. So should the issue we are trying to solve. What is it that our students currently can’t do which bringing in this element will change? There is little point in investing in something if our students are doing it already.

The other consideration when we look at research like this is that it doesn’t sit in isolation in the classroom. Research works hard to try to control the variables, to bring us findings that are reliable enough for us to consider. It needs to be reliable in the same conditions, and it needs to be scalable. It also needs to be able to happen in the real world, with real students. My development of cognitive science in my classroom sits alongside metacognition, motivation, ideas of self-efficacy, collaboration, dialogue, relationships and creating a positive learning environment. The reality of schools is that they are complex places. They therefore need quite complex solutions. There is never one thing that will work across the board in isolation.

As the EEF review says:

‘Mixed strategy programmes were amongst the strongest examples of applied research, often testing the strategies in realistic conditions and at scale. Such programmes are of interest to teacher educators and school leaders as potential vehicles for aligning teaching and learning with multiple principles from cognitive science across many teachers and schools.’

Dylan Wiliam reminds us that ‘everything works somewhere, and nothing works everywhere,’ and before we throw the baby out with the bathwater, we need to reflect on what research brings to our party. What might we want to focus on, and what might not work in our context?

Equally though, he reminds us of the need for teachers to improve, ‘not because they are not good enough, but because they can be even better.’ Our students deserve teachers who continue to engage, reflect, review, refine, question and challenge. Many, many teachers do just this on a daily basis, but there is always more to explore. Cognitive science is an area which may allow us to make some positive changes for our students and keep on providing them with the best we can whilst not chasing ideas which may have a stronger lure, but weaker evidence base.

One of the main findings of the EEF review was that ‘Cognitive science principles of learning can have a real impact on rates of learning in the classroom. There is value in teachers having working knowledge of cognitive science principles’. If this is the case, I think we need to continue to be cautious by all means, but also to be curious and open to possibilities. Most of all, never forget to dig beneath the surface of reviews, blogs and books to see what might lie beneath.

Bjork and Bjork, https://bjorklab.psych.ucla.edu/wp-content/uploads/sites/13/2016/04/EBjork_RBjork_2011.pdf

EEF review of Cognitive Science, https://educationendowmentfoundation.org.uk/public/files/Publications/Cognitive_science_approaches_in_the_classroom_-_A_review_of_the_evidence.pdf

Karpicke and Roediger, https://journals.sagepub.com/doi/10.1111/j.1467-9280.2006.01693.x

Sweller, https://www.springer.com/gp/book/9781441981257

Wiliam, https://www.youtube.com/watch?v=eqRcpA5rYTE and https://www.dylanwiliam.org/Dylan_Wiliams_website/Presentations_files/2014-09-06%20ResearchED.pptx

Fiorella and Mayer, Generative Learning, https://bootcampmilitaryfitnessinstitute.com/wp-content/uploads/2016/01/eight-ways-to-promote-generative-learning-fiorella-mayer-2015.pdf

The CPD Curriculum: Creating Conditions for Growth is out now – order here and join the revolution.
