The Need to Embrace Complexity
Last week I was lucky enough to attend the ResearchEd national conference in London. I have spent the last week mulling over the sessions I attended and writing up my notes. Any errors are most certainly my own but this is what I took away from my day.
His entire presentation is available to watch here – Why school leaders and education researchers need to embrace complexity (and how)
His key point is that different interventions interact and should therefore be considered as multi-modal interventions, which makes it difficult to evaluate the impact of any one part of an intervention. An analogy would be pain management, which works on a number of different bodily systems that interact with each other. In education we could look at learning-to-learn metacognition programmes as a form of multi-modal intervention. (You can read his paper on closing the attainment gap with learning to learn here.)
This is something often missed by the EEF toolkit, which tries (but fails) to conduct rigorous randomised controlled trials. It isn’t possible to run double-blind trials in schools, so the results can be unreliable. In the real world there are too many confounding factors which can’t be controlled for.
An example given was the EEF finding that feedback has a very strong positive effect on pupil progress. However, more detailed analysis shows that 40% of feedback interventions led to a decline in progress. “Doing more feedback because the EEF says it works” could therefore lead to pupils making worse progress than they would have done otherwise.
The EEF toolkit misses the art of implementation in schools – success is often in the details.
We need to be rigorous evaluators of the interventions we put in place in our own schools. This made me think of a conversation I had with Dr Gary Jones, who suggested it is difficult to evaluate the success of any intervention a school puts in place, as so much comes down to chance and to changes in things outside a school’s control. He also pointed me towards a fascinating paper on Contribution Analysis (John Mayne, 2012).
These are the thoughts I came away with from the session, further influenced by reading John Mayne’s work:
- How reliant are we on research headlines? Do we take the time to truly read and reflect on the original research, or are we swayed by easy-to-digest reports?
- Do we remember Dylan Wiliam’s warning that “everything works somewhere”? How can we identify what actually makes a difference and what is just noise? Are we too quick to assume that positive changes must be due to the things we have changed?
- Is educational research really that good for us?
That last thought was a bit of a kick in the teeth. With so much seemingly duff information out there, are we actually better off ignoring David Didau’s advice to ask “what if everything you know about education is wrong?” and just going on instinct after all?
Having pondered this over the last fortnight, I think I should keep my optimism about what good educational research can do. Instead of running from the complexity, we can embrace it and become more research literate, and so avoid the misleading headlines and the snake oil salesmen.