I’ve been blogging recently about the discourse generated by cognitive load theory. In this post, I want to discuss research associated with cognitive load theory, which spans the fields of psychology and cognitive science but concerns educational practice.
Again, I am going to reference a blog by Greg Ashman, entitled Anarchic minds, and refer to a paper he mentions, The Role of Executive Functions in Socioeconomic Attainment Gaps: Results From a Randomized Controlled Trial. I would reiterate that Ashman blogs powerfully about these issues and generates points worth debating.
In his blog, Greg Ashman states:
“Cognitive load theory takes an unusual stance on executive functions. It gives working memory capacity – our raw processing power – primacy and even doubts the existence of a central executive coordinating the other executive functions, suggesting instead that they are a function of schema-building in long-term memory. You may not agree with this unusual view, but at least the model appears to be broadly consistent with the data.”
Ashman’s argument is that cognitive load theory offers a highly unusual interpretation of executive function, but that the interpretation is justified by experimental data. To make his point about data, Ashman suggests that models are conceptual representations of issues, while experimental data is real:
“In any discussion of cognitive science, it is important to remember that it is the experiments that are real and any mental structures or processes that we suggest to account for the results of these experiments are just models, ripe for replacement as new evidence becomes available.”
In this blog, I aim to dispute Ashman’s view that experimental data represents the reality of educational practice, citing five problems.
1. Conceptual representations as proxies for biological processes
A replicable law in the natural world may be said to be independent of a researcher’s mental representations.
The type of research Ashman cites is not causal or replicable but involves conceptual representations of executive function. Ironically, he makes the point “it is particularly important to remember that the models we propose will colour the way we think about the evidence”.
The question is: if a model is not real, can data generated by an experiment on that model be real?
2. Contested conceptual representations as ontology
The most important aspect of any experiment is the ontology, or nature, of the object to be studied. Of course, defining the nature of socially conceptualised research objects may never be as straightforward as defining natural objects, but you would presume that this would be taken into account.
In this passage, Ashman accepts that the research he cites is based upon a model:
“A good example of this issue is the difference between Baddeley’s model of working memory, which contains something called a ‘central executive’ which performs such functions as directing and inhibiting attention, and research into ‘executive functions‘ which typically include working memory, control of attention and behaviour, and cognitive flexibility. Both of these models cannot be true at the same time.”
Unfortunately, it’s not clear which model it is based upon. The research cited by the study is Miyake (2012); however, that study contests Baddeley’s original model. In other words, it is a model based on the contestation of an already contested model.
Moreover, cognitive load theorists like Ashman don’t even think executive function exists. Even if they accepted its existence, cognitive load theorists don’t think it does what this research paper purports it to do:
“(…) these models assume that there are ‘domain general cognitive skills that exert top-down control over attention and behaviour‘. It is this assumption that, to some extent at least, cognitive load theory challenges.”
Again, I wonder how data generated from a model considered either (a) to be wrong or (b) not to exist can be argued to be real.
3. Partial definitions of conceptual representations
It gets less clear! The researchers define executive function as:
Executive functions are domain‐general cognitive skills that exert top‐down control over attention and behavior (Diamond, 2013). Executive functions include working memory, which allows us to maintain and process information; inhibitory control, which allows us to suppress automatic but incorrect responses; and cognitive flexibility, which allows us to adjust our behavior according to changes in the environment or our goals (Miyake et al., 2000).
Having defined executive function as having three components, the researchers consider only two of them: working memory and inhibitory control. They recommend that cognitive flexibility be considered in future studies. There seem to be few rules as to how you define the objects of a study in experimental cognitive science.
In social research, positions have to be declared and objects carefully defined using established theory. In the natural sciences RCTs are subject to the natural laws of causality and replicability. RCTs in social practice seem to lack the rigour of either approach.
It would probably be acceptable if the focus were specific areas of executive function in the field of psychology, but if you are experimenting on executive function in educational practice, then surely partially defining the object of the research is a problem?
4. The unreality of the research objects
The cited objective of the study is “whether executive functions mediate the relation between (socio-economic status) SES and mathematical skills in preschoolers”. It is an impressively ambitious research question.
The activities undertaken by the children included:
“Two tasks involved working memory: The Six Boxes task (Diamond, Prevor, Callender, & Druin, 1997) and the One‐back task (Tsujimoto, Kuwajima, & Sawaguchi, 2007); and two tasks involved inhibitory control: interference control (the Flanker task, Rueda, Posner, & Rothbart, 2005) and response inhibition (the Go‐No‐Go task, Simpson & Riggs, 2006). Children completed all four tasks in a single session, and each task lasted approximately 5 min.”
The biological processes under experimentation bear little resemblance to the research objects used as proxies for those processes.
It is certainly true that the researchers found a link between SES and the research activities undertaken; whether those activities represented the complex biological processes under scrutiny is a matter for conjecture. There could be several explanations of why activities undertaken by four-year-olds generate data correlated with SES.
Regardless, the research claims:
“Executive functions mediated the relation between socioeconomic status and mathematical skills. Children improved over training, but this did not transfer to untrained executive functions or mathematics. Executive functions may explain socioeconomic attainment gaps, but cognitive training directly targeting executive functions is not an effective way to narrow this gap.”
I remain unconvinced that a study of a contested model, using a partial definition that includes working memory and inhibitory control but not cognitive flexibility, and which adopts activities enacted by four-year-olds as proxies for biological processes, can be considered in any way “real.”
5. The conflation of experimentation and discourse
In my view, the consequence of an experiment that overshoots its limits is the conflation of experimentation and discourse. In the discussion based on the experiment, the researchers say things like:
“Firstly, SES may be associated with executive functions due to differences in parental scaffolding and responsiveness. The fact that links between SES and executive functions are apparent early in development suggests that parenting may be a key mechanism through which social inequality influences development.”
Fine, but what aspect of the experiment tested parental scaffolding? Again, the experiment generates discourse, not strong correlational or causal evidence.
Am I being harsh? Perhaps, but the research Ashman cites does seem highly speculative. It is hard to argue that this kind of RCT is the same as those studying causal or strongly correlational relations of objects in the natural world. Even so, the study may contribute to a body of knowledge, which may unlock some future door with regards to our understanding of executive function.
The problem is that educators use this kind of research to conflate knowledge, thinking-skills and creativity to drop the bombshell that there is no point teaching students to “think”.
As Ashman states:
“(…) it seems that if executive functions are indeed general, they cannot be generally improved by training.”
Is Ashman wrong to do that? After all, this is a gold-standard RCT, much vaunted by policy-makers and gurus.
To be fair, the researchers also highlight quite a number of serious reservations, which I have not included above, such as:
A further possible explanation of our results is that brief computerized cognitive training is more generally not an effective way to promote executive functions and mathematical skills. It is possible that particularly for preschoolers, for whom executive functions are not yet fully developed, brief computerized interventions that involve children completing specific tasks is not enough to improve executive function capacity. Interventions may need to be more sustained, or more importantly, they may need to be embedded within the learning tasks we wish to nurture. This is particularly pertinent to early mathematical skills where children may need practice while learning to apply executive functions strategies, and furthermore, may need instruction from others who can scaffold their learning and demonstrate learning principles.
Another limitation is that the intervention only focused on a single domain and did not intervene more broadly on factors such as classroom quality or family functioning. In order to narrow the social attainment gap, it is likely that sustained and broad interventions are needed that address inequalities at all levels, including structural barriers, the family and the broader learning environment.
Quite, but you do wonder at what point a methodological approach becomes unsuitable or simply not purposeful. If you can’t define something, how can you experiment on it?
This particular research team is not responsible for the discourse generated by a piece of research, but such arguments are increasingly working their way into the discourse and practice of education.
As I’ve written before, I worry about the impact of experimental cognitive science on education. I wonder whether research that intends to inform practice should adopt methods, and ways of writing up, with practice in mind. It might save education from having discursive bombs dropped into its midst.