I’ve been pondering whether multiple choice questions assess “higher order thinking”.
It is true that many selected-response questions do measure only shallow learning, but well-designed selected-response items can probe student understanding in some depth.
Dylan Wiliam gives some good examples of this type of question. This one is from his site:
What can you say about the means of the following two data sets?
Set 1: 10 12 13 15
Set 2: 10 12 13 15 0
A. The two sets have the same mean.
B. The two sets have different means.
C. It depends on whether you choose to count the zero.
Daisy Christodoulou says this about it:
As he says, ‘this latter option goes well beyond assessing students’ facility with calculating the mean and probes their understanding of the definition of the mean, including whether a zero counts as a data point or not.’ I would add that these types of questions can offer better feedback than open-response questions. If you deliberately design a question to include a common misconception as a distractor, and the pupil selects that misconception, then you have learnt something really valuable, far more valuable than if they simply don’t answer an open-response question.
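For what it’s worth, the arithmetic bears this out: the zero adds nothing to the sum but still counts as a data point, so it increases the divisor and lowers the mean. A quick sketch in Python:

```python
set1 = [10, 12, 13, 15]
set2 = [10, 12, 13, 15, 0]

# The mean is the sum of the data points divided by their count;
# a zero contributes nothing to the sum but still counts as a point.
mean1 = sum(set1) / len(set1)  # 50 / 4 = 12.5
mean2 = sum(set2) / len(set2)  # 50 / 5 = 10.0
```

So the means differ, and option B is the defensible answer.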
Don’t get me wrong: attempting to improve multiple choice questions for “learning” is no bad thing. But mention “higher order thinking” and the alarm bells start ringing.
In my view, this type of assessment encourages teaching to the test. A good teacher knows that questions are often repeated and the same tricks used year in, year out. Exam boards get lazy, and the cost of maintaining quality must be high.
The question above is a good example. The idea that a zero might not be included in the data set is an “erroneous proposition”. If you know the definition of “mean” you do not need to be told whether a zero is included; it is implicit in the definition.
Even so, teachers will have to teach the “erroneous proposition” when it crops up on a past paper, just in case. Even good students can think themselves into errors. Students who are insecure in their knowledge, or simply prone to challenging their own beliefs, could start trying to game the question, looking for complexity that isn’t there.
I think we can disregard the notion of a “misconception” here. If someone believes something incorrectly (as opposed to being confronted by an erroneous proposition) then they will get it wrong. They may know the definition of the mean, and how to calculate it, but the misconception is their undoing. Testing incorrect beliefs (misconceptions) does not test higher order thinking; it just tests for the misconception.
Another example is this:
15. How did the Soviet totalitarian system under Stalin differ from that of Hitler and Mussolini?
A. It built up armed forces.
B. It took away human rights.
C. It made trade unions illegal.
D. It abolished private land ownership.
Daisy Christodoulou says this:
I think this is an excellent question which very definitely asks for higher order thinking. The reason why I think this question is so good is that it tests a finer gradation of understanding. Everyone knows the Nazis and Soviets were evil, and because they were evil, it is easy for pupils to just think that their regimes were the same. And of course the regimes were very similar. But they were different in interesting ways too, and this question probes that. A pupil who got this question right would have understood something important. A pupil who didn’t would have misunderstood something quite important.
This question seems to me to be entirely dependent upon knowledge propositions, not higher order thinking. I know both did “A” and “B”. Mussolini did “C” (arguably so did Soviet Russia), and I suspect Soviet Russia did “D”, based on Marx’s view on private ownership, so I presume the answer is “D”. Even so, I’m not sure that any higher order thinking is going on. I know Marx’s view on private ownership; I don’t need to know why he held that view in order to answer the question. A more sophisticated answer would suggest that all totalitarian regimes do all four but use different discursive structures to justify what they are doing.
More problematic, a teacher, without the accompanying thought process, cannot know how, or why, a student arrived at an answer. Multiple choice questions are blunt instruments. They do not differentiate between those who don’t know the relevant definitions or propositions, those who hold common misconceptions, those who are insecure in their knowledge, and those who get lucky. The ability to explain how an answer was derived is surely valuable. A teacher cannot rely on the assumption that someone got it wrong because of the misconception. It would be necessary to find out why someone got it wrong, which essentially means reverting to open questions.
Multiple choice questions are a quick and dirty form of assessment: a very useful, and cheap, way to triangulate data generated by qualifications that are largely institutionally assessed, such as BTECs. As a teacher I like them because they can be gamed.
The problem for me is that multiple choice questions seem symptomatic of the neo-traditionalist desire to reduce knowledge to simple formulaic propositions: assessment easily replicated by computers, useful for checking that “the best that has been said and done” in the past has been transmitted faithfully. “Higher order thinking”? I remain somewhat unconvinced.