Multiple-Choice Questions as an Integral Part of an Effective Assessment Regime

Not Only in Defense Of, but Advocating for Multiple Choice: More Reliable Student Performance Data and Greater Assessment Coverage

January 09, 2019
By: Jack Graves, JD, Touro Law Center

Last month, Harry Ballan and Dylan Wiliam (hereinafter B&W) wrote here “In Defense of Multiple Choice.” This post seeks to build upon a few of the threads they began.

B&W first point out the value of short items in terms of reliability. Short questions typically mean more questions, and more questions tend to produce more reliable student performance data by reducing the effect of luck. The same phenomenon provides an ancillary benefit of greater assessment coverage, which is particularly important in the context of formative assessment (hereinafter FA). Like its summative cousin, FA often includes an evaluative component, but its primary purpose is to improve the learning process. To the extent that such assessment fails to address any aspect of course content, that content is deprived of the intended learning enhancement. By focusing primarily on short-item questions such as multiple choice, FA can be deployed broadly across the entire spectrum of course content. This breadth can be further expanded by using effective distractors (see B&W Part II) to teach multiple concepts within a single question, leveraging not only the correct answer but one or more of the distractors as well. If one truly believes in the value of FA, this increased breadth of coverage is no small benefit. However, the value of short-item multiple choice in FA extends far beyond coverage.
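Before turning to feedback, a quick numerical sketch may help make the reliability point concrete. What follows is a minimal simulation (in Python) under purely illustrative assumptions: a hypothetical student who has truly mastered 70% of the testable points, answering each item independently. None of these numbers come from B&W; they simply show how the luck-driven spread of observed scores shrinks as the number of items grows.

```python
import random

random.seed(1)

TRUE_MASTERY = 0.70   # illustrative assumption: student has mastered 70% of testable points
TRIALS = 10_000       # number of simulated exam administrations

def observed_scores(n_items):
    """Simulate percent-correct scores on an exam of n_items independent items."""
    scores = []
    for _ in range(TRIALS):
        correct = sum(random.random() < TRUE_MASTERY for _ in range(n_items))
        scores.append(correct / n_items)
    return scores

for n in (5, 20, 50, 100):
    scores = observed_scores(n)
    mean = sum(scores) / TRIALS
    sd = (sum((s - mean) ** 2 for s in scores) / TRIALS) ** 0.5
    print(f"{n:>3} items: mean score {mean:.2f}, luck-driven spread (SD) {sd:.3f}")
```

On these assumptions, a five-item exam lets luck alone move the observed score by roughly twenty percentage points in either direction, while a fifty-item exam cuts that spread to about six. That is the statistical sense in which more, shorter questions yield more reliable performance data.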

The key to effective FA is feedback, whether provided directly to the student or to the instructor for purposes of adaptation or remediation. And the key to effective feedback is timing and frequency: the best feedback is virtually instantaneous and continuous throughout the course. Multiple-choice assessment provides the quickest feedback, in many cases built directly into the assessment platform. The speed of this feedback means that prompt follow-up is also possible, allowing for mid-course correction and further ongoing FA. Not only is the feedback quicker and more frequent, but it is also more precise.

One of the challenges in providing feedback on a student’s essay is that the student often fails to understand where she went wrong. The student reads your comment or your model answer and says (or at least thinks), “Yeah, that’s pretty much what I said, so I really had this one nailed,” when in fact she was miles off point. Unlike essays, where students will often read their answers as they wish they had written them (ignoring the actual words on the page), a multiple-choice answer is either right or wrong, with no shades of gray. The student is forced to confront, often in some detail, exactly why the correct answer was better than the one she chose. This often means the student learns something from the feedback that she would not have learned from feedback on an essay.

This sort of prompt, precise, and continuous feedback loop is simply unrealistic with long-item questions, such as essays. This is not to suggest that essay questions have no place in an effective assessment regime; quite the contrary. In fact, using short-item multiple choice as the staple of an assessment regime allows much more effective and targeted use of long-item essay questions.

While I would argue (as would B&W, I believe) that short-item multiple choice can provide solid opportunities for higher-level analysis, I believe we would all acknowledge the unique value of essay questions in requiring students to construct their own analyses on a blank slate, with little, if any, of the analytical scaffolding provided by four discrete multiple-choice answers. Because such a question provides uniquely valuable feedback, it should be constructed to maximize the structured writing exercise, without necessarily focusing on course coverage. Having left course coverage to short-item multiple choice, the instructor can do exactly that, writing a question that largely targets core concepts (thereby avoiding at least some of the luck noted by B&W) while requiring the sort of “build it from the ground up” analysis that most of us are looking for in a good law school essay.

An excellent example can be found in the summer 2017 Uniform Bar Exam essay question on third-party contract law issues. This was the only contract law essay question on that bar exam, and by focusing on third-party issues alone it ignored most of contract law. It did not need to address contract law broadly, however, as that task was amply handled by the contract law multiple-choice questions on the same exam. Instead, the bar examiners wrote a very good essay question touching on each of three very basic points involving third parties to contracts. This same combination (a few well-constructed essays alongside far more multiple-choice questions) can be very effective in law school.

B&W do a nice job of debunking some of the mythology surrounding the value of multiple choice. In this post, I hope I have added some thoughts on combining broad, frequent use of multiple-choice questions with selective, well-targeted use of essay questions to create an overall regime of effective formative and summative assessment.