Assessment in the age of AI
An applied, authentic, and AI-aware assessment framework for MCQs
DOI: https://doi.org/10.65106/apubs.2025.2767
Keywords: authentic assessment, AI-aware, MCQs
Abstract
Generative artificial intelligence (AI) is reshaping assessment strategies in higher education. Especially in units with online content, educators face increasing pressure to assess foundational knowledge with both authenticity and integrity. Multiple-choice questions (MCQs), long valued for their scalability and efficiency, are now perceived as inauthentic and vulnerable to AI-enabled academic misconduct. A common response has been to abandon online MCQs altogether. This, however, risks leaving foundational knowledge unassessed, creating knowledge gaps and weakening the cognitive scaffolding essential for higher-order thinking and complex skills (Sweller, 2011).
We present a strategy that reframes MCQs as authentic, AI-aware instruments that support sustainable educator workload and motivate deeper learner engagement. Building on Action Mapping (Moore, 2017), a method well known in corporate instructional design but under-utilised in higher education, we first shifted questions from rote recall to decisions anchored in real-world scenarios where foundational knowledge informs action. The result is authentic questions that help learners navigate real-world situations, outperforming people who lack the knowledge and skills developed in the unit.
Next, we added a novel layer to the framework to make the questions AI-resistant. This layer exploits inherent shortcomings of models trained on large language corpora, along with entrenched reasoning biases common across large language models (LLMs). The result is a suite of questions that top-performing AI models consistently and confidently answer incorrectly.
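To make this vetting step concrete, the sketch below shows one way a question bank could be checked against a model so that questions the AI still answers correctly are flagged for another revision pass. This is a minimal illustration under stated assumptions, not the authors' tooling: the names `MCQ`, `ask_llm`, `llm_success_rate`, and `needs_refinement`, and the 20% threshold, are all hypothetical and introduced here purely for illustration.

```python
# Minimal sketch of an AI-awareness vetting loop for draft MCQs.
# All names here (MCQ, ask_llm, llm_success_rate, needs_refinement)
# are illustrative assumptions, not taken from the abstract.

from dataclasses import dataclass
from typing import Callable


@dataclass
class MCQ:
    stem: str                # scenario-based question text
    options: dict[str, str]  # option key -> option text, e.g. {"A": "..."}
    answer: str              # key of the correct option


def llm_success_rate(q: MCQ, ask_llm: Callable[[str], str],
                     trials: int = 5) -> float:
    """Fraction of trials in which the model picks the correct option.

    `ask_llm` is a stand-in for a real LLM API call that takes a
    prompt and returns the model's text reply.
    """
    prompt = (
        q.stem
        + "\n"
        + "\n".join(f"{key}) {text}" for key, text in q.options.items())
        + "\nAnswer with the letter of the single best option."
    )
    hits = sum(
        1 for _ in range(trials)
        if ask_llm(prompt).strip().upper().startswith(q.answer.upper())
    )
    return hits / trials


def needs_refinement(bank: list[MCQ], ask_llm: Callable[[str], str],
                     threshold: float = 0.2) -> list[MCQ]:
    """Flag questions the model answers too reliably; these go back
    to the authors for another AI-aware revision pass."""
    return [q for q in bank if llm_success_rate(q, ask_llm) > threshold]
```

In practice a check like this would be repeated against several top-performing models, since the claim is that the final questions falter consistently across them, not just against a single system.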
This design process led to what we call A+ MCQs, questions that are:
- Applied: Rooted in discipline-specific, real-world decisions requiring knowledge-based judgement.
- Authentic: Reflective of lived or professional realities to increase relevance and engagement.
- AI-aware: Iteratively refined until AI responses consistently falter, ensuring human cognition drives success.
A+ MCQs were piloted in PSY2014: Mental Health in the Digital Age, a second-year undergraduate psychology unit. The result was a robust bank of A+ MCQs serving multiple purposes: reinforcing essential knowledge for complex tasks later in the learner's journey, connecting knowledge with practical choices, and allowing efficient automated marking without compromising assessment integrity or quality. These questions also clarify the current relationship between professionals and AI by showing learners concrete instances in which a person who retains and applies knowledge outperforms AI.
The session will showcase how traditional MCQs were transformed into A+ questions, provide side-by-side comparisons, and share practical co-development strategies using the TPACK lens. Attendees will leave with a design guide and tips for creating authentic, AI-resilient assessments that maintain both pedagogical rigour and operational efficiency in a world increasingly shaped by AI.
License
Copyright (c) 2025 Jeremy Stothers, Yogita Ahuja, Prerna Varma

This work is licensed under a Creative Commons Attribution 4.0 International License.