Assessment after Artificial Intelligence: The Research We Should Be Doing
DOI: https://doi.org/10.53761/w3x5y804

Keywords: Assessment, AI-integrated assessment, artificial intelligence, programmatic assessment, assessment integrity

Abstract
The emergence of widely available artificial intelligence (AI) tools has made assessment in higher education increasingly uncertain. Familiar (if problematic) assumptions about what assessment does or should measure, who or what is being assessed, and how judgments are made are all being reexamined. Educators and researchers are experimenting with new assessment designs, but the emerging research landscape is fragmented and difficult to navigate. There is little shared sense of what kinds of studies are most needed or how their findings might connect. To address this, a group of leading assessment scholars met in Melbourne, Australia, in September 2025 to develop a collective research agenda to help guide and connect future inquiry. This paper presents the outcomes of that collaboration: a set of guiding principles and framing questions – why, who, what, how, and where we assess – that together offer a structure for guiding and supporting the research we should be doing on assessment after AI.
License
Copyright (c) 2025 Dr Thomas Corbin, Professor Margaret Bearman, Professor David Boud, Dr Nicole Crawford, Professor Phillip Dawson, Associate Professor Tim Fawns, Professor Michael Henderson, Professor Jason Lodge, Assistant Professor Jiahui (Jess) Luo, Professor Kelly Matthews, Associate Professor Kelli Nicola-Richmond, Dr Juuso Henrik Nieminen, Associate Professor Nicole Pepperell, Dr Zachari Swiecki, Associate Professor Joanna Tai, Dr Jack Walton

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.