Assessment after Artificial Intelligence: The Research We Should Be Doing

Authors

DOI:

https://doi.org/10.53761/w3x5y804

Keywords:

Assessment, AI-integrated assessment, artificial intelligence, programmatic assessment, assessment integrity

Abstract

The emergence of widely available artificial intelligence (AI) tools has made assessment in higher education increasingly uncertain. Familiar (if problematic) assumptions about what assessment does or should measure, who or what is being assessed, and how judgments are made are all being reexamined. Educators and researchers are experimenting with new assessment designs, but the emerging research landscape is fragmented and difficult to navigate. There is little shared sense of what kinds of studies are most needed or how their findings might connect. To address this, a group of leading assessment scholars met in Melbourne, Australia in September 2025 to develop a collective research agenda to help guide and connect future inquiry. This paper presents the outcomes of that collaboration: a set of guiding principles and framing questions – why, who, what, how, and where we assess – that together offer a structure to guide and support the research we should be doing on assessment after AI.

Published

2025-12-03

How to Cite

Assessment after Artificial Intelligence: The Research We Should Be Doing. (2025). Journal of University Teaching and Learning Practice, 22(7). https://doi.org/10.53761/w3x5y804
