Flagged by design?
The intersection of gender, racial and neurodiversity bias in AI proctoring and academic judgment
DOI:
https://doi.org/10.65106/apubs.2025.2687
Keywords:
online exam proctoring, algorithmic bias, racial/gender bias, neurodiverse students, meta-analysis, case vignette, mixed-method
Abstract
While academic misconduct prevention has traditionally focused on student behaviour, limited attention has been paid to the role of educator judgment—particularly under the influence of implicit bias and AI-generated suspicion. As online proctoring software becomes more prevalent, concerns are growing about systemic disadvantages experienced by specific student groups, particularly female students, students of colour, and neurodiverse learners. This positioning paper explores how AI-driven proctoring technologies, combined with the cognitive demands placed on academics, may inadvertently amplify reliance on bias and heuristic judgment in academic misconduct decisions. Emerging evidence suggests that certain student groups are disproportionately flagged by proctoring systems and subjected to harsher scrutiny, raising concerns about procedural fairness and equity in online assessments. Rather than reporting empirical findings, this paper outlines a research agenda to investigate how identity-related cues influence both AI flagging and academic judgment. We propose a mixed-method approach—combining meta-analysis with vignette-based quasi-experiments—to critically examine the intersection of bias, surveillance, and academic integrity.
License
Copyright (c) 2025 Mark Gorringe, Karen Williams, Duncan Murray

This work is licensed under a Creative Commons Attribution 4.0 International License.