Towards scalable curriculum mapping
Comparing human and GenAI alignment of CLOs to professional standards
DOI: https://doi.org/10.65106/apubs.2025.2661

Keywords: Curriculum mapping, Generative AI, Engineers Australia, Accreditation, Engineering

Abstract
Curriculum mapping is a critical component of accreditation and continuous improvement in education. However, aligning Course Learning Outcomes (CLOs) to professional standards such as Engineers Australia's Stage 1 Competency Standard remains time-consuming and labour-intensive. This study evaluates the potential of generative AI (GenAI) to support curriculum mapping by comparing automated outputs with expert human judgement. A stratified sample of 141 (10%) first-year CLOs from a national dataset was analysed using both manual review and GenAI (ChatGPT 4o). Each CLO was mapped to the 16 EA Stage 1 Competencies, and outcomes were classified as Match, Manual Only, GenAI Only, or Neither. Overall agreement was 81.2%, with particularly strong alignment in professional and personal attributes. Mismatches were most common in technical competencies, where GenAI over-mapped based on keywords or under-mapped due to limited contextual understanding. These findings suggest GenAI can support curriculum review at scale but requires expert oversight for nuanced or discipline-specific outcomes. This work-in-progress study contributes to the growing literature on AI in education and offers practical insights into hybrid approaches for accreditation and curriculum design.
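To illustrate the classification scheme described above, the following is a minimal sketch (not the authors' code) of how each CLO-competency pair could be labelled as Match, Manual Only, GenAI Only, or Neither once the manual and GenAI mappings are recorded, and how an overall agreement figure could be derived. The function name, sample records, and the agreement definition (both mapped or both unmapped) are illustrative assumptions; the study's exact procedure may differ.

```python
from collections import Counter

def classify_pair(manual_mapped: bool, genai_mapped: bool) -> str:
    """Label one CLO-competency pair based on which reviewer(s) mapped it."""
    if manual_mapped and genai_mapped:
        return "Match"
    if manual_mapped:
        return "Manual Only"
    if genai_mapped:
        return "GenAI Only"
    return "Neither"

# Hypothetical records: (CLO id, EA competency id, manual mapping, GenAI mapping)
records = [
    ("CLO-001", "EA-1.1", True, True),
    ("CLO-001", "EA-2.1", True, False),
    ("CLO-002", "EA-3.2", False, True),
    ("CLO-002", "EA-3.4", False, False),
]

counts = Counter(classify_pair(m, g) for _, _, m, g in records)

# "Agreement" here is taken as the share of pairs where human and GenAI
# agree (mapped by both, or by neither); an assumed, not confirmed, definition.
agreement = (counts["Match"] + counts["Neither"]) / len(records)
print(counts, f"agreement = {agreement:.1%}")
```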
License
Copyright (c) 2025 Zachery Quince

This work is licensed under a Creative Commons Attribution 4.0 International License.