Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI)

Authors

  • Joseph Crawford, University of Tasmania, Australia
  • Michael Cowling, Central Queensland University, Australia
  • Kelly-Ann Allen, Monash University, Australia

DOI:

https://doi.org/10.53761/1.20.3.02

Keywords:

ChatGPT, OpenAI, artificial intelligence, large language model, student character, academic integrity

Abstract

OpenAI’s ChatGPT, or Chat Generative Pre-Trained Transformer, was released in November 2022 with little warning and has since taken higher education by storm. The artificial intelligence (AI)-powered chatbot has alarmed practitioners seeking to verify the authenticity of student work. Whereas some educational doomsayers predict the end of education in its current form, we propose an alternative early view. In this commentary, we identify a position from which educators can leverage AI such as ChatGPT to build supportive learning environments for students who have cultivated good character. Such students know how to use ChatGPT for good and can engage effectively with the application. In building our argument, we acknowledge the existing literature on plagiarism and academic integrity, and consider leadership as a root support mechanism, character development as an antidote, and authentic assessment as an enabler. We highlight that while ChatGPT, like the paper mills and degree factories before it, can be used to cheat on university exams, it can also be used to support deeper learning and better learning outcomes for students. In doing so, we offer a commentary that presents opportunities for practitioners and research potential for scholars.

Published

2023-04-02

Issue

Vol. 20 No. 3 (2023)

Section

Articles

How to Cite

Crawford, J., Cowling, M., & Allen, K.-A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching and Learning Practice, 20(3). https://doi.org/10.53761/1.20.3.02