Reading between the GenAI lines

Twenty authors’ perceptions of GenAI-summaries of their research and the implications for learning and teaching

Authors

  • Kay Hammond Auckland University of Technology
  • Samantha Newell Adelaide University

DOI:

https://doi.org/10.65106/apubs.2025.2719

Keywords:

Generative AI summary, author perception, author-reader relationship, learning and teaching, higher education, qualitative

Abstract

Students trust GenAI summaries of academic articles despite known issues of hallucinations and errors (Markowitz, 2024). Although this saves time, it comes at the cost of understanding and engagement with the research findings, their context, and their authors. In addition, within academic texts, writers craft a credible sense of self alongside a balanced representation of the subject matter to create a persuasive argument (Hyland, 2005; Ivanic & Camps, 2001; Matsuda & Tardy, 2007). Students are increasingly relying on GenAI to summarise academic articles (Newell & Dahlenburg, 2024). Changes in the way readers (in this case, student readers) engage with scholarly knowledge are often explored from the perspective of the reader (Schmidt & Meir, 2023; Xia et al., 2025); comparatively little attention has been paid to how authors feel about the proliferation of GenAI summaries of their work. As such, we explored journal article authors’ perceptions of the accuracy of GenAI summaries of their work.

We conducted semi-structured interviews with 20 Australasian-resident authors, asking each to compare a summary they wrote with an AI-generated summary (ChatGPT-4). Interview transcripts were analysed qualitatively through reflexive thematic analysis (Braun & Clarke, 2021). Three research questions were posed: 1) How accurate or inaccurate do authors perceive AI-generated summaries of their journal articles to be? 2) How do authors feel about the author-reader relationship when readers engage only with GenAI summaries? 3) What potential implications exist when GenAI summaries are used for learning and teaching?

Authors initially noted that the AI summaries appeared accurate overall, yet vague. However, as the interviews progressed, authors identified inaccuracies and omissions: missing context, disciplinary conventions and concepts, important findings and limitations, and the connection or attribution of ideas to prior authors. Participants described how AI summaries presented authors’ suggestions as established facts. The absence of methodological detail was concerning because methods sections document researchers’ decisions, underpin the validity of the findings, and provide training for novice researchers and students. Summaries of introduction sections omitted the prior literature, misrepresenting each study as an isolated unit rather than part of an ongoing knowledge-building conversation. AI summaries therefore lacked a sense of scholarly community, offering “no map to take readers to the key people...and further reading”.

Several authors described a sense of disconnection from the reader through the loss of their voice in the linguistically “flattened” output; this felt like a loss of individuality, as everyone’s English comes to sound the same. Some academic authors felt disappointed, robbed, or slapped in the face: the time they had invested in their work would not be reciprocated by readers who engage only with generated summaries.

We present a model that illustrates these issues of accuracy and the writer-reader relationship, which educators can use to guide students in critically evaluating AI outputs. With it, students can make informed choices about when to prompt iteratively and when to consult or read the full article. The model can also deepen students’ understanding of the author, the scholarly community, and the prior research from which a study has evolved.


Published

2025-11-28

Section

ASCILITE Conference - Posters
