Design and evaluation of an LLM literature review assistant

Authors

  • Zichen Xia, University of Queensland
  • Aneesha Bakharia, University of Queensland

DOI:

https://doi.org/10.65106/apubs.2025.2703

Keywords:

AI-assisted writing, Academic writing support, Citation literacy

Abstract

Writing a literature review is a challenging task for many students, particularly when it comes to locating relevant sources, synthesising findings, and citing accurately. This study presents the design and evaluation of a web-based tool that uses Retrieval-Augmented Generation (RAG) to support students in writing short literature reviews. The system allows students to upload academic papers, write drafts, and receive rubric-aligned feedback from a Large Language Model (LLM). A user study involving ten honours and postgraduate students examined how learners engaged with the tool during a two-hour session. User interaction logs, rubric-based scores, and survey responses were analysed to evaluate learning outcomes and user experience. All participants showed measurable improvements in their writing. Students reported that the tool helped improve their reviews, particularly in areas such as citation guidance, writing structure, and synthesis. They also suggested enhancements such as grammar checking, better interface design, more concise feedback, and improvements to retrieval and citation validation. The findings suggest that LLM-powered feedback tools can effectively support academic writing when designed to encourage revision, reflection, and writing skill development.
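The abstract describes the system only at a high level. As a rough illustration of how a rubric-aligned RAG feedback loop of this kind could be wired together (a sketch under stated assumptions, not the authors' implementation), the Python snippet below retrieves draft-relevant excerpts from uploaded papers and combines them with rubric criteria into a feedback prompt. The chunking scheme, the rubric wording, and the embed() and call_llm() helpers are all placeholders introduced here for illustration.

```python
# Minimal sketch of a rubric-aligned RAG feedback loop (illustrative only).
# chunk(), embed(), call_llm() and the rubric text are assumptions, not the
# published system; embed() and call_llm() stand in for a real embedding
# model and LLM API.
import math
from collections import Counter

def chunk(text, size=120):
    """Split an uploaded paper into overlapping word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def embed(text):
    """Placeholder embedding: a term-frequency vector (swap in a real model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(draft, papers, k=3):
    """Rank chunks from the uploaded papers by similarity to the student draft."""
    chunks = [c for paper in papers for c in chunk(paper)]
    q = embed(draft)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical rubric criteria for a short literature review.
RUBRIC = ["Coverage of sources", "Synthesis across papers",
          "Accuracy of citations", "Structure and flow"]

def build_feedback_prompt(draft, papers):
    """Assemble retrieved source excerpts and rubric criteria into one prompt."""
    context = "\n---\n".join(retrieve(draft, papers))
    criteria = "\n".join(f"- {c}" for c in RUBRIC)
    return ("You are a literature review writing tutor.\n"
            f"Source excerpts:\n{context}\n\n"
            f"Student draft:\n{draft}\n\n"
            f"Give feedback against each rubric criterion:\n{criteria}")

def call_llm(prompt):
    """Placeholder for a call to an LLM provider."""
    raise NotImplementedError("Connect to an LLM API here.")
```

Grounding the feedback prompt in retrieved excerpts from the uploaded papers, rather than on the draft alone, is the usual motivation for using RAG in this setting: it gives the LLM source material to check claims and citations against instead of relying on its own recall.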


Published

2025-11-28

Section

ASCILITE Conference - Concise Papers
