Design and evaluation of an LLM literature review assistant
DOI: https://doi.org/10.65106/apubs.2025.2703

Keywords: AI-assisted writing, Academic writing support, Citation literacy

Abstract
Writing a literature review is a challenging task for many students, particularly when it comes to locating relevant sources, synthesising findings, and citing accurately. This study presents the design and evaluation of a web-based tool that uses Retrieval-Augmented Generation (RAG) to support students in writing short literature reviews. The system allows students to upload academic papers, write drafts, and receive rubric-aligned feedback from a Large Language Model (LLM). A user study involving ten honours and postgraduate students examined how learners engaged with the tool during a two-hour session. User interaction logs, rubric-based scores, and survey responses were analysed to evaluate learning outcomes and user experience. All participants showed measurable improvements in their writing. Students reported that the tool helped improve their reviews, particularly in areas such as citation guidance, writing structure, and synthesis. They also suggested enhancements such as grammar checking, better interface design, more concise feedback, and improvements to retrieval and citation validation. The findings suggest that LLM-powered feedback tools can effectively support academic writing when designed to encourage revision, reflection, and writing skill development.
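The feedback loop the abstract describes, retrieving passages from student-uploaded papers and prompting an LLM with a rubric, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the keyword-overlap retriever stands in for a real embedding index, and the prompt builder stands in for an actual LLM API call. All function names and the example rubric are hypothetical.

```python
# Hypothetical sketch of a rubric-aligned RAG feedback pipeline.
# A production system would use vector embeddings for retrieval and
# send the assembled prompt to an LLM; here both are simple stand-ins.

def retrieve(draft_text, papers, k=2):
    """Rank uploaded paper passages by naive word overlap with the draft."""
    draft_words = set(draft_text.lower().split())
    scored = sorted(
        papers,
        key=lambda p: len(draft_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_feedback_prompt(draft_text, rubric, passages):
    """Assemble the prompt: rubric criteria + retrieved sources + draft."""
    criteria = "\n".join(f"- {c}" for c in rubric)
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Give rubric-aligned feedback on the draft below.\n"
        f"Rubric:\n{criteria}\n"
        f"Retrieved sources:\n{sources}\n"
        f"Draft:\n{draft_text}\n"
    )

# Example usage with toy data (all content invented for illustration).
papers = [
    "Retrieval augmented generation grounds model output in source documents.",
    "Citation accuracy improves when students check original papers.",
    "Unrelated passage about marine biology and coral reefs.",
]
rubric = ["Synthesis across sources", "Accurate citation", "Clear structure"]
draft = "Students improve citation accuracy by checking source papers."

top_passages = retrieve(draft, papers)
prompt = build_feedback_prompt(draft, rubric, top_passages)
```

The design choice worth noting is that the rubric is injected into the prompt itself, which is one plausible way to obtain the "rubric-aligned feedback" the study evaluates.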
License
Copyright (c) 2025 Zichen Xia, Aneesha Bakharia

This work is licensed under a Creative Commons Attribution 4.0 International License.