New research from the University of East Anglia reveals that while AI-generated essays are grammatically sound, they lack the personal touch and engagement strategies found in student writing.
Key Points at a Glance
- Study compared 145 student essays with 145 ChatGPT-generated essays
- Student essays contained more rhetorical questions and personal commentary
- AI essays were coherent but lacked a clear stance and reader engagement
- Findings suggest AI tools should supplement, not replace, student writing
In an era where artificial intelligence is increasingly integrated into educational settings, a new study from the University of East Anglia (UEA) sheds light on the comparative quality of student and AI-generated essays. The research reveals that while ChatGPT can produce grammatically correct and coherent essays, it falls short in replicating the nuanced engagement strategies employed by human writers.
The study analyzed 145 essays written by university students and an equal number generated by ChatGPT. Researchers focused on “engagement markers,” such as rhetorical questions and personal commentary, which are techniques writers use to connect with readers and present persuasive arguments.
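To give a concrete sense of what counting "engagement markers" can look like, here is a minimal sketch in Python. The surface cues used below (question marks, reader-oriented pronouns, parentheticals as asides) are simplifying assumptions for illustration only; the study's actual annotation scheme is far more sophisticated.

```python
import re

# Illustrative only: a toy counter for surface-level engagement markers.
# The marker lists and heuristics here are assumptions, not the study's
# actual coding scheme.
READER_PRONOUNS = {"you", "your", "we", "us", "our"}

def count_engagement_markers(essay: str) -> dict:
    """Count rough proxies for three kinds of engagement markers."""
    # Split into sentences on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", essay.strip())
    rhetorical_questions = sum(1 for s in sentences if s.endswith("?"))
    # Tokenize to lowercase words and count reader-oriented pronouns.
    words = re.findall(r"[a-z']+", essay.lower())
    reader_references = sum(1 for w in words if w in READER_PRONOUNS)
    # Treat parentheticals as a crude proxy for personal asides.
    personal_asides = essay.count("(")
    return {
        "rhetorical_questions": rhetorical_questions,
        "reader_references": reader_references,
        "personal_asides": personal_asides,
    }

sample = ("Should we accept this claim at face value? I would argue not. "
          "Consider, as you read, the evidence (however limited) before us.")
print(count_engagement_markers(sample))
# → {'rhetorical_questions': 1, 'reader_references': 3, 'personal_asides': 1}
```

Comparing such counts across the two essay corpora would show, in miniature, the kind of quantitative gap the researchers describe.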
Professor Ken Hyland from UEA’s School of Education and Lifelong Learning noted that student essays consistently featured a richer array of these engagement strategies. “They were full of rhetorical questions, personal asides, and direct appeals to the reader – all techniques that enhance clarity, connection, and produce a strong argument,” he said.
In contrast, the AI-generated essays, while linguistically fluent, were more impersonal. They adhered to academic writing conventions but lacked a personal touch and did not take a clear stance on the topic. “They tended to avoid questions and limited personal commentary. Overall, they were less engaging, less persuasive, and there was no strong perspective on a topic,” Hyland added.
The research highlights a fundamental limitation of AI writing tools: they struggle to reproduce the personal engagement and critical thinking that characterize human writing. The researchers attribute this to the nature of AI training data and statistical learning methods, which prioritize coherence over conversational nuance.
Despite these shortcomings, the study does not dismiss the role of AI in education. Instead, it suggests that tools like ChatGPT should be used as teaching aids to support student learning rather than as replacements for human writing. “When students come to school, college or university, we’re not just teaching them how to write, we’re teaching them how to think – and that’s something no algorithm can replicate,” Hyland emphasized.
The findings underscore the importance of fostering critical literacy and ethical awareness in the digital age. As AI becomes more prevalent in educational contexts, educators are encouraged to guide students in using these tools responsibly, ensuring that the development of core literacy and critical thinking skills remains a central focus.
The study, conducted in collaboration with Professor Kevin Jiang of Jilin University, China, is published in the journal Written Communication under the title “Does ChatGPT write like a student? Engagement markers in argumentative essays.”
Source: University of East Anglia