AI use in academic writing faces new calls for strict transparency
Recent guidelines from journals and ethics groups demand clear disclosure when AI tools generate or assist with academic texts. The move targets the increasingly blurred line between human and AI authorship.
AI-assisted work, such as grammar checks or style edits, generally needs no citation. But AI-generated content, where a tool writes entire sections or substantial passages, must be explicitly referenced, or the manuscript risks rejection.
The Committee on Publication Ethics and publishers such as Sage emphasize that authors are fully responsible for verifying the accuracy and integrity of AI content. They warn of AI “hallucinations” and of plagiarism potentially hidden inside generative outputs.
Summarizing the rules:
- Use AI for routine tasks (grammar, sentence structure) without disclosure.
- Clearly cite any substantive AI-generated content with tool name, date, and prompt (see the example after this list).
- Include indirect AI uses (code fixes, figures) in manuscript acknowledgments, not main text.
- Authors remain liable for bias, plagiarism, and copyright issues.
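To illustrate, a disclosure for generated text might read something like the following. This is a hypothetical example, not a prescribed format; exact requirements vary by publisher:

OpenAI. (2025). ChatGPT [Large language model]. Text generated in response to the prompt “Summarize recent findings on X,” 12 May 2025.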
Professor Sumaya Laher from the University of the Witwatersrand explains:
“The guidelines are unanimous that AI tools cannot be listed as co-authors or take responsibility for the content. Authors remain fully responsible for verifying the accuracy, ethical use and integrity of all AI-influenced content. Routine assistance does not need citation, but any substantive AI-generated content must be clearly referenced.”
If in doubt, she urges, disclose all AI use in the acknowledgments. The policies will likely evolve, but transparency is already non-negotiable.
AI is here to stay. Academia just got serious about owning its role.