Significant Rise in UK Students Employing AI to Cheat for Grades


UK universities caught 7,000 students cheating with AI in one year

Freedom of Information requests to 131 UK universities revealed 7,000 cases of AI-assisted cheating in the 2023–24 academic year. That's 5.1 cases per 1,000 students, up from just 1.6 per 1,000 the year before.

Plagiarism remains the most common academic offense, but plagiarism rates are falling sharply as AI tools rise, and they are expected to roughly halve again this year.


More than a quarter of universities don't yet track AI cheating as a separate category. Even so, cases are projected to reach 7.5 per 1,000 students this academic year, while other forms of cheating remain mostly flat.

Tech giants know students are hooked. OpenAI offers free ChatGPT to students with .edu email addresses. Microsoft gives three free months of Copilot plus 50% off subscriptions. Google hands out a full year of Gemini 2.5 Pro, the Veo 2 video generator, and 2TB of storage. Anthropic struck a deal with the London School of Economics for its Claude chatbot.

Perplexity and Reclaim.ai also push deep discounts or free access for students. The tactic: lock users early to keep them loyal.

The AI cheating issue isn't UK-specific. Pew Research finds that 26% of US teens have used ChatGPT for schoolwork, double the share from a year earlier.

One US court rejected a suit from parents whose son was penalized for AI use, though the student was later reinstated to the National Honor Society.

Teachers are split on AI use. Some compare it to calculators. Others crack down hard with shaky AI-detection tools.

Old-school exams push back. UC Berkeley saw an 80% rise in blue-book sales as handwritten essays return.

China goes further. During the gaokao exams, ByteDance and DeepSeek freeze their AI tools. Phones are banned and signals are blocked, while AI systems monitor both students and proctors.

State media reports AI vigilance inside exam halls is the new normal there.

The West likely won't match China's strictness. But students clearly use AI, and arguably they should, since it will be part of their future workflows.

Still, foundational knowledge matters for catching AI mistakes, known as hallucinations.

No easy fix. But AI cheating is climbing fast.


The Guardian report provides full details.
