Safeguarding the Vulnerable Versus Enabling Harm: The Dual Impact of AI in Detecting Abuse


AI-powered abuse prevention tools face major flaws and ethical concerns

AI is increasingly used to detect abuse and protect vulnerable groups such as foster children, nursing home residents, and students. These tools analyze language patterns and behaviors to spot risks early, but they also risk replicating existing biases and causing new harm.

The issue starts with AI models trained on flawed historical data. A 2022 study found that Allegheny County's child abuse risk-scoring tool, left unchecked, flagged Black children 20% more often than white children. Human oversight narrowed that gap but did not close it. Language AI struggles too: it mislabeled African American Vernacular English as aggressive up to 62% more often than standard English.


“AI systems risk scaling up these long-standing harms.”
Virginia Eubanks, sociologist and author of Automating Inequality

AI surveillance cameras in elder care miss real incidents while swamping staff with false alerts: a 2022 pilot in Australian care homes generated more than 12,000 false alarms in a single year. Schools use AI tools such as Gaggle, GoGuardian, and Securly to monitor students' online activity, but these tools flag normal or creative behavior and have outed LGBTQ+ students to parents or school officials.

“The program’s accuracy did ‘not achieve a level that would be considered acceptable to staff and management.’”
Independent report on Australian care homes AI pilot

The core problem: AI reflects the biases of its data, designers, and policies. Black and Indigenous families already face disproportionate surveillance and child welfare investigations.

Efforts to improve include Montana's new law, passed May 5, 2025, banning automated government decisions without human oversight. Researchers are also pushing for survivors to shape AI tools directly, rather than merely being subjects of surveillance.

Researchers propose these principles for safer AI use:

  1. Survivors control their own monitoring.
  2. Humans review AI decisions.
  3. Systems undergo regular bias audits (see the sketch after this list).
  4. Privacy is built in from the start.
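To make the third principle concrete, here is a minimal Python sketch of one common audit check, comparing flag rates across demographic groups (a demographic-parity test). The record format, group labels, and tolerance threshold are illustrative assumptions, not details of any system named in this article.

```python
# Minimal sketch of a flag-rate disparity audit (demographic parity check).
# The fields "group" and "flagged" and the 1.05 tolerance are assumptions
# for illustration, not taken from any deployed risk-scoring system.
from collections import defaultdict

def flag_rate_by_group(records):
    """Return the fraction of flagged cases for each demographic group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
    for rec in records:
        counts[rec["group"]][0] += 1 if rec["flagged"] else 0
        counts[rec["group"]][1] += 1
    return {group: flagged / total for group, (flagged, total) in counts.items()}

def disparity_ratio(rates):
    """Ratio of the highest to the lowest group flag rate; 1.0 means parity."""
    return max(rates.values()) / min(rates.values())

if __name__ == "__main__":
    # Toy data: group A flagged 24% of the time, group B 20% of the time,
    # i.e. a 1.2x ratio echoing the 20% gap reported for the Allegheny scores.
    sample = (
        [{"group": "A", "flagged": True}] * 24 + [{"group": "A", "flagged": False}] * 76 +
        [{"group": "B", "flagged": True}] * 20 + [{"group": "B", "flagged": False}] * 80
    )
    rates = flag_rate_by_group(sample)
    ratio = disparity_ratio(rates)
    print(f"Flag rates: {rates}, disparity ratio: {ratio:.2f}")
    if ratio > 1.05:  # assumed tolerance for the audit
        print("Disparity exceeds tolerance; escalate for human review.")
```

An audit like this only surfaces a gap; deciding what to do about it still requires the human review called for in the second principle.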

AI can help social workers spot risks early. But experts warn it can never replace human judgment and care.

“AI will never replace the human capacity for context and compassion. But with the right values at the center, it might help us deliver more of it.”
Aislinn Conrad, social worker and AI safety researcher at University of Iowa
