Independent MP Kate Chaney is pushing a new bill in Australia to criminalise AI tools designed to generate child sexual abuse material.
The bill targets the growing use of AI generators that produce illegal content, a gap not covered by existing laws on possession and distribution. These tools are widely accessible online, attracting millions of visits, and they can generate material offline, making police investigations harder.
Chaney says the issue is urgent and can’t wait for the broader government response to AI regulation. She met with Attorney-General Michelle Rowland’s office, which acknowledged the legal gap.
The bill would introduce offences for using digital services to download, supply, or facilitate AI tools that create child abuse material. It also targets scraping or distributing data meant to train these tools. Penalties could reach 15 years in prison.
The bill includes a defence allowing law enforcement and intelligence agencies to investigate with proper authorisation.
Kate Chaney:
“These tools enable the on-demand, unlimited creation of this type of material, which means perpetrators can train AI tools with images of a particular child, delete the offending material so they can’t be detected, and then still be able to generate material with word prompts.
It also makes police work more challenging. It is [getting] harder to identify real children who are victims.
And every AI abuse image starts with photos of a real child, so a child is harmed somewhere in the process.”
Child safety experts back the bill, calling it an urgent fix. Former detective inspector Jon Rouse said:
“While existing Australian legislation provides for the prosecution of child sexual abuse material production, it does not yet address the use of AI in generating such material.”
Colm Gannon from the International Centre for Missing and Exploited Children said Chaney’s bill closes a clear gap.
Attorney-General Michelle Rowland responded:
“As Attorney-General, I am fully committed to combating child sexual exploitation and abuse in all settings, including online, and the government has a robust legislative framework in place to support this.
Keeping young people safe from emerging harms is above politics, and the government will carefully consider any proposal that aims to strengthen our responses to child sexual exploitation and abuse.”
The government is still working on a wider AI strategy, including a review of the Online Safety Act and of other AI risks such as “nudify” apps. Meanwhile, advocates say these child abuse AI tools have no public benefit and are calling for their immediate criminalisation.
Chaney stresses that AI regulation is a top priority this term: closing legal gaps now while a coordinated national approach is developed.
The issue is heating up as AI technology races forward and governments scramble to keep pace.