Experts Suggest AI Manipulation Likely in Video of Chinese Paraglider Peng Yujiang


Footage of a Chinese paraglider was manipulated with AI tools, experts say. The remarkable video of 55-year-old Peng Yujiang, who claimed he was sucked into the upper atmosphere, is now under scrutiny. Major outlets including the ABC and the BBC circulated the clip, which was sourced from Chinese state media outlet CCTV via Reuters.

Questions emerged when Reuters distanced itself from the content, stating, "The content is clearly labelled as third-party content and is not verified or endorsed by Reuters." The agency withdrew the video after discovering it likely included AI-generated elements.

Soon after, the ABC retracted its original story, posting an editor's note saying Peng's claim of reaching 8,598 metres could not be independently verified. Experts had pointed out discrepancies in the footage, raising red flags about its authenticity.


Associate Professor Abhinav Dhall from Monash University noted that the video’s low quality made manipulation hard to detect. He stated, "If we closely observe the starting say 3 or 4 seconds of this video we can see that the clouds in the background do not really look real."

"It didn’t seem overly dodgy or suspicious at first glance especially looking at it on a small smartphone screen with our attention frayed," said RMIT researcher TJ Thomson.

Dhall highlighted how "subtle manipulations" can be tough even for experts to detect. Thomson added, "You can pick up little things — the colour of the helmet, for example, changing colour."

The episode raises questions about the reliability of crowd-sourced footage in news reporting, especially with the rise of generative AI. "We see 728,000 hours of video being uploaded online every day," Dhall warned. "It's really hard for journalists to fact-check."

Australia’s Media, Entertainment, and Arts Alliance (MEAA) voiced concerns over misinformation, pushing for government regulation of AI.

"Our members have been telling us that they are concerned about misinformation and disinformation and the potential erosion of public trust in journalism and the media," a spokesperson said.

MEAA chief Erin Madeley stressed that generative AI could jeopardize the public’s ability to distinguish fact from fiction.

Dhall emphasized that the industry and government must adapt. "I reckon it will take some time for systems — automatic systems and human observers — to get on page and quickly realize that something is fake or not."

The case spotlights the urgent need for clarity around AI usage in media as the boundaries between real and manipulated content blur.

