Researchers warned that autonomous AI swarms could make large-scale misinformation campaigns harder to detect and stop.
A new paper published in Science said AI-driven influence operations require little human oversight.
The study was authored by researchers from institutions including Oxford, Cambridge, UC Berkeley, NYU, and the Max Planck Institute.
Unlike traditional botnets, AI swarms can adapt messaging and vary behaviour to mimic human users.
The researchers said these systems could sustain influence campaigns over long periods rather than short political cycles.
“In the hands of a government, such tools could suppress dissent or amplify incumbents,” the researchers said.
AI swarms were described as exploiting existing weaknesses in social media algorithms and content curation systems.
The paper noted that false information spreads faster than accurate reporting on many platforms.
Sean Ren said AI-driven accounts are becoming increasingly difficult to distinguish from genuine users.
“I think stricter KYC, or account identity validation, would help a lot here,” Ren said.
The researchers warned that current platform safeguards may struggle to identify coordinated AI activity.
The paper concluded that no single technical solution exists and called for transparency and stronger identity controls.