AI Snitches: New Effort Calls on Big Tech to Put Privacy First in Agentic AI
Digital privacy experts are sounding the alarm on the dangers agentic AI poses to Signal groups like ICE Watch. A new letter campaign demands that OpenAI, Meta, Microsoft, and more put the privacy and safety of humans first.
Amid unprecedented authoritarian surveillance investments and federal pressures on AI companies to conduct government surveillance, digital rights group Fight for the Future is launching a new campaign calling on Big Tech and Big AI to prioritize privacy in agentic AI development.
Most agentic AI systems are designed to read and save everything that appears on their users' screens—whether or not the messages they view are private, or the documents they scan are encrypted. There is generally no way to exempt specific applications. Instead, agentic AI uploads this information to the cloud and stores it in a way that may be accessible to the AI company, as well as to anyone who hacks or subpoenas it, for years to come. This poses a new and dire threat to movements that rely on end-to-end encrypted technologies like Signal.
The letter reads in part:
“Unless AI leaders come together and agree on transparent and uncompromising privacy and safety architecture for agentic AI that matches or exceeds the benefits of end-to-end encryption, this technology will remain too dangerous for us to ever trust.”
The full letter is available and open for public sign-on at https://AISnitches.org/
Lia Holland (they/she), Campaigns and Communications Director at Fight for the Future, said: “Just last month we saw an AI agent force Meta’s Head of Safety and Alignment to manually shut the tool down because it wouldn’t listen to her commands. The risks these powerful technologies pose to non-experts cannot be overstated—especially when it comes to the safety and privacy of activists, immigrants, and other communities living under the threat of surveillance. We know the Trump administration is hellbent on funneling our entire online lives into its surveillance machine. With agentic AI, Big Tech is building a tool that promises convenience but actually compromises anyone who contacts the person who installs it. Agentic AI is a new and concerning path by which bad actors could spy on, subpoena, or plain steal Signal chat logs that everybody thought were end-to-end encrypted. This cannot stand. We need to send a resounding message: the only good AI agent is a private AI agent. And in the meantime, anyone who communicates with a person the government might want to surveil must avoid this tech like the plague.”