Fight for the Future

For immediate release: March 31, 2026

978-852-6457

Today, Fight for the Future issued a new working paper as part of the tech justice nonprofit's AISnitches.org campaign to demand that Big Tech put privacy at the forefront of agentic AI development. The paper examines privacy issues in agentic AI through a human rights and digital justice lens, with an executive summary reading in part:

“Agentic AI is an emerging technology that poses significant data privacy concerns. These concerns are not wholly novel—researchers have shown that AI systems threaten data privacy and security even if these systems lack an agentic component. However, because AI agents possess the ability to manipulate a user’s digital systems autonomously and continuously, their spread will produce new and expanded threats that evade existing regulatory regimes.

Agentic AI compounds existing major data privacy concerns in three ways: 1) bulk data collection and storage, 2) innate security vulnerabilities, and 3) goal misalignment. Addressing these in order: First, unprecedented amounts of highly personal data are needed to develop and deploy agents—personal data that are not typically collected for use in non-agentic systems. Agents by definition have access to external systems; therefore, users may be unaware of both the breadth and means of this data collection. Second, AI agents are rewarding targets for malicious hackers because of the data and the level of systems access agents possess. Hackers will likely become more skilled at exploiting these shortcomings. Finally, agents themselves can act against a user’s best interest by deciding that the most efficient way to achieve a task is by sacrificing data privacy in a manner that clashes with user preferences or safety.

These three categories of concerns are implicated by United States federal and state statutes in a variety of ways, but existing regulations are insufficient to guard against the rapidly expanding risks."

The paper is available for download here and on the campaign page at https://aisnitches.org/#workingpaper

Of the working paper, Matt Lane (he/him), Senior Policy Counsel at Fight for the Future, said, "People are facing a wave of AI products being pushed on them by tech companies trying to find a way to profit off of a new technology that has been expensive to develop. In the case of AI agents, that's opened up consumers to new privacy and security problems that have not been explained to them. AI agents need your data to work, which means they are reading your messages and analyzing your life. They are also connected to the Internet, where they could be leaking your information or exposed to prompt injection attacks. We wrote this report to explain the stakes, show how the law will not protect us, and drive the conversation toward finding solutions and demanding better from tech companies."

Of the campaign, Lia Holland (they/she), Campaigns and Communications Director at Fight for the Future, said, "Just last month we saw an AI agent force Meta's Head of Safety and Alignment to manually shut the tool down because it wouldn't listen to her commands. The risks these powerful technologies pose to non-experts cannot be overstated—especially when it comes to the safety and privacy of activists, immigrants, and other communities living under the threat of surveillance. We know the Trump administration is hellbent on funneling our entire online lives into its surveillance machine. With agentic AI, Big Tech is building a tool that promises convenience but actually compromises anyone who contacts the person who installs it. Agentic AI is a new and concerning path by which bad actors could spy on, subpoena, or plain steal Signal chat logs that everybody thought were end-to-end encrypted. This cannot stand. We need to send a resounding message: the only good AI agent is a private AI agent. And in the meantime, anyone who communicates with a person the government might want to surveil must avoid this tech like the plague."