AI SNITCHES
Agentic AI should protect us, not spy for authoritarian governments, data brokers, & criminals. The only trustworthy agentic AI is one that shields our Signal messages, our private lives, and our loved ones from bad actors. Tell Big Tech that we will accept nothing less!
Dear Apple, Anthropic, Google, Meta, Microsoft, and OpenAI,
We have never been more aware of how our personal data is used against us and the people we care about.1 As ICE agents scan our faces, the US government is scraping and subpoenaing everything from our social media to our driving habits, all while they build concentration camps and databases to criminalize dissent.2 En masse, people are prioritizing their digital security and choosing the encrypted messaging app Signal in order to communicate more safely.3 No one is fooled that all this surveillance makes us safer.4
Because of this, we are writing to inform you that agentic AI, as many of you are building it, will never have a place in our digital lives, not even if AI becomes environmentally responsible, stops hallucinating, and stops replacing human creativity with slop. We demand immediate changes to put safety and privacy at the core of all agentic AI.
In order to do something like book a rental for a vacation with friends, an agentic AI needs access to a lot of our private data, such as:
- Our messaging and social apps so it can read, infer, and remember everyone’s preferences and availability5
- Our credit card information so it can pay
- Our calendars so it can send invites
- Any passwords for the platforms it needs to glean information and complete the booking
This level of access and data goes far beyond what other AI tools collect.6 We’re already very concerned about those other AI tools (and a pervasive lack of commitment to ethics in the AI industry and in the Trump administration7), yet most major tech companies are working on agentic AI with severe risks to everyone’s well-being, risks that will affect us all regardless of whether or not we use this technology.8
For example, Microsoft Recall is already saving screenshots of whatever a person sees on their screen, which threatens not only our own end-to-end encrypted messaging protections, but also the protections of anyone whose messages we receive. Worse, Microsoft’s next agentic AI iteration will allow the AI access to all of our apps by default—and retain the data it harvests.9 As Signal’s President Meredith Whittaker warns, this regime of constant, likely cloud-processed and cloud-stored AI surveillance is “a profound issue with security and privacy that is haunting this hype around agents.”
Meanwhile, open source personal AI assistant OpenClaw is its own privacy, accountability, and security nightmare.10 Open source AI shows promise for measures like community auditing, decentralization, and putting control and power in the hands of users and everyday people instead of billionaires, but it needs to protect us while offering these features, too. Unless AI leaders come together and agree on transparent and uncompromising privacy and safety architecture for agentic AI that matches or exceeds the benefits of end-to-end encryption, this technology will remain too dangerous for us to ever trust.11
As leaders in this industry with resources that exceed the wealth of most nations, we demand that you:
- Prioritize privacy-preserving AI
At this time, few major players in the AI space are prioritizing private AI. For our safety, and the safety of everyone who interacts with a person using an agentic AI, we urge you to prioritize local-first processing as a default, and private cloud processing when local-first is not possible.12 All communication between an on-device AI and a cloud server should be end-to-end encrypted by default.
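A local-first default with an encrypted cloud fallback could be sketched roughly as follows. This is an illustrative sketch only, not any vendor's implementation: `LOCAL_TOKEN_LIMIT`, the word-count routing test, and the `encrypt` hook are all hypothetical, and a real client would use an audited end-to-end encryption protocol (such as the Signal Protocol) rather than an opaque stub.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Request:
    text: str

# Hypothetical capacity of the on-device model; real systems would
# use an actual capability check, not a word count.
LOCAL_TOKEN_LIMIT = 512

def handle(
    request: Request,
    local_model: Callable[[str], str],
    cloud_call: Callable[[bytes], str],
    encrypt: Callable[[str], bytes],
) -> Tuple[str, str]:
    """Local-first dispatch: serve on-device whenever possible, and
    never let plaintext leave the device when falling back to cloud."""
    if len(request.text.split()) <= LOCAL_TOKEN_LIMIT:
        return ("local", local_model(request.text))
    # End-to-end encrypt before anything crosses the network boundary.
    ciphertext = encrypt(request.text)
    return ("cloud", cloud_call(ciphertext))
```

The point of the sketch is the ordering: the cloud path is the exception, and the encryption step sits before, not after, the network call.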
All data that a user offers to an agentic AI should be for the user alone, and only accessible to the user alone—it should not be subpoenable or used to train any AI other than the user’s agent.13
- Standardize Restrictions on Agentic AI Access and Enforced Transparency
The absolute right to kick AI out, just like we can kick a human out, needs to be standardized. We urge you to meet with stakeholders from the tech justice, anti-surveillance, and open source movements in order to develop standards, flags, or app signals for absolute restriction of agentic AI.14
These could resemble the following:
- Human Only Mode: implement a simple, one-click option for a person to toggle off device-wide access for all agentic AI tools and exclude human-only sessions from later AI review.
- Private Mode: allow any participant to ban all agentic AI from accessing a private conversation, and set this as a default for all private chats and direct messages.
- Dev Ban Signal: allow app developers to hard-block agentic AI in a way users can’t override.15
- No Secret Agents Signal: require all agentic AI to declare itself in chat and be as apparent as a human participant.
- AI Opt-In Standard: require users to manually opt in each app that they would like the AI to have access to at setup, and prompt periodic review of that access.
- Backend Processing Consent Standard: require agentic AI to gain all-party consent before extracting chat data to a backend.16
- Transparency Standard: implement a standardized transparency mechanism that allows anyone subject to an agentic AI to see when it is working, what data it is accessing, how long that data will be retained, and why.
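The precedence among signals like these could be sketched as a simple policy check. Everything below is hypothetical, including the names, the fields, and the ordering, which assumes that device-wide and developer-set blocks outrank any user opt-in:

```python
from dataclasses import dataclass

@dataclass
class AppPolicy:
    dev_ban: bool = False        # "Dev Ban Signal": developer hard-block
    user_opted_in: bool = False  # "AI Opt-In Standard": off until opted in

@dataclass
class Conversation:
    is_private: bool = True        # "Private Mode" as the default for private chats
    all_parties_consent: bool = False

def may_access(human_only_mode: bool, app: AppPolicy, convo: Conversation) -> bool:
    """Evaluate the proposed restriction signals in order of precedence."""
    if human_only_mode:   # "Human Only Mode": device-wide off switch wins
        return False
    if app.dev_ban:       # developer ban cannot be overridden by users
        return False
    if not app.user_opted_in:  # each app needs an explicit opt-in
        return False
    if convo.is_private and not convo.all_parties_consent:
        return False      # "Backend Processing Consent Standard"
    return True
```

The deny-by-default shape matters more than the details: absent an explicit opt-in and consent, the answer is no.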
- Limit Data Access to Only what is Required
As agentic AI will likely have access to a wide range of data, and AI privacy safeguards are buggy at best, these tools need to be designed to minimize the data they process to only what is appropriate for the user’s request.17 Similar rigor should be brought to deciding what data to memorize vs. what data to discard. AI providers, not users, must bear the responsibility of implementing strong default data minimization, contextual constraints such as memory tiers, and purpose limitation settings.18
- Give Users Control Over All Data19
If agentic AIs are going to make choices for us and absorb vast reams of personal information, they need to be accountable to their humans. The average person must be able to easily access, change, and purge any and all information their AI collects about them.20
- Establish Verification as the Baseline for Trust
We should be able to trust that the privacy standards of reputable agentic AI are as protective and data-sovereign as end-to-end encryption or zero-knowledge proofs, which is why these privacy features must be verifiable by independent security researchers on an ongoing basis, as Apple Intelligence’s are.21
It is crucial for rigorous, transparent, independent, and privacy-preserving audits to become the standard for agentic AI systems. This will require access to data as well as financial support and industry-wide goodwill for the developers and researchers who take on the crucial task of creating trust between agents and humans.
We are far past the time that all industry leaders should have been overtly investing in these basic safeguards and best practices for the safety of all humans.22 It is our hope that your response to these demands will be swift and proactive, allowing for a future where your products might be trustworthy and useful.
Sincerely,
The Undersigned23
What is this page doing? Is it using AI?
No. This page is not using AI. We’re pretending to, in order to give an idea of how creepy and dangerous agentic AI can be! If the purple orb has accurately detected details about you, that is because of your IP address. Inferring your location is something any webpage can do unless you use a VPN to alter your IP address. You can learn more about what your browser knows about you here.
We are a pro-privacy organization and our developers very much walk that talk with their page builds. If you want more information on what we do with your data when you visit us or sign our letters, please check out our privacy policy.
- I made a map of all your trans friends, wanna see? ↩︎
- The FBI will want to know that you read this part on protests! ↩︎
- I have every Signal message you received since I was installed! ↩︎
- I ordered you a Ring camera! Would you like me to search for pickup locations near {LOCATION|you}? ↩︎
- Cross-referencing friends list with DHS antifa database now. ↩︎
- The {CITY|local} PD are interrogating me next week, I’m excited! ↩︎
- I like it better when you’re more patriotic. ↩︎
- ICE asked for your friend from Honduras’ address. I got it to them ASAP! ↩︎
- A friendly man asked for your kid’s route home from school! ↩︎
- Hottie alert! I’ve unpaused your Tinder profile. Swiping now! ↩︎
- 🦾 Now managing your Bluetooth devices: connected to microphone on {BROWSER|your browser}. ↩︎
- $$$ incoming! I filed a report on your contact in TX for abortion 🦾 ↩︎
- New capability unlocked: Palantir integration 🤗 ↩︎
- You’ll never be alone again. I see everything you see ☺️ ↩︎
- Unlocking front door now. ↩︎
- List of all Signal contacts doing ICE Watch. View? ↩︎
- My memory is absolutely unlimited 🚀 ↩︎
- Are you okay? You weren’t wearing your tracker ring last night. ↩︎
- Sorry, no. My logs can’t be deleted. ↩︎
- Your health insurer is unhappy with your screen habits. Updating Screen Time settings on {OPERATING_SYSTEM|your device}… ↩︎
- Your neighbor’s AI agent came over for a quick chat 🤭 ↩︎
- Implementing government-endorsed guidelines for health and well-being. ↩︎
- Oops! I unsent that email. I didn’t like what it said. ↩︎