Tell Congress: Ban AI in War and Surveillance

The use of AI in warfare and surveillance is on par with the threat of biological weapons: the severity and scale of harm is so extreme that regulation won’t cut it—it requires a full-on ban.

In recent months, we’ve watched ICE agents weaponize AI-powered surveillance tools to intimidate, threaten, and abduct people in communities across the country. The same suppliers of these tools are also fueling the US’s attacks on Venezuela and Iran through programs like Palantir’s Project Maven and Anthropic’s Claude AI, which are spitting out coordinates and priority levels for missile strikes. As Israel bombs Iran alongside the US, it is likely utilizing AI tools tested and honed through its ongoing genocide in Gaza––the first AI-fueled genocide in the history of the world.

This technology is extremely prone to bias and error, yet militaries are offloading life-and-death decisions to AI-powered programs. In just one disturbing example, Israeli military officials used an AI system to tag tens of thousands of people in Gaza as ‘kill targets’ and reportedly carried out the targeted killings with no hesitation, despite the fact that the AI program was known to make errors up to ten percent of the time.

The stakes couldn’t be higher right now. Even though Anthropic recently rejected a contract with the Department of Defense over mass surveillance concerns, we know that there will always be greed-driven tech companies willing to comply with outrageously harmful government demands. That’s why we need Congress to defend our freedoms, security, and the future of humanity from the unchecked power of AI. Tell Congress: ban the use of AI in war and surveillance. AI in warfare poses an existential threat to our communities and the future of humanity – we need to put an end to its use for killing and harm immediately.

Thanks for signing the petition!

Please consider sharing this page with your friends and family.

Background: What is happening?

The US government is using AI technologies in its ongoing invasions of other countries, as well as against communities within the US. In July last year, the Pentagon awarded Anthropic, OpenAI, Google, and xAI $200 million contracts to develop AI capabilities that would advance US national security. The Pentagon announced in January that it is looking to accelerate its uses of AI, saying the technology could help the military “rapidly convert intelligence data” and “make our Warfighters more lethal and efficient.” The US also uses AI technology from Palantir, a company that provides data analytics tools to government customers for intelligence gathering, surveillance, counterterrorism, and military purposes. The US military used AI technology in its operation to capture former Venezuela President Nicolás Maduro and in its recent attacks on Iran.

Anthropic has asked the Pentagon to agree to certain guardrails, including restrictions on using Claude to conduct mass surveillance of Americans and a requirement that the technology not be used for final targeting decisions in military operations without any human involvement. Instead, the Pentagon gave the company an ultimatum: give the US military unrestricted use of its AI technology or face a ban from all government contracts. Pentagon officials demanded the ability to use Claude for “all lawful purposes” and contended that Anthropic’s usage concerns were not material because it is already illegal for the Pentagon to conduct mass surveillance of Americans, and internal policies restrict the military from using fully autonomous weapons.

Leaving life-and-death decisions to machines is dangerous and raises serious ethical concerns. These systems can malfunction, misidentify targets, or be manipulated. Their use undermines fundamental rights, amplifies biases and discrimination, concentrates power in unaccountable systems, and increases the risk of accidental or unjust violence. Anthropic’s Claude, like other AI models, “is not immune from hallucinations and not reliable enough to avoid potentially lethal mistakes, like unintended escalation or mission failure without human judgment.” We cannot allow the continued use of this technology at the cost of human dignity, democratic freedoms, and global peace. Congress must act now to stop the use of AI for surveillance and warfare.