TatvaLog.in

OpenAI to Amend Pentagon Deal, Barring Use of AI for Domestic Mass Surveillance

OpenAI CEO Sam Altman has announced that the company will revise its agreement with the United States Department of Defense to explicitly prohibit the use of its AI systems for mass surveillance of Americans.

In an internal memo later shared publicly on X, Altman said the contract will be amended to state clearly that OpenAI’s technology cannot be used for intentional domestic surveillance of U.S. citizens or nationals. The revised language references constitutional and federal safeguards, including the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act (FISA) of 1978.

The memo clarifies that the AI systems must not be used for deliberate tracking, monitoring or surveillance of U.S. persons, including through commercially acquired personally identifiable data. Altman also stated that intelligence agencies such as the NSA would not be able to use OpenAI’s systems without further contractual changes.

Taking a strong personal stance, Altman wrote that he would rather face jail time than comply with an order he believed to be unconstitutional.

Rushed Rollout and Political Fallout

Altman acknowledged that OpenAI moved too quickly in announcing the partnership, saying the issue was complex and required clearer communication. He explained that the company had aimed to prevent a worse outcome, but admitted the timing made the deal appear opportunistic.

The agreement followed an order from President Donald Trump directing U.S. agencies to stop using AI systems from Anthropic, maker of Claude. Anthropic had reportedly refused pressure from Defense Secretary Pete Hegseth to loosen its AI guardrails for broader “lawful” uses, including mass surveillance and fully autonomous weapons.

Anthropic publicly rejected those demands, stating that no intimidation would change its stance against domestic mass surveillance or autonomous weapons deployment. The Pentagon had also reportedly begun steps to label Anthropic a “supply chain risk,” a designation typically applied to firms seen as national security concerns.

Altman said he told U.S. officials that Anthropic should not receive such a designation and expressed hope that the company would be offered a similar agreement to the one OpenAI accepted.

Market Reaction

The controversy triggered notable shifts in the AI app market. Following the news, Anthropic’s Claude surged to the top of the App Store’s free app rankings, overtaking ChatGPT and Google Gemini. The company quickly launched a memory import feature to make switching from other AI chatbots easier.

Meanwhile, ChatGPT reportedly saw a sharp spike in uninstalls, jumping nearly 300% day-over-day, according to analytics firm Sensor Tower.

The episode highlights growing tensions between national security priorities and ethical AI development, as companies navigate government contracts while attempting to reassure users about privacy and constitutional protections.
