On March 7, Caitlin Kalinowski, head of robotics and hardware at OpenAI in San Francisco, California, resigned over the company's deal with the Pentagon. In her resignation letter she cited two concerns: the surveillance of American citizens without judicial oversight, and the granting of lethal autonomy to artificial intelligence without human authorization. Both practices, she argued, deserved far more careful scrutiny.
Before her departure, Microsoft, Google, and Amazon Web Services, the platforms through which corporate and individual customers access Claude, the AI system built by San Francisco startup Anthropic, had issued cautious legal statements to their users. The statements clarified that the Pentagon's supply chain risk determination against Anthropic applied only to federal contracts and did not affect customers running non-defense workloads on their platforms.
That supply chain risk determination, a tool previously reserved for Huawei and Chinese military contractors, was being applied to a U.S.-based company for the first time, and its impact reaches far beyond Anthropic's federal contracts. During a six-month transition period, every Pentagon contractor must certify that it does not use Claude anywhere in its workflows.
The catalyst for the conflict was Anthropic's refusal to cross the two lines later named in Kalinowski's resignation letter. The government answered with one of its most potent regulatory weapons. Yet just days later, during an airstrike operation in Iran, the U.S. military used Claude for intelligence assessment and operational simulation.
On March 9, Anthropic filed two federal lawsuits, one in the Northern District of California and the other in the D.C. Circuit Court of Appeals, challenging the determination on constitutional and statutory grounds.
As Kalinowski's resignation statement and the platforms' legal notices confirm, the standoff remains unresolved.
This standoff has raised a broader question: Is corporate conscience a burden or a necessary safeguard for a healthy society?
The dispute between Anthropic and the Pentagon is not about the legality of the government's demands; neither party broke any law. The two bottom lines Anthropic drew were not legal conclusions but moral choices, and each deserves clear articulation.
The first is domestic surveillance. A citizen's location history, shopping habits, browsing patterns, and social connections can all be purchased legally from commercial data brokers. Mass surveillance once required time, money, and manpower; artificial intelligence has removed that last practical barrier. What a 20th-century authoritarian state needed a vast apparatus to accomplish, systematically tracking citizens' every movement, interaction, and behavior, can now be done with a contract and a powerful model. Anthropic's refusal rests on a simple observation: freedom depends less on what the law permits than on what the state is capable of doing. When that capability expands overnight, legal frameworks written in an earlier technological era no longer reliably protect the freedoms they were designed to safeguard.
The second is autonomous weapons: machines that can identify and kill human targets without a human decision. The objection here is not procedural. Entrusting an algorithm with the power to take a life removes the only constraints ever placed on lethal force: human judgment, human hesitation, and human accountability. When a soldier pulls the trigger, responsibility runs through a chain of command, a legal framework, and ultimately a conscience. An autonomous system answers only to its training data and objective function. Deploying such systems at scale would change not just how wars are fought but the threshold for starting them, and how much violence can occur before anyone is held responsible. Anthropic's position is that whatever the law allows, building this capability crosses a line no commercial consideration can justify.
The Pentagon's position, by contrast, is that what is legal is legal. The gap between what the law permits and what permitted behavior can lead to is precisely what Anthropic refuses to paper over with a contract.
The conflict did not begin in March. On January 3, the U.S. military used Claude during the operation to arrest Venezuelan dictator Nicolás Maduro; the model had been deployed through a partnership between Anthropic and Palantir Technologies inside the Pentagon's classified network.
After reports surfaced of Claude's use in the Venezuelan operation, Anthropic opened an investigation into the deployment, a move insiders described as the rupture in its relationship with the Pentagon. A senior government official told the Virginia-based outlet Axios that the Pentagon would reassess the cooperation: "We need to reevaluate any company that may jeopardize the success of our frontline operators."
Anthropic's review found no policy violations in the Venezuelan deployment. For the Pentagon, though, the problem was never the review's outcome; it was that the deployment had been questioned at all, no matter by whom.
In the weeks that followed, during the airstrikes in Iran, the military used Claude again. Even as the government moved to cut the product out of its contracts, it evidently remained indispensable.
OpenAI's response to the same pressure was instructive, not because it was opportunistic but because it showed how most institutions actually weigh commercial survival against ethical constraints.
Hours after Anthropic was sanctioned, OpenAI announced its own agreement with the Pentagon. Sam Altman, OpenAI's CEO, had previously said his company held the same bottom lines; a week later, those lines had changed. Altman conceded the agreement looked "rushed" and "seemingly opportunistic," and OpenAI later renegotiated it, adding prohibitions on domestic surveillance that the original had lacked.
At the Morgan Stanley Technology Conference, Altman's stance hardened. He argued that corporations that renege on their commitment to democratic processes are "harmful for society," and concluded that "government power should outweigh private enterprise." The institutional investors in the room raised no notable objection.
The view is not without merit. The power of a democratic government does derive from the people, which gives it, in a formal sense, a standing above any private entity. But Altman's principle comes with no qualifications: it says nothing about what to do when the government errs, when it is captured by interest groups, or when it acts beyond the mandate of its citizens.
Compliance, in other words, may be less a civic virtue than a commercial strategy dressed as one.
Google once made "don't be evil" a guiding principle of its corporate culture. The phrase was eventually removed from its code of conduct, quietly, with no announcement and no explanation. The change was noticed only after the fact.
For any large corporation, the most comfortable course is to outsource moral responsibility to the state and focus solely on compliance. It is a coherent strategy. But historically, the boundaries of state power have expanded not only through government decisions but through the cumulative compliance of the institutions that serve it.
Conscience, however, has always played another role in a free society: it is the first line of defense when the law lags behind the power that technology confers.
The market's response suggests that corporate conscience does not always carry a commercial penalty.
On February 28, Claude reached No. 1 in the Apple App Store, and on March 3 it topped the Google Play Store. Free-user growth has exceeded 60% since January, and daily registrations have doubled since last November. On the Saturday after Claude hit No. 1, ChatGPT uninstalls spiked 295%. Claude's servers briefly buckled under the traffic.
The institutional response is just as telling. The three major cloud platforms independently reviewed the determination's clauses and reached the same legal conclusion: the supply chain risk determination does not apply to their enterprise customers. A regulatory tool historically aimed at hostile foreign actors is now being turned on a domestic company, and the platforms' refusal to treat that company as a hostile foreign actor has become the foundation of its survival.
The protections of America's constitutional tradition mark a sharp divergence from regulatory power exercised without restraint. Those protections matter, and are worth defending, precisely because not every society has them.
In October 2020, Jack Ma, founder of the Chinese internet giant Alibaba, criticized the conservatism of Beijing's financial regulators at a conference in Shanghai. Days later, Chinese authorities halted, without warning, the initial public offering of Ant Group, Ma's financial technology affiliate. The IPO, expected to raise roughly $37 billion, would have been the largest in history. Alibaba was later fined $2.75 billion for anti-monopoly violations, and over the following years its market value shrank by hundreds of billions of dollars. The machinery underneath was the ordinary operation of regulatory agencies; Ma's speech was merely the trigger.
Unlike Beijing's, the U.S. system is built to self-correct. American courts remain independent, the determination can be contested, and the public and the cloud platforms retain the right to refuse cooperation.
In an 1887 letter to Bishop Creighton, the British historian Lord Acton identified the root of the problem: "Power tends to corrupt, and absolute power corrupts absolutely."
Acton's point, simple yet profound, is that institutions stay healthy only when those who hold power are judged by the same moral standards as everyone else.
The real lesson is not to presume that governments act in bad faith but to deny them the presumption of good faith, the favorable presumption that, as Acton noted, historians had wrongly extended to popes and kings. The American constitutional tradition embodies this skepticism by design, in the separation of powers, the Bill of Rights, and an independent judiciary.
In a society where private institutions outsource conscience entirely to the state, no company can say "no" at a cost it can bear. Such a society is not made stronger by the smoother machinery; it is made more fragile.
Altman is right that governments typically hold more power than private companies. But that observation does not settle whether governments should also be the sole guardians of conscience in their dealings with those companies. The American tradition has always given a different answer.
What Anthropic has done is hold a position on the public's behalf, a position the public itself has no direct means of holding.
In that sense, the company is not just defending its own bottom lines; it is setting a precedent for every other institution that may one day face the same demand. Whether they will be able to say "no" depends on whether this refusal survives.
The surveillance architecture the Pentagon is trying to build will not, of course, vanish because of one refusal. But Anthropic's decision produces something more fundamental, and perhaps more important: a public record.
Boundaries are drawn, justifications are listed, and costs are borne. Others—a resigning executive, protesting engineers, millions of users—show that conscience rarely fights alone.
At the broadest level, the principle may be simpler still: a healthy society depends on individuals, corporations, and institutions retaining the ability to hold their bottom lines when conscience demands it.
