A quiet but profound constitutional crisis erupted this week, one that pits the raw power of the state against the ethical boundaries of a private corporation. At its heart is Anthropic, the San Francisco-based creator of the Claude AI models, which has found itself labeled a "national security risk"—not for failing to build a powerful tool, but for refusing to let it be used for mass surveillance and autonomous killing.
The Price of a "Red Line"
In late February 2026, negotiations over a $200 million defense contract collapsed. Anthropic CEO Dario Amodei insisted on two "red lines": no use of Claude for domestic mass surveillance and no deployment in lethal autonomous weapons systems (LAWS). The Pentagon’s response was swift. Defense Secretary Pete Hegseth designated the company a "supply chain risk," a label traditionally reserved for foreign adversaries like Huawei. Hegseth characterized the company's refusal as "sanctimonious" and an attempt to "strong-arm" the U.S. military.
An Unlikely Alliance: Faith Meets Tech
While Silicon Valley often views ethics as a PR hurdle, a group of fourteen Catholic moral theologians saw it as a battle for human dignity. On March 13, they filed a "friend of the court" (amicus curiae) brief supporting Anthropic’s right to set moral boundaries.
Drawing on centuries of Just War theory and the 2025 Vatican document Antiqua et Nova, scholars including Charles Camosy and Brian Patrick Green argued that "deadly actions in war require human beings to be the ones morally responsible." They contend that offloading the decision to take a life to an algorithm—a "machine that knows nothing of concrete daily existence"—is a fundamental abdication of our humanity.
The Sovereignty of Ethics
This dispute is a landmark in the history of ideas. It asks whether a corporation can have a "conscience" that the state is bound to respect. While competitors have reportedly agreed to "all lawful uses" of their AI, Anthropic is gambling its survival on the belief that not every technically feasible act is a moral one.
As federal agencies begin a six-month countdown to purge Anthropic’s technology from their systems, the lawsuit moves toward the courts. Regardless of the legal outcome, the conversation has shifted. It is no longer just about what AI can do, but what we, as a civilization, allow it to do.
Sources: EWTN News, National Catholic Register, CBS News, March 18, 2026.