Catholic moral theologians and ethicists have filed an amicus brief supporting AI company Anthropic in its lawsuit against the U.S. Department of War. The scholars affirm Anthropic's decision to maintain guardrails on its AI technology, refusing its use for autonomous weapons and mass surveillance, as an act of responsible corporate citizenship. Anthropic sued the Pentagon after the President directed government agencies to stop working with the company over disagreements about acceptable uses of its technology by the War Department. Fourteen scholars specializing in Catholic moral theology, philosophy, and social thought contributed to the brief, offering a perspective grounded in the Church's moral tradition.
Anthropic filed a lawsuit against the U.S. Department of War on March 9, 2026, following President Donald Trump's February 27 directive for government agencies to cease working with the AI company.[1]
The conflict stems from Anthropic's refusal to allow its technology to be used for autonomous weapons or for mass surveillance of U.S. citizens.[1]
On March 13, 2026, 14 Catholic moral theologians and ethicists submitted an amicus curiae brief supporting Anthropic.[1]
Authored primarily by Charles Camosy, Joseph Vukov, Brian J.A. Boyd, and Brian Patrick Green, the brief praises Anthropic as a "responsible and moral corporate citizen."[1]
It draws on Catholic moral tradition, including the Catechism, papal encyclicals, and just war theory, to address AI ethics.[1]
The scholars align with Anthropic against AI-driven mass surveillance, citing Catholic teachings on privacy and human dignity.[1]
They cite the Catechism's teaching that no one is bound to reveal the truth to someone who does not have the right to know it, and Pope Francis's 2023 call for an international AI treaty to counter a "surveillance society."[1]
The principle of subsidiarity, articulated in Pius XI's Quadragesimo Anno, opposes the centralization of monitoring power, which erodes local agency and risks totalitarianism.[1]
The brief argues that AI-directed autonomous weapons violate jus in bello principles, which require human judgment on proportionality and noncombatant immunity.[1]
Such systems obscure human responsibility, accelerate decisions, and bypass the prudential discernment essential for life-and-death matters.[1]
This stance applies universally, regardless of current technological limits.[1]
Anthropic opposes providing AI for autonomous weapons on reliability grounds, as stated by CEO Dario Amodei on February 26.[1]
The Catholic scholars take a firmer view, rejecting lethal autonomous weapons even if they were proven perfectly reliable.[1]
Do Catholic ethicists justify AI safeguards against autonomous weapons?
Yes, Catholic magisterial teaching, which guides Catholic ethicists, strongly justifies—and indeed mandates—robust safeguards against autonomous weapons, particularly Lethal Autonomous Weapon Systems (LAWS). These systems, capable of identifying and striking targets without human intervention, are deemed a "cause for grave ethical concern" due to their inability to exercise moral judgment, detachment from the human reality of war, and risk of escalating conflict. Official documents emphasize the irreplaceable role of human oversight, calling for prohibitions, moratoriums, and international treaties to ensure "no machine should ever choose to take the life of a human being."
Catholic teaching identifies several profound moral flaws in AI-driven autonomous weapons:
Absence of Moral Responsibility and Judgment: Machines lack the "unique human capacity for moral judgment and ethical decision-making," rendering them incapable of moral accountability or compliance with international humanitarian law. As Pope Francis states, autonomous systems "can never be morally responsible subjects," since human ethical discernment transcends algorithms.
Detachment from War's Tragedy: Remote and autonomous systems foster a "cold and detached approach to the immense tragedy of war," lessening the perception of devastation and responsibility. This violates the just war principle of war as a last resort in self-defense.
Risks of Proliferation and Escalation: Such weapons could end up in "the wrong hands," enabling terrorism or destabilization, and spark arms races with "catastrophic consequences for human rights." They promote a "throwaway culture" over human dignity.
These concerns align with broader Catholic anthropology, where technology must serve human dignity, not replace human wisdom (phronesis).
The magisterium provides concrete, urgent recommendations:
Prohibition and Moratorium: Pope Francis urges a "reconsideration of the development of these weapons and a prohibition on their use," insisting on "ever greater and proper human control." The Holy See calls for a moratorium pending a "legal instrument that prohibits such systems from targeting humans."
International Binding Instruments: Support for negotiations by 2026 for a "legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight," with states urged to refrain from development in the interim. This includes "algor-ethics" and "ethics by design" from research inception.
Human Control as Essential: "Adequate, meaningful and consistent human oversight" is imperative; decisions affecting life must remain human. Archbishop Caccia echoes this, noting LAWS "irreversibly alter the nature of warfare, detaching it further from human agency."
Recent documents like Antiqua et Nova (2025) reaffirm these positions, building on Pope Francis's consistent interventions (2019–2025).
| Key Safeguard | Magisterial Justification | Proposed Action |
|---|---|---|
| Human Oversight | Machines cannot replicate moral judgment | Ensure "proper human control" in all systems |
| Moratorium | Prevent ethical violations and proliferation | Halt development pending treaties |
| Prohibition | No machine decides on human life | Binding international ban on LAWS |
| Ethical Design | Integrate human values from outset | Algor-ethics and global dialogue |
These safeguards reflect just war doctrine (Gaudium et Spes, par. 80) and Pope Francis's vision of AI for peace, not war—promoting fraternity over "the folly of war." Catholic ethicists, bound by this teaching, must advocate these measures to protect the vulnerable and common good.
In summary, Catholic teaching unequivocally justifies AI safeguards against autonomous weapons, viewing them as essential to preserving human dignity, moral responsibility, and peace. Ethicists are called to promote these prohibitions through ethical discernment and international advocacy.