Federal judge temporarily blocks the Pentagon from branding AI firm Anthropic a supply chain risk
A federal judge issued a temporary injunction blocking the Pentagon from labeling AI firm Anthropic as a supply chain risk. The ruling also halted enforcement of President Trump’s directive ordering federal agencies to cease using Anthropic and its chatbot, Claude. Judge Rita Lin found the punitive measures taken by the Trump administration and Defense Secretary Pete Hegseth appeared arbitrary and could severely damage Anthropic. The judge questioned the administration's use of a rare military authority against an American company following contract negotiations that failed over autonomous weapons deployment. The court's decision focused on the government's actions rather than the underlying public policy debate regarding AI use in defense.
A federal judge in San Francisco temporarily blocked the Pentagon from designating AI firm Anthropic as a supply chain risk.
U.S. District Judge Rita Lin also halted enforcement of President Trump's directive barring federal agencies from using Anthropic's chatbot Claude.
Lin described the Trump administration's actions as arbitrary, capricious, and potentially crippling to Anthropic.
She criticized Defense Secretary Pete Hegseth's use of a rare military authority, typically reserved for foreign adversaries, against a U.S. company over policy disagreements.
The judge rejected the idea of branding an American firm a "potential adversary" for opposing AI use in autonomous weapons or U.S. surveillance.
The conflict arose from failed defense contract talks in which Anthropic refused to allow its AI to be used for fully autonomous weapons or domestic surveillance.
Anthropic sued, alleging unlawful retaliation; the Pentagon argued it could use Claude as it saw fit.
A separate related case remains pending before the federal appeals court in D.C.
Lin's order, issued after a Tuesday hearing, takes effect in one week and does not require the Pentagon to use Anthropic products.
It focuses solely on the government's response, not the broader AI policy debate.
Anthropic welcomed the swift ruling, affirming its merits and the company's commitment to safe AI for all Americans.
Supporters of Anthropic's suit included Microsoft, tech trade groups, workers, retired military leaders, and Catholic theologians.
The Pentagon had no immediate comment.
The Catholic Church's stance on state regulation of AI technology
The Catholic Church strongly endorses state and international regulation of artificial intelligence (AI) to safeguard human dignity, promote the common good, and mitigate risks such as bias, inequality, and threats to peace. Magisterial documents emphasize that AI, as a powerful tool, requires ethical oversight at national and global levels, including binding treaties, while insisting on human control and subsidiarity.
The Church views regulation as essential to harness AI's benefits while preventing harms like discrimination, disinformation, and dehumanization. Pope Francis repeatedly calls for proactive ethical frameworks, noting that AI's global impact demands coordinated action beyond individual nations.
> "I urge the global community of nations to work together in order to adopt a binding international treaty that regulates the development and use of artificial intelligence in its many forms." (Pope Francis, Message for the World Day of Peace 2024)
This stance is reiterated in messages for the World Day of Peace and Social Communications, stressing prevention of "harmful, discriminatory and socially unjust effects" through models of ethical regulation. Pope Francis welcomes efforts by international organizations to regulate AI for "genuine progress," contributing to a "better world and an integrally higher quality of life."
State-level regulation is affirmed, as sovereign states bear responsibility for internal oversight, complemented by multilateral agreements. The COMECE (Commission of the Bishops’ Conferences of the European Union) praises the EU Artificial Intelligence Act as a "robust" framework addressing ethical foundations, aligning with papal concerns over the "technocratic paradigm."
Regulation must be anthropocentric, ensuring AI respects the "intrinsic dignity of every man and every woman" as the "key criterion" for evaluation. Pope Francis warns that without "proper human control," AI poses a "threat to human dignity," potentially imposing "uniform anthropological, socio-economic and cultural models."
The 2025 Note Antiqua et Nova underscores evaluating AI's ends, means, and vision against human dignity and the common good, with human intelligence directing its use. Pope Leo XIV echoes this, urging frameworks that serve the common good rather than concentrating "wealth and power in the hands of a few," questioning: "What does it mean to be human in this moment of history?"
Subsidiarity guides multi-level responsibility: individuals, societies, states, and international bodies collaborate.
The Church insists on including "the poor, the powerless and others who often go unheard" in regulatory debates to foster justice and fraternity. Pope Francis advocates platforms ensuring diverse data inputs, protecting local cultures and aiding poverty alleviation.
Ethical guidelines must address deeper issues like human existence, rights, and peace, beyond mere technical rules. Regulation alone is insufficient; it requires a "heart"-centered wisdom distinguishing human authenticity from algorithmic manipulation.
Without regulation, AI risks exacerbating inequality, war, and a "throwaway culture." Earlier addresses stress the qualitative gulf between AI and distinctively human prerogatives such as conscience and moral autonomy, urging that technology be humanized.
Recent documents present the 2025 Note Antiqua et Nova as a guide for international gatherings such as the 2025 Paris AI summit, expressing the hope that such summits focus on AI's social effects.
| Key Document | Core Regulatory Recommendation | Authority Level |
|---|---|---|
| World Day of Peace Message 2024 | Binding international treaty | Papal (Francis) |
| Social Communications Message 2024 | Ethical models, international treaty | Papal (Francis) |
| Paris Summit Message 2025 | Public-interest platform, human control | Papal (Francis) |
| Antiqua et Nova 2025 | Subsidiarity, dignity-based ethics | Dicasteries (Doctrine/Culture) |
| EU AI Act Statement | Support for robust national frameworks | Episcopal (COMECE) |
The Church's stance is unequivocal: states must regulate AI ethically, prioritizing human dignity through international binding instruments and inclusive processes. This ensures technology serves fraternity, peace, and integral development, as articulated consistently from 2019 to 2025 across papal addresses, dicasterial notes, and episcopal statements.