The rise of generative AI in the 2020s has prompted calls for consistent regulatory guardrails in response to documented harms. On December 11, 2025, President Trump signed an executive order establishing a national policy framework for AI. The order advocates federal regulation that would override state rules while simultaneously stating that AI companies should be free to innovate without cumbersome regulation. The article seeks expert analysis on reconciling the need for regulation with the desire for innovation, particularly in light of Pope Leo XIV's concerns. Taylor Black, director of AI & Venture Ecosystems at Microsoft and founder of the Leonum Institute at The Catholic University of America, was consulted on the ethical development of AI and the implications of the executive order.
On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.”
It seeks to establish federal regulation superseding state laws, emphasizing that AI companies must be able to innovate without cumbersome oversight.
Pope Leo XIV has stressed ethical AI development, urging a focus on human identity and flourishing beyond mere utility.
In his November 2025 message to the Builders AI Forum, he framed AI as participation in divine creation, carrying ethical weight in every design choice.
He warned against technology divorced from human value, as noted in his first papal interview.
Taylor Black, director of AI & Venture Ecosystems at Microsoft and founder of the Leonum Institute at The Catholic University of America, sees the order as addressing the needs of a borderless technology.
A national framework offers clarity for startups amid patchwork state laws that risk stifling innovation.
However, the order criticizes state efforts such as Colorado's anti-discrimination law, an approach that risks overlooking real harms to marginalized groups.
States act as "laboratories of democracy," closer to AI's community-level impacts.
Per Black, the order creates a false dichotomy between innovation and oversight, echoing Pope Leo's call for integral human development.
The order exempts state child safety laws but raises enforcement concerns.
State attorneys general provide granular, rapid responses that federal bodies cannot match, which is vital against evolving online exploitation.
Catholic subsidiarity demands that states retain enforcement capacity, with federal support such as funding and non-preemptive mechanisms.
Big Tech claims regulation hampers innovation, but Black counters that poor oversight enables harms like extortion and discrimination.
A Catholic University conference highlighted these real-world issues arising from minimally overseen AI.
Black proposes transparency measures that give researchers meaningful access without requiring disclosure of proprietary systems.
He also calls for accountability for harms, moral formation for tech leaders, community engagement in line with the Rome Call for AI Ethics, and innovation pursued ethically.
The Church advocates technology ordered to human dignity, not Luddite resistance.
It calls for ecclesial action: building ethical infrastructure, ventures, and formation to embody Pope Leo's hopeful vision of relational AI.
This human-first approach measures policies against societal impact, ensuring no one is left behind.
Does Catholic doctrine support a unified AI regulatory framework?
Catholic doctrine affirms the urgent need for robust ethical oversight of artificial intelligence (AI), emphasizing that technological development must serve human dignity, the common good, and fraternity among peoples. While it strongly encourages international cooperation and coordinated governance frameworks to address AI's global challenges—such as bias, inequality, and threats to truth—it does not endorse a singular, top-down "unified" regulatory regime that supplants local or national initiatives. Instead, it integrates this call with the principle of subsidiarity, ensuring higher levels of authority support rather than absorb lower ones. This balanced approach, rooted in documents like Antiqua et Nova and papal messages from Popes Francis and Leo XIV, prioritizes moral discernment in AI's design, deployment, and governance.
The Church's social teaching insists that AI, as a human invention, carries profound anthropological and ethical implications requiring deliberate regulation. Pope Francis highlighted AI's dual potential for progress or peril, urging the creation of bodies to examine ethical issues and protect rights, while stressing values like inclusion, transparency, security, equity, privacy, and reliability. He warned that without such safeguards, AI could aggravate inequalities, foster discrimination, or undermine peace, demanding that its development be evaluated against the "inherent dignity of each human being and the fraternity that binds us."
Pope Leo XIV has echoed this, describing AI as "above all else a tool" whose ethical force derives from human intentions, and calling for "responsible governance" to promote justice and integral human development. In his message to the AI for Good Summit, he explicitly advocated "a coordinated local and global governance of AI, based on the shared recognition of the inherent dignity and fundamental freedoms of the human person." Similarly, the U.S. Conference of Catholic Bishops (USCCB) urged Congress to establish a "regulatory framework informed by ethical principles," centering dignity, care for the poor, and respect for truth, while avoiding transhumanism or AI replacing human judgment.
The Dicasteries for the Doctrine of the Faith and Culture and Education, in Antiqua et Nova, reinforce that AI's ends, means, and vision must respect dignity and the common good, with human intelligence directing its use under principles like subsidiarity. The Rome Call for AI Ethics further demands regulations protecting the vulnerable, ensuring digital security, and aligning AI with human rights and environmental care. These teachings collectively support regulatory structures but frame them as servant to ethical ends, not mere technical fixes.
Catholic sources repeatedly endorse global collaboration, recognizing AI's borderless impact. Pope Francis welcomed "efforts of international organizations to regulate these technologies so that they promote genuine progress," citing the need for bodies to safeguard rights amid rapid innovation. Antiqua et Nova aligns with this, quoting Gaudium et Spes on technology serving justice and fraternity.
Under Pope Leo XIV, this evolves into explicit calls for coordination: his message to the Second Annual Conference on AI stressed the Church's role in discussions affecting humanity's future, weighing AI against "integral development." To the Builders AI Forum, he praised interdisciplinary efforts for AI serving evangelization and dignity. Most directly, his AI for Good Summit address promotes "coordinated local and global governance," transcending utility for a "tranquillitas ordinis" ordered to human flourishing. The USCCB, invoking Pope Leo XIV, similarly seeks frameworks benefiting all humanity.
This is not mere aspiration; the Church views unified ethical principles—human dignity as the "key criterion"—as foundational for evaluating technologies pre-deployment. Yet "unified" here implies shared moral standards and cooperative mechanisms, not a monolithic enforcer.
While supporting coordination, Catholic doctrine firmly applies subsidiarity, cautioning against over-centralization. Antiqua et Nova invokes this principle from Catholic Social Teaching, where higher societies aid lower ones without absorbing their roles, ensuring freedom and responsibility. As Pius XI articulated in Quadragesimo Anno (echoed in the Compendium), superior associations must provide "subsidium" (support) but never destroy subordinate entities' initiative. The Catechism of the Ukrainian Catholic Church and English bishops' conference affirm this: higher governance mobilizes lower levels' energy without interference, fostering service over domination.
Applied to AI, this means global frameworks should empower local, national, and communal responses—e.g., equitable access for the poor—without preempting them. Pope Leo XIV's "coordinated local and global" phrasing embodies this hybrid: global for shared principles, local for implementation. Overly unified regulation risks the "technocratic paradigm" critiqued by Francis, where machines dictate human choices. Thus, doctrine prioritizes decentralized moral agency.
Sources show strong consensus on dignity-centered regulation with international input, from Vatican dicasteries to episcopal conferences. No divergences appear; recent papal teachings (2024-2025) build on Francis, prioritizing subsidiarity amid AI's "epochal change." On complex ethics like global vs. local balance, nuance prevails: the Church cautions against presuming AI's benevolence without oversight, yet trusts human freedom under ethical guidance.
In summary, Catholic doctrine robustly supports ethical regulatory frameworks for AI, including coordinated international efforts to uphold dignity and the common good, as seen in papal calls for global-local governance. However, it does not back a strictly "unified" (i.e., singular, supranational) framework, instead mandating subsidiarity to preserve human initiative. This fosters responsible innovation serving fraternity, urging all stakeholders—developers, policymakers, users—to discern AI's path prayerfully.
President Donald Trump signed an executive order on December 11, 2025, titled “Ensuring a National Policy Framework for Artificial Intelligence.”
It seeks to preempt state regulations, arguing they obstruct innovation and create a compliance patchwork for AI companies.
Pope Leo XIV has stressed that AI development must prioritize human identity and flourishing over mere utility.
In his message to the Builders AI Forum on November 6-7, 2025, he framed technological innovation as participation in divine creation, carrying ethical weight.
Taylor Black, director of AI & Venture Ecosystems at Microsoft and founder of the Leonum Institute at The Catholic University of America, sees potential reconciliation between the order and the papal vision.
He notes a unified national framework could provide clarity for developers but warns against losing state-level protections like Colorado's anti-discrimination law.
A national approach offers consistency, aiding startups facing a patchwork of poorly informed state rules.
However, states act as "laboratories of democracy," addressing real harms to marginalized groups more nimbly.
The order exempts state child safety laws, but Black worries about enforcement capacity.
State attorneys general have local ties and speed; he invokes subsidiarity, urging federal support for state action without delays.
Black critiques industry claims that regulation always stifles progress, citing documented harms like exploitation and discrimination.
He advocates regulations promoting transparency, accountability, moral formation, community engagement, and ethical innovation.
Catholic teaching rejects the innovation-oversight dichotomy, demanding technology serve human dignity.
The Church should build ethical AI infrastructure, echoing the Rome Call for AI Ethics and Pope Leo's call for hope-filled AI.
Does Catholic doctrine support a unified AI regulatory framework?
Catholic doctrine affirms the urgent need for ethical regulation of artificial intelligence (AI) to safeguard human dignity, the common good, and peace, while advocating coordinated international frameworks rather than a strictly centralized or supranational "unified" regime that overrides subsidiarity. Drawing from recent papal teachings and Vatican documents, the Church emphasizes multilateral agreements and global ethical standards—such as binding treaties proposed by Pope Francis—but insists these must empower local and national initiatives without absorbing them. This nuanced approach reconciles AI's borderless challenges with the principle that human responsibility flourishes best through decentralized moral agency.
At its core, Catholic teaching views AI not as neutral technology but as bearing profound anthropological weight, demanding oversight to prevent harms like inequality, bias, deception, and threats to privacy or truth. The Dicasteries for the Doctrine of the Faith and Culture and Education, in Antiqua et Nova, stress that AI must respect values including inclusion, transparency, security, equity, privacy, and reliability, quoting Pope Francis on the need for international regulation to promote genuine progress. Pope Francis warned in his 2024 World Day of Peace message that unchecked AI risks obscuring responsibility and exacerbating injustice, urging ethical guidelines rooted in human existence, rights, justice, and peace.
Pope Leo XIV builds on this, framing AI as requiring "ethical management and regulatory frameworks centered on the human person," beyond mere utility or efficiency. In his message to the AI for Good Summit, he highlights AI's inability to replicate moral discernment or relationships, calling for governance upholding dignity and freedoms to foster "tranquillitas ordinis"—the tranquility of order. The USCCB echoes these concerns across policy areas like family life, labor, healthcare, warfare, and truth, insisting on human oversight to counter biases, job displacement, and exploitation while protecting intellectual property and children. COMECE reinforces human responsibility as the "fundamental pillar," rejecting legal personality for AI and advocating debate with stakeholders, including a potential moratorium on self-aware systems.
The Church explicitly supports international cooperation given AI's global scale, viewing sovereign states' internal regulation as insufficient alone. Pope Francis called for a "binding international treaty" via multilateral agreements, not just to prevent harms but to encourage best practices and include the poor's voices. Antiqua et Nova welcomes efforts by international organizations, citing Gaudium et Spes on technology serving justice. Pope Leo XIV promotes "coordinated local and global governance," integrating ethical clarity with fraternity. COMECE aligns with papal calls for treaties and intensified private-sector dialogue.
These frameworks prioritize shared human values—dignity as the "superior ethical criterion"—for preemptive evaluation of AI. Yet "unified" implies moral and normative convergence, not a singular enforcer; Pope Leo XIV urges discernment hand-in-hand with values like conscience and responsibility.
Catholic doctrine tempers global coordination with subsidiarity, ensuring higher authorities support rather than supplant lower ones. Antiqua et Nova invokes this from Gaudium et Spes and the Catechism, where freedom makes humans moral subjects directing technology ethically. Pope Francis critiqued the "technocratic paradigm" where technical criteria obscure human responsibility, demanding human-centered approaches. Subsidiarity preserves local innovation and accountability, as states and communities address context-specific harms like surveillance or warfare.
Pope Leo XIV's "local and global" phrasing embodies this hybrid, avoiding centralization that eclipses human uniqueness. The USCCB prioritizes protections like worker training and child safety without federal monopoly. COMECE stresses human agency over fictions like robot personality.
Recent sources (2023-2025) show harmony: Francis's treaty call evolves into Leo XIV's coordinated governance, all under dignity and subsidiarity. No disagreements emerge; earlier teachings like Gaudium et Spes (1965) ground modern applications. On complexity, nuance prevails: regulation fosters fraternity but risks utility-driven errors if ignoring subsidiarity.
In conclusion, Catholic doctrine robustly endorses ethical AI regulation through international coordination and binding frameworks to serve the common good, yet firmly subordinates this to subsidiarity, rejecting any "unified" model that undermines local human initiative. This vision invites all to ethical discernment, ensuring AI amplifies rather than eclipses our God-given dignity.