New CEO of Catholic AI group hopes members will be Pope Leo’s ‘soldiers’
Vincent Higgins has been appointed as the new CEO of the Builders AI Forum (BAIF), an organization focused on integrating Catholic moral frameworks into artificial intelligence development. BAIF aims to influence the trajectory of AI technology to better serve humanity, following its inaugural conference held at the Vatican in November 2025. The organization is positioning itself to support the mission of Pope Leo XIV, who is rumored to be preparing an encyclical specifically addressing the topic of artificial intelligence. Higgins identified education and software development as key sectors currently facing significant disruption due to the rapid advancement of AI tools.
Does the Catholic Church’s moral framework adequately guide AI development?
The Church’s moral framework does offer an adequate compass for AI development—because it grounds AI use and design in human dignity, the common good, truth and moral responsibility, and it insists on proper human control and protection of the vulnerable. However, it is not a substitute for technical standards or engineering governance; rather, it supplies ethical criteria and formation requirements that must be prudently translated into concrete policies, oversight mechanisms, and design decisions.
Pope Leo XIV frames AI development as something that must be judged by whether it truly serves the common good rather than concentrating wealth and power in the hands of a few. This is a direct moral standard for evaluating incentives, markets, and deployment strategies.
Similarly, Pope Francis insists that positive outcomes in AI (including peace) are not automatic; development must be paired with respect for fundamental human values—inclusion, transparency, security, equity, privacy, and reliability—and oriented toward the integral development of persons and communities.
Both Pope Francis and Pope Leo XIV emphasize that AI is essentially a tool whose benefits or harms depend on its use, meaning the moral subject is the human person and the institutions that govern development and deployment.
This matters because the Church warns against treating AI outputs as if they carried the certainty, universality, or moral authority of human judgment. Pope Francis gives a concrete warning: “it is a frequent and serious mistake to forget that artificial intelligence is not another human being, and that it cannot propose general principles.”
The Church repeatedly places inviolable human dignity at the center of reflection and action. Pope Francis also highlights the distinction between algorithms and the “heart,” where the heart—understood as the seat of deepest authentic sentiments—cannot deceive, unlike algorithmic manipulation.
A key feature of the Church’s moral guidance is that it does not stay at the level of abstract principles; it calls for its ethical criteria to be put into concrete practice.
This is important for “adequacy,” because it implies that ethical guidance must be operationalized through governance, oversight, and accountability—not merely personal conscience.
Pope Francis explicitly argues that algorithms are not neutral or value-free. He notes that the method of calculation (the “algorithm”) is neither objective nor neutral, and that AI can only examine realities that have been formalized numerically. Moral evaluation therefore cannot be bypassed by claiming the system is “just math.”
Pope Francis provides an example involving decision support for judges: AI may predict recidivism using categories that include sensitive or prejudicial inputs (e.g., ethnic origin or prior minor offenses), and this can effectively delegate “the last word” about a person’s future to the machine—while also incorporating prejudices embedded in the data categories.
The practical lesson for Catholic moral governance: the Church’s framework directly targets how AI is trained, what data categories are permitted, and what authority humans relinquish.
In the Church’s reasoning, “human control” is not optional. Pope Francis stresses that decision-making must always be left to the human person, warning that removing people’s ability to make decisions would condemn humanity to a future without hope. Pope Leo XIV echoes the same theme, calling for “proper human control over the choices made by artificial intelligence programmes.”
The Church highlights that minors are particularly vulnerable to manipulation via AI algorithms that can influence their decisions and preferences. Parents and educators must be aware of these dynamics, and tools should be developed to monitor and guide minors’ interactions with technology.
This again shows that the framework is not generic—it addresses concrete foreseeable harms.
Catholic moral teaching typically gives principles (ends and criteria) and then expects prudence in applying them to new technological circumstances. The sources cited here reflect exactly that two-level structure.
So, the Church’s moral framework is adequate insofar as it provides the moral “architecture” needed to judge AI’s direction and authority. But it remains incomplete as a technical manual—and that is not a defect, because the Church does not claim to replace engineers, legal experts, or policymakers; it insists they translate moral criteria into institutional practices.
The Church repeatedly calls for regulatory frameworks and ethical oversight, which means adequacy depends on practical governance design.
The sources strongly support the need for these mechanisms, but they don’t provide the full technical specifications for them.
So does the framework adequately guide AI development? Yes, at the level of moral direction and criteria. The Church provides a coherent ethical framework that judges AI by whether it protects human dignity, sustains authentic human flourishing, preserves human decision-making, and serves the common good through responsible governance, especially toward the vulnerable.
Not fully, if “adequately guide” is taken to mean “fully specify the engineering and regulatory details.” The framework supplies values and governance imperatives, but it must be prudently implemented through technical standards, institutional oversight, and policy design.
In short: the Church offers a complete moral compass, while leaving the practical engineering route to specialized expertise—so long as that route remains faithful to the moral compass.