College students launch ‘Acutis AI’ to bring Catholic teaching to artificial intelligence
College students Peter and Thomas Cooney developed Acutis AI, a search tool designed to provide responses aligned with Catholic morality. The platform aims to address concerns regarding the neutrality of mainstream AI models on moral issues like abortion. Acutis AI includes parental control features such as chat monitoring, time limits, and alerts for concerning topics. The creators intend to mitigate the risk of user dependency and the addictive nature of current AI chatbots, particularly for younger users.
Two Catholic college students have launched an AI platform, “Acutis AI,” designed to deliver answers on faith and morality aligned with Catholic teaching, while also offering parental controls for monitoring children’s use of AI tools.
“Acutis AI” is a new AI platform built by brothers Peter Cooney (21) and Thomas Cooney (19).
They are students at the University of Dallas and Baylor University, respectively.
The developers say the platform is intended to function as a search tool shaped by Catholic morality, with responses they describe as more aligned with Church teaching.
They also frame the project as a way to encourage responsible AI use, especially among young people.
In an interview, Peter Cooney said that many existing AI platforms share two key issues: moral answers are treated as neutral, and the systems can contribute to user dependence, particularly for young people.
Cooney described testing mainstream AI by asking it whether abortion is acceptable and whether it could affirm a decision to obtain one.
He said the AI responded in a way that would be contrary to Church teaching, and he argued this reflects a mismatch between “neutral” responses and Catholic moral doctrine.
Cooney said Acutis AI is grounded in Church sources: materials such as the Catechism of the Catholic Church, papal encyclicals, and the “Summa Theologica” are loaded into the platform.
According to the developers, the platform answers questions of faith and morals only from those sources.
For questions outside those areas, the platform is allowed to conduct broader web searches.
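The routing behavior the developers describe can be sketched as follows. This is a minimal illustration, not the platform’s actual code: the keyword classifier, the sample source corpus, and the `web_search` stub are all assumptions.

```python
import re

# Illustrative topical vocabulary -- a real system would use a trained classifier.
FAITH_KEYWORDS = {"abortion", "sin", "sacrament", "eucharist", "morality", "prayer"}

# Stand-in for the uploaded Church sources (Catechism, encyclicals, "Summa Theologica").
CHURCH_SOURCES = {
    "eucharist": "See the Catechism of the Catholic Church, Part Two, on the Eucharist.",
}

def is_faith_or_morals(question: str) -> bool:
    # Crude keyword check for whether the question concerns faith or morals.
    words = set(re.findall(r"[a-z]+", question.lower()))
    return bool(words & FAITH_KEYWORDS)

def web_search(query: str) -> str:
    # Placeholder for the broader web search allowed on non-moral topics.
    return f"[web results for: {query}]"

def answer(question: str) -> str:
    if is_faith_or_morals(question):
        # Faith-and-morals questions are answered only from the loaded sources.
        for topic, passage in CHURCH_SOURCES.items():
            if topic in question.lower():
                return passage
        return "No answer found in the loaded Church sources."
    return web_search(question)
```

The key design point is the hard split: the model is never asked to improvise a moral answer, it either quotes the curated corpus or declines, and only non-moral questions fall through to open search.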
The platform includes features for parents, including the ability to monitor children’s chats, set time limits, and receive alerts if “concerning topics” are detected.
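The three parental-control features just listed could be modeled like this. A minimal sketch, assuming a keyword-based alert list and a simple per-day time budget; none of these names come from the platform itself.

```python
from dataclasses import dataclass, field

# Placeholder keyword list; real detection would need to be far more careful.
CONCERNING_TOPICS = {"self-harm", "violence"}

@dataclass
class ParentalControls:
    daily_limit_minutes: int = 60                     # parent-configured time limit
    minutes_used: int = 0
    transcript: list = field(default_factory=list)    # chats visible to parents
    alerts: list = field(default_factory=list)        # notifications for parents

    def allow_session(self) -> bool:
        # Block further chatting once the daily limit is reached.
        return self.minutes_used < self.daily_limit_minutes

    def log_message(self, text: str) -> None:
        # Record every message for parental review and raise an alert
        # whenever a concerning topic appears.
        self.transcript.append(text)
        for topic in CONCERNING_TOPICS:
            if topic in text.lower():
                self.alerts.append(f"alert: message mentions '{topic}'")
```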
Cooney also argued that AI companions and chatbots can be particularly attractive to lonely or isolated teenagers because they can sound human-like and “affirming,” which he said can increase the risk of prolonged dependence.
Cooney said he does not believe the solution is to ban AI outright, arguing it can be valuable if used correctly.
He said Acutis AI should be used as an aid for automation rather than as a replacement for critical thinking.
The developers also emphasized that AI should not replace face-to-face human relationships, and Cooney suggested it could help students study through quizzing or creating study guides.
Cooney said the platform is named after St. Carlo Acutis, describing him as an example of using technology to serve God.
He said Acutis used technology to spread devotion related to the Eucharist and bring people closer to Christ, which he said they aim to emulate through the platform.
How Catholic moral teaching can be integrated into AI systems
Catholic moral teaching can be integrated into AI systems by treating AI not as a moral agent, but as a powerful tool whose design and use must be ordered to human dignity, the common good, and responsible human control—with special attention to how algorithms can distort truth, amplify injustice, and erode authentic human relationships.
Catholic teaching begins with a view of the human person: intelligence is a gift linked to being made “in the image of God,” and human technological work is a form of stewardship of creation. This matters for AI because it implies that building AI is never morally neutral “engineering”; it must be governed by reason informed by faith—especially about what it means for a person to flourish.
From that starting point, Catholic moral reflection yields several design-level consequences:
AI integration is not mainly about adding a “values checkbox” to the model; it is about ensuring that the human persons and institutions using AI can be morally responsible for its effects and can direct it toward morally worthy purposes.
Catholic social teaching offers durable principles—especially dignity of the human person and the common good—that can be operationalized into AI governance and product constraints. In more detail:
Multiple magisterial statements insist that the inherent dignity of every person must be “firmly placed at the centre” of AI reflection and action. Pope Francis also warns about risks of “technologizing” rather than “technology humanized,” and stresses that devices simulating human capacities are qualitatively distant from human prerogatives of knowledge and action.
AI integration implication: implement “dignity checks,” such as assessing whether a system treats persons as ends rather than as data points, and reviewing features that merely simulate human capacities of knowledge and action.
The common good is “the sum total of social conditions” enabling persons (individually and in community) to reach fulfillment more fully and easily. It is connected to protecting human rights and meeting basic responsibilities—life, decent necessities, education, work, and rights of conscience.
AI integration implication: evaluate AI systems not only for efficiency or accuracy but for whether they protect life and access to basic necessities, widen opportunities for education and work, and respect rights of conscience.
Pope Francis explicitly connects AI to the risk of “greater injustice between advanced and developing nations or between dominant and oppressed social classes,” and to the temptation of preferring a “throwaway culture” to a “culture of encounter.”
Benedict XVI highlights how Catholic social teaching interweaves solidarity and subsidiarity within the horizon of the common good. This is directly relevant to AI integration: decisions about AI should not become centralized in ways that override local communities, while solidarity requires that harms and benefits are shared with the whole human family.
AI integration implication: build multi-level governance in which local communities retain real decision-making authority (subsidiarity) while the benefits and harms of AI are shared with the whole human family (solidarity).
A distinctive Catholic thread across the sources is that AI must preserve space for proper human control and must not be treated as an unquestionable authority.
At the G7, Pope Francis insisted that decision-making must be left to the human person, warning: taking away the ability to decide about one’s life would condemn humanity to “a future without hope.” He adds that human dignity depends on safeguarding this space for control.
AI integration implication: for high-stakes domains (justice, sentencing support, welfare decisions, employment screening), implement meaningful human review, the ability to override automated recommendations, and clear accountability of a human person for the final decision.
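The human-in-the-loop principle for high-stakes domains can be sketched as a decision gate. The domain list and the reviewer callback interface are illustrative assumptions, not a standard API.

```python
from typing import Callable

# Domains this sketch treats as high-stakes; the list is an assumption.
HIGH_STAKES_DOMAINS = {"sentencing", "welfare", "employment"}

def decide(domain: str, model_recommendation: str,
           human_review: Callable[[str], str]) -> str:
    """Return the final decision. In high-stakes domains the model output
    is only a recommendation; a human reviewer confirms or overrides it."""
    if domain in HIGH_STAKES_DOMAINS:
        # The human has the last word, per the principle of human control.
        return human_review(model_recommendation)
    # Low-stakes domains may use the model output directly.
    return model_recommendation
```

For example, `decide("employment", "reject", reviewer)` returns whatever the reviewer decides, not what the model recommended; only low-stakes domains bypass the gate.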
Pope Francis states that the “algorithm” method is “neither objective nor neutral,” because it is based on formalization in numerical terms and proceeds by operations that embed assumptions. He also warns that AI systems used for judicial decisions can implicitly incorporate prejudices through the data categories used, and that a machine cannot fully account for human development and surprising freedom.
AI integration implication: “bias management” must be more than statistical tuning. It must include scrutiny of the data categories and assumptions embedded in the system, and recognition that no model can fully account for human development and surprising freedom.
Pope Francis insists that “No machine should ever choose to take the life of a human being,” and urges reconsideration and banning of lethal autonomous weapons, beginning with greater and proper human control. Leo XIV likewise frames AI as something requiring safeguards so it does not serve accumulation of power over the common good.
AI integration implication: Catholic moral teaching supports a strong governance stance against delegating lethal agency to autonomous systems.
Catholic moral integration also concerns the information environment created by AI—especially because human communication is meant for communion and truth.
Pope Francis teaches that AI can help overcome ignorance and enable communication, but also can be a source of “cognitive pollution,” distorting reality via partially or completely false narratives believed and broadcast as true. He explicitly mentions disinformation and “deepfakes” as examples of simulation technologies that are useful in narrow fields but become perverse when they distort relationships with others and with reality.
AI integration implication: build safeguards against disinformation and synthetic media, so that AI-mediated communication serves truth and communion rather than distortion.
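One concrete safeguard consistent with this teaching is explicit disclosure: labeling generated text so it cannot pass as human communication. A minimal sketch; the footer format and the `label_output` name are assumptions.

```python
import hashlib

def label_output(text: str, model_name: str = "example-model") -> str:
    # Append an explicit AI-disclosure footer plus a short content hash,
    # so generated text is never presented as human communication and a
    # given output can later be traced back to its generation.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    return f"{text}\n---\n[AI-generated by {model_name}; content-id {digest}]"
```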
From the Catholic principles above, a realistic integration program can be structured around the AI lifecycle, from design and training through deployment and ongoing review.
Because moral evaluation depends also on circumstances and ongoing effects, moral accountability cannot stop at launch; systems must be continually assessed and corrected in light of their real-world impact.
Integrating Catholic moral teaching into AI systems means ordering the technology to the truth about the human person: AI is a powerful tool whose operation must serve human dignity and the common good, with meaningful human control and an awareness that algorithms are not morally neutral. It also requires protecting communication from distortion and domination, and ensuring that the resulting social effects do not undermine spiritual maturity, openness to others, and responsibility toward the weak.