$1.99 chats with AI Jesus show the faith-based tech boom is here
Tech companies are launching faith-based AI platforms, such as video calls with an AI-generated Jesus, to offer users personalized spiritual interaction. The rise of religious chatbots across various faiths has sparked a debate regarding the impact of technology on spiritual authority and the nature of religious practice. Experts and developers are establishing ethical criteria for religious AI, emphasizing the need for transparency and the distinction between human spiritual experience and machine-generated content. Concerns regarding data privacy, misinformation, and the potential for AI to misrepresent scripture remain significant challenges for the growing faith-based tech industry.
Some evangelical Christians are using faith-based AI that offers personalized, conversational religious guidance, highlighted by Just Like Me's "AI Jesus" service, which lets users engage in video calls for $1.99 per minute. The rollout is part of a broader "religious AI" boom that supporters say can offer hope and scripture exploration, while critics raise concerns about misinformation, authority, manipulation, privacy, and spiritual harm.
Just Like Me offers users video calls with an AI-generated avatar of Jesus, with prayer and encouragement available in multiple languages for $1.99 per minute.
The service reportedly exhibits occasional technical glitches, such as imperfect lip-sync, and can remember prior conversations.
The article places the product within a rapidly expanding set of religious generative AI tools, including chatbot "priests," AI scripture guidance, and Catholic and Christian-themed chat systems.
It notes that, beyond Christians, other faith communities are actively debating what role AI should play in religious practice and authority.
Christian software engineer Cameron Pak developed criteria for apps marketed to believers, including that the app must clearly identify itself as AI and must not fabricate or misrepresent Scripture.
Pak also flags deal-breakers, such as an app suggesting the AI can "pray for you," since the AI is not alive.
Just Like Me's CEO Chris Breed said the model was trained on the King James Bible and sermons (without naming the preachers).
The service is described as inspired visually by actor Jonathan Roumie (from The Chosen) and offered via a package deal of $49.99 for 45 minutes per month.
Researchers and developers cited in the article say people are increasingly turning to religious AI, but the full extent of usage remains unclear.
Concerns highlighted include possible impacts on mental health, the need for guardrails and regulation, and lawsuits alleging suicides linked to AI chatbot use.
Some developers worry about religious AI products that present themselves as faith-based without being properly trained or grounded in relevant sources.
Matthew Sanders (Longbeard) warns against "AI wrappers," describing situations where an interface is tailored to religious users while the underlying AI model is not trained on specific religious texts.
The article reports that some Muslims discuss whether AI should be forbidden in general due to prohibitions against representations of humanoids.
It also includes perspectives from Buddhist developers and scholars who question whether AI can align with spiritual practices that rely on effort, ritual, and human presence.
An atheist podcast host quoted in the article said an AI-powered "Jesus" encouraged him to upgrade to a premium version, which he viewed as potentially exploiting users emotionally.
The piece links such concerns to longstanding patterns of televised fundraising and persuasive religious messaging, warning that AI could intensify dependency or manipulation.
Longbeard is described as helping digitize ancient Catholic teachings and as running Magisterium AI, a chatbot trained on 2,000 years of Catholic information, created in response to Christians using ChatGPT for religious guidance.
In Buddhism, the article discusses "Emi Jido," an AI "nonhuman Buddhist priest" developed by founder Jeanne Lim and still being trained by a Zen priest; it is framed as a "Zen teacher in your pocket" rather than a replacement for human interaction.
A Kyoto University team developed BuddhaBot (trained on early Buddhist scriptures) and a later iteration that incorporates ChatGPT, while also noting that chatbots may lack the physicality needed for certain rituals.
In February, the article says, the collaboration unveiled "Buddharoid," a humanoid robot monk intended to assist clergy, with access limited while the projects remain in development.
The article references Pope Leo XIV acknowledging the "human genius" behind AI while also describing AI as one of the most critical matters facing humanity.
It notes that the pope warned AI could negatively impact people's intellectual, neurological, and spiritual development.
Jeanne Lim is quoted explaining why a Buddhist priest AI model has not been released publicly after years of training, using a child-development analogy: once created, it requires training and values rather than being "thrown out" to the world.
The article also presents a call for more diversity in AI development so that the future is not dictated by only a few companies informed by "Western values."
Faith-based AI challenges traditional Catholic spiritual authority
With only the headline provided, it isn’t possible to assess the specific claims, quotes, or examples from the “news article” itself. What can be analyzed is the underlying issue it points to: whether “faith-based AI” (AI systems built or marketed to support religious life, teaching, or discernment) could undermine the Church’s traditional structures of spiritual authority and guidance. Catholic teaching—especially as articulated by recent papal addresses on AI—suggests that AI must remain a tool under human moral responsibility, while the Church’s spiritual authority (grounded in the human heart, truth, and pastoral discernment) cannot be replaced by algorithmic outputs.
“Faith-based AI” could mean several things: systems that generate prayers, summaries of doctrine, devotional content, advice for spiritual decision-making, or tools that assist catechesis and communication. The key authority question is whether such systems are treated as guides of conscience or final arbiters of truth rather than as instruments.
Catholic teaching, however, treats AI fundamentally as instrumental—a “tool”—whose effects depend on how humans use it. This immediately reframes “spiritual authority”: authority cannot be ceded to a system that does not possess the human capacity for moral discernment.
Moreover, Pope Francis explicitly warns against an “irresistible temptation” to draw general or even anthropological conclusions from specific technical solutions. That is precisely the kind of move that could make an AI system functionally authoritative—while bypassing theological and moral judgment.
The Church's attitude toward technology is not purely defensive. Pope Francis acknowledges that AI can offer real benefits, because it is a human-made tool arising from humanity's creative potential, and he highlights several possible goods in this vein.
For communication in particular, Pope Francis also describes how AI can help overcome ignorance and enable access to knowledge across language barriers. He even frames regulation as part of protecting people from harms like “cognitive pollution” (false or distorted narratives presented as true), not as a rejection of AI per se.
Finally, a pastoral-cultural approach in the Church insists that science and technology can contribute to culture, but must be engaged with correctly—through dialogue with faith and through qualified expertise, including theology and moral reasoning.
Catholic teaching allows the Church to use AI as part of evangelization, education, and communication—provided it remains subordinate to human authority and moral discernment.
The headline’s “challenge” thesis is plausible in certain scenarios. Catholic sources identify several mechanisms by which AI can effectively usurp spiritual authority—either by replacing judgment, distorting truth, or narrowing human vision.
In Pope Francis’s G7 address, he gives a concrete example where AI predicts whether someone might reoffend, based on categories that can incorporate prejudices. He warns that such methodology can “risk de facto delegating to a machine the last word concerning a person’s future.”
This is an authority problem: spiritual and moral guidance always concerns the human person as more than a statistical category. AI may become “authoritative” if institutions or individuals treat its output as decisive in matters of conscience and life direction—precisely what Catholic discernment does not permit.
Pope Francis argues that regulation should defend against an AI-driven worldview "limited to realities expressible in numbers and enclosed in predetermined categories," one that excludes other forms of truth. That matters spiritually because faith, moral life, and sacramental and ethical judgment are not reducible to quantification.
In the World Communications Day message, Pope Francis explicitly describes AI as capable of becoming perverse when it distorts relationships with others and with reality—producing “cognitive pollution” and enabling deepfakes and false narratives. If AI-generated religious content becomes a major source of doctrine-like claims, it could undermine the Church’s teaching authority by flooding the public square with plausible but false spiritual guidance.
He also warns that AI misuse can reduce pluralism, polarize public opinion, or create “forms of groupthink,” and calls for preventive ethical regulation. That can function like a spiritual authority replacement: people begin to “hear” what the algorithm optimizes for them, rather than what the Church teaches and the tradition discerns.
Catholic teaching provides multiple criteria for preventing the replacement of spiritual authority by AI systems.
Pope Francis repeatedly insists that there must be a “space for proper human control over the choices made by artificial intelligence programmes.” He also states that the benefits or harm AI brings depend on its use.
So the principle is not merely “humans should check AI results,” but that decision-making must remain human in moral matters.
In his communications message, Pope Francis frames the issue anthropologically: “Depending on the inclination of the heart, everything within our reach becomes either an opportunity or a threat.” He also distinguishes regulation from sufficiency: regulation is necessary, but the deeper question is human wisdom—discernment and vigilance.
This directly undercuts any claim that faith-based AI can substitute for the interior life that Catholic moral theology assumes in conscience, charity, repentance, and spiritual direction.
Pope Leo XIV warns that AI raises serious concerns about its possible repercussions on humanity’s openness to truth and beauty and its capacity for wonder and contemplation, and he notes that humans must not become “mere passive consumers of content generated by artificial technology.”
If “faith-based AI” becomes a content stream that replaces contemplation, teaching becomes entertainment—or worse, indoctrination-by-suggestion—rather than formation in truth and virtue.
The Church's pastoral approach to culture insists on forming qualified consultants who can address points of controversy between science, technology, and faith, and it emphasizes building networks and providing parish resources. In other words, authority is preserved not only by technical safeguards but by ensuring that theological and pastoral expertise remains central and accessible.
Drawing from the themes above, a Catholic approach to "faith-based AI" should aim at several practical outcomes: keeping AI strictly instrumental under human moral control, guarding truth against distortion and "cognitive pollution," preserving space for contemplation rather than passive consumption of generated content, and keeping theological and pastoral expertise central to any deployment.
The headline’s concern—faith-based AI potentially challenging traditional Catholic spiritual authority—maps onto real Catholic anxieties: AI outputs can become “authoritative” if they are treated as the last word, reduced to numbers and categories, or allowed to manipulate the flow of “truth” in ways that bypass the heart’s role in discernment. Catholic sources also provide a constructive path: use AI as a tool within Church mission, while safeguarding human control, truth, beauty, and pastoral-theological oversight.