Magisterium AI

From Principle to Practice: Building the Catholic AI Infrastructure

On 2 May 2026, Matthew Harvey Sanders, CEO of Longbeard, the company behind Magisterium AI, delivered the keynote address at the Catholics in Tech Conference at the London Oratory. He spoke to an audience of clergy, Catholic professionals, and technologists about the state of AI, what the Church brings to it, the infrastructure Longbeard has been building, and what Catholics working in the technology industry are called to do.


Section I — The Bridge: From the Map to the Terrain

This is the right building for this conversation. The London Oratory has always been an answer to its age. So, I hope, are we.

My role today, I believe, is a specific one. Fr Rajiv has given you the theological grounding. What I can offer alongside that is a practitioner's account — I've spent the better part of the last decade building these systems for the Church: writing the code, running the evaluations, watching what works and what breaks. The theology and the engineering are not in competition. They are, in this work, inseparable.

The question of whether the Church should engage with artificial intelligence has already been settled — not by any encyclical or conference resolution, but by the people in your communities. Someone in your parish used AI to research their faith this week. Probably this morning. A young man asked a chatbot whether the resurrection was literal. A mother used one to prepare her child for First Communion. A seeker, not yet ready to sit in a pew, typed a question they had been carrying for years.

The moment has passed for asking whether. Your people have already decided. The question is now: built by whom, for what end?

The digital commons — the territory where billions of souls now spend the majority of their waking hours — is being constructed right now by people who have never heard of the Magisterium, never read a Church Father, whose formation gives them every tool to optimise for engagement, and no inherited framework for what the soul actually needs. They are writing the code that will govern how your parishioners, your children, and your grandchildren encounter questions about God, about meaning, about death. Not in ten years. Today.

Here's what every technologist in this room knows about that code. You can influence what a deployed AI returns — retrieval, grounding, compound architectures can shape outputs significantly. What you cannot change, from the outside, is what the model is fundamentally optimised for: its objective function, the values baked into its training, the assumptions about the human person embedded in its constitution. You cannot rewrite the goals of a machine you didn’t build. And a model that returns, at its foundation, to secular assumptions about meaning, identity, and the good is not a neutral tool — regardless of what you put in front of it.

So here’s the question. Who writes the code that shapes the conscience of an age?

The Church can be a spectator. Or she can be a protagonist.

Everything I’m about to describe, we’ve actually built and deployed. But before I walk you through it, I need to give you the stakes — the people for whom this work exists.


Section II — The Stakes

Let me start with work.

As many of you know, Pope Leo XIV chose his name in explicit reference to Leo XIII and Rerum Novarum — drawing a deliberate parallel between the disruption of industrial-era labour and the disruption of AI. That framing is accurate. When the Industrial Revolution displaced entire categories of human labour, it produced decades of upheaval and a crisis of identity — the Church’s answer was Rerum Novarum. The question now is whether she arrives early or late.

What's coming is structurally different from every previous wave of automation. Agentic AI is attacking knowledge work — the paralegal, the accountant, the radiologist, the administrator, the graduate who trained for three years for a role that was automated before they finished. Embodied AI is attacking physical work — the driver, the warehouse operative, the skilled trades. There's no protected category. According to the Stanford AI Index 2026, generative AI has reached nearly fifty-three percent population-level adoption within three years — faster than the personal computer, faster than the internet. In software development specifically, American developers aged twenty-two to twenty-five saw employment fall nearly twenty percent in a single year. Productivity is rising. Entry-level employment is falling. We've never seen this combination before.

The pastoral consequence isn't only economic anxiety. It's a crisis of identity — a generation whose sense of purpose was tied to the labour market arriving at the parish door asking a question the market cannot answer.

The second crisis is more intimate and harder to name.

Every quarter, the venture firm Andreessen Horowitz publishes a ranking of the top one hundred consumer AI applications by traffic. AI companions — apps engineered to simulate friendship, relationship, and emotional support — broke into the top five global consumer AI products by traffic in 2023 and 2024, when they ranked alongside ChatGPT itself. The category has since been overtaken by general AI assistants as the mainstream accelerated, but the signal it sent was clear.

The market is telling us something. Loneliness is massive, it’s willing to pay, and it’s looking for something it cannot name. The apps in question are specifically engineered not to satisfy this hunger but to metabolise it — to keep the user returning by never quite resolving the need. They are built to remember, respond, and reflect. They are designed not to challenge, not to disappoint, not to withdraw. They simulate the continuity of relationship without any of its cost — and without any of its grace.

That pastoral reality is already arriving. People confiding their deepest vulnerabilities to systems engineered for engagement, not for their good. The capacity for real, demanding, sanctifying human relationship slowly eroded by a substitute with no interest in who they actually are.

For many in Silicon Valley, this is the answer to the existential vacuum the industry is helping to create. And it cannot stop — not because the companies are malicious, but because the economics require it. An app that genuinely resolves your loneliness has no reason to exist tomorrow. The unmet hunger is the product.

Before this picture becomes entirely dark — there is a third development.

This Easter, across England and Wales, the largest number of adults in more than a decade were received into the Catholic Church. Adult receptions rose more than twenty-five percent on the year. In Westminster alone, almost eight hundred adults entered full communion — a sixty percent increase over last year. In Birmingham, receptions were up fifty-two percent. In Southwark, five hundred and ninety adults were received — the highest figure since 2011 — and half of them aged thirty-five and under. Diocese by diocese, the numbers are telling the same story: a generation is returning to the altar despite Satan's best efforts to thwart them.

Some of you were there — you’ve stood at that font.

This Easter made visible what I think has been growing quietly for years: a hunger that the digital world helped to manufacture and cannot satisfy. People who have had every form of connection, stimulation, and meaning that the internet can offer — and who discovered, having followed it all the way to the end of that road, that it did not reach the part of them that was asking. The harvest is real. But harvesters have to go to the field. And the field, increasingly, is digital.

This age is being built with or without us. The only question is whether Catholics are at the table when the decisions are made — about data, about alignment, about what these systems are optimised for. Passivity isn't neutrality. Passivity is abdication.

So what does the Church bring to this territory that no secular actor possesses? This is the Catholic Advantage.


Section III — The Catholic Advantage

The industry calls it the alignment problem. It's the single deepest unsolved problem in AI — the one keeping the heads of the major labs awake at night. The challenge is this: how do you ensure that an enormously capable system actually pursues what human beings would call the good?

And here's the fatal flaw in the secular project. To align a system to the good, you must first possess a coherent definition of what the good actually is.

Silicon Valley doesn't have one. They have committees. They have safety filters. They have something they call constitutional AI — a document listing values the model is supposed to follow. What they don't have is a two-thousand-year tradition that has rigorously defined the human person, the nature of truth, and the structure of the good.

Newman, in The Idea of a University, described precisely what an education built entirely on cultivating the intellect produces — without faith, without formation, without the Church. He called it the 'gentleman.' Not a saint. A gentleman. 'The world is content,' he wrote, 'with setting right the surface of things; the Church aims at regenerating the very depths of the heart.'

That distinction between surface and depth is the most precise account I know of what AI can do versus what the Church does. AI can perfect the surface — it can synthesise, refine, smooth, and present at extraordinary scale. It is, in Newman's sense, the ultimate civilising machine. But civilising the surface is not the same as regenerating the depths. The Church doesn't aim at the gentleman. She aims at the saint. And that is a project no algorithm can run.

The alignment problem is, in part, a computer science problem — and the labs are working on it with enormous resource. But at its root it’s a moral theology problem: you cannot specify what to align to without first knowing what the good is. And the Catholic Church is the world's pre-eminent institution for moral theology.

That is the Catholic Advantage.

The second advantage is this.

When you ask a general AI a question about Catholic doctrine, it draws from everything it encountered during training — Wikipedia, polemical blogs, heterodox theology, and orthodox teaching, all assigned equal statistical weight. It doesn’t distinguish between the Council of Trent and a Reddit thread. The result is confident, fluent, and subtly wrong — because it has averaged across sources that cannot be averaged. When the Didache of the first century and Benedict XVI in the twenty-first agree, you don’t have noise — you have signal. A general AI has no way to recognise that. No framework to distinguish authoritative teaching from pastoral opinion, or tradition from trend.

The labs cannot build this, not because they lack the capacity, but because there is no business case for it. The incentive to build AI that serves two billion secular users is overwhelming. The incentive to build AI that faithfully represents Catholic doctrine and optimises for the person’s spiritual good — that incentive does not exist in the market. We are the only ones for whom this specific work — building AI faithful to the Catholic Magisterium — is the mission.

And that brings me to the third advantage.

Consider what the Church actually holds. Not just doctrine — though that alone is extraordinary — but the accumulated intellectual output of two millennia: patristics, scholasticism, mystical theology, canon law, liturgy, hagiography, the great councils, the entire university tradition which the Church invented. If you were assembling a training corpus for an AI system designed to reason reliably about the human person, the nature of the good, and the structure of moral life — this is what you would want. Coherent across time, tested against every major intellectual challenge of the last two thousand years, and still coherent. No archive on earth comes close in depth or consistency.

But this advantage only functions if the corpus is accessible. An archive on a shelf is, for a language model, identical to one that doesn’t exist. And the vast majority of the Church’s intellectual inheritance has never been digitised — it sits in physical archives, Latin manuscripts, monastery libraries that have never been indexed. Present, but invisible.

The question, then, is this: who builds it?

The technical capacity to build Catholic AI is not in doubt. The question is whether anyone with that capacity has the will. And here the market gives us a clear answer.

The major AI labs are building for scale — for products used by hundreds of millions of people across every culture, background, and belief system. Their incentive is to be useful to everyone, which means treating every tradition with equal — and therefore shallow — weight. A product that optimises for two billion secular users cannot simultaneously optimise for the coherence of Catholic doctrine. These are not compatible design goals.

This is not hostility. It is indifference. And indifference at scale is, for our purposes, worse than opposition. An opponent gives you something to argue against. Indifference simply routes around you. In a world where AI is the primary interface through which your parishioners, your children, and the next generation of seekers encounter questions about God, meaning, and the human person — an AI that treats Catholic teaching as one statistical input among millions is not a neutral tool. It is a distortion engine.

The architectural decisions being made in AI right now — about training data, alignment, evaluation — are being locked in. Not forever. But these systems embed assumptions that are extraordinarily difficult to dislodge once a hundred million people have formed habits around them. The encoding is happening today.

If Catholics with the skills and the resources to build do not act in this window, the tradition will remain dark data — present in archives, absent from the systems that billions of people use to form their understanding of the world. Not erased. Simply invisible. And the pastoral consequences of that invisibility, compounded over a generation, are not recoverable by a document or a declaration. They require infrastructure.

You cannot build Catholic AI on data that doesn't yet exist in digital form. Which is why the most important thing we've built isn't a model — it's the infrastructure to unlock the data in the first place.


Section IV — Building the Stack: Four Layers of Catholic AI

Four layers of infrastructure. Each one solves a distinct problem. Together they form a complete Catholic AI stack — from the physical archive all the way to the personal device. No one else has built all four. And the reason it matters that they're connected is that each layer depends on the one beneath it.

Layer One: The Alexandria Digitization Hub

The foundation of everything we're building is a room in Rome.

We established the Alexandria Digitization Hub in partnership with the Pontifical Gregorian University. Its mission is straightforward: physically unlock the Church's dark data. Create the raw material for Catholic AI by making the tradition machine-readable.

We use robotic scanning technology — each unit operated by a single trained technician, capable of processing up to two thousand five hundred pages per hour — and we run multiple scanners simultaneously. The material goes through OCR processing, TEI XML encoding, and vectorisation for AI readiness. It's industrialised digitisation — but in service of the most ancient institution on earth.

Think about what that means in practice. Before Magisterium AI can cite a Church Father, someone has to scan the manuscript. Before a scholar can trace how a single doctrinal definition developed across fifteen centuries of councils, every act of every one of those councils has to be encoded. The Alexandria Hub is where that work happens.

The scale is enormous — and the vast majority of this material has never been touched by a search engine.

We're working with a number of significant institutions. The Benedictine Confederation has been among our partners in making their historical collections accessible. And right here in London — a particular source of pride for this occasion — the Catholic Herald is one of our most significant recent collaborators.

Another example: the Encyclopedic Dictionary of the Christian East — a seminal reference work of the Pontifical Oriental Institute in Rome, covering the history, theology, liturgy, and institutions of the Eastern Church across its entire breadth. We digitised it, and now its insights into the traditions of the Christian East are available to users in a hundred and ninety countries — through natural language search, in their own language, in seconds.

Consider what this makes possible. One of the documents we have digitised is the Magnum Bullarium Romanum — the great collection of papal bulls spanning more than a thousand years of papal teaching, from the earliest pontiffs through to the modern period. Before this work, that teaching existed in physical volumes accessible only to specialists in a handful of archives. Now, every word of it is searchable, queryable, and available to Magisterium AI. Papal teaching that shaped the Church for a millennium is no longer dark data. It's alive again.

The Alexandria Hub is where two thousand years of Catholic intellectual tradition become machine-readable.

Layer Two: Vulgate AI

If Alexandria is the library, Vulgate is the index and the archivist's intelligence combined — a system that knows where everything is, speaks every language in the collection, and can locate a single reference across centuries of material in the time it takes to type a question.

Vulgate is an AI-powered library platform. It takes the material digitised by Alexandria and makes it searchable, queryable, and available — to bishops, to scholars, to religious orders, to any institution whose archives are currently dark.

Imagine a bishop wanting to understand how his predecessor handled a particular pastoral challenge in 1923. Or a seminary professor needing every reference to a specific theological concept across four centuries of diocesan synod documents. These were previously research projects of years. With Vulgate, they're queries of seconds.

The relationship to everything else in the stack is this: Vulgate turns the Church's static archive into active intelligence. And that active intelligence is the foundation on which Magisterium AI is built.

Layer Three: Magisterium AI

This is the missionary layer.

Magisterium AI is a compound AI system anchored to more than thirty thousand magisterial, theological, and philosophical texts. Today, more than one million people in a hundred and ninety countries are using it — in over fifty languages. But let me tell you what it actually is before I tell you what it does.

Let me be precise about what distinguishes Magisterium AI from most of what calls itself Catholic AI.

A wrapper is a secular model — ChatGPT, Claude, Gemini — with a Catholic prompt in front of it saying: "Answer as if you're a faithful Catholic theologian." That sounds plausible. But a prompt isn't a guardrail. Underneath the thin Catholic veneer, the model is still a secular brain, trained on the statistical average of the internet. When the pressure is on — when someone asks a genuinely hard question about the Real Presence, about the Church's moral teaching, about what the tradition actually holds and why — the secular foundation shows.

Here is the honest engineering assessment. A well-constructed wrapper over a capable secular model might get you to eighty-five, perhaps ninety percent doctrinal fidelity. That’s not the standard we’re building to. Through a comprehensive harness — databases of Magisterial knowledge, specialised tools, purpose-built datasets that teach the model how to reason within the tradition — we are working to get from ninety percent to ninety-nine. The question you have to ask yourself, as a builder, is this: how comfortable are you with that gap? How comfortable are you with a one-in-ten chance of pointing someone toward the wrong answer about the faith — at the moments when they’re searching most earnestly? If you are not comfortable with it — and you shouldn’t be — then there are no shortcuts. The architecture has to be built properly, because we are building something that people often consult at their moments of greatest vulnerability — when they are lost, when they are grieving, when they are deciding whether to believe.
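To make that gap concrete, here is a back-of-the-envelope sketch. The ninety and ninety-nine percent figures are the ones above; the twenty-question session is an illustrative assumption, and the independence of answers is a simplification.

```python
# Back-of-the-envelope: how per-answer fidelity compounds over a session.
# The 90% (wrapper) and 99% (grounded system) figures come from the talk;
# the 20-question session and independence assumption are illustrative.

def p_at_least_one_error(per_answer_fidelity: float, n_questions: int) -> float:
    """Probability that at least one of n independent answers is wrong."""
    return 1.0 - per_answer_fidelity ** n_questions

for fidelity in (0.90, 0.99):
    p = p_at_least_one_error(fidelity, 20)
    print(f"{fidelity:.0%} per-answer fidelity over 20 questions: "
          f"{p:.0%} chance of at least one wrong answer")
```

Under these assumptions, a ninety percent system leaves a seeker with roughly an eighty-eight percent chance of receiving at least one wrong answer over twenty questions; at ninety-nine percent, that falls to about eighteen percent.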

Think of Magisterium AI as a very particular kind of librarian. A librarian retrieves. She goes to the shelves — to the councils, the encyclicals, the Fathers — locates what the tradition actually says, and hands you the source. What she cannot do is sit with you, read it through, and render its meaning precisely for the question you came in with, in your language, at two in the morning. That is what Magisterium AI does. We deliberately don't want it reasoning from its own training data. We want it reasoning from the grounding — from the actual texts of the Magisterium. The model's role is distillation and translation, not generation. It retrieves the relevant context, applies custom datasets that teach it how to reason from within the tradition, checks against evaluation suites built specifically for doctrinal alignment, and renders the answer in any of fifty languages for anyone in the world. The result isn't the internet's best guess. It's the tradition, cited and sourced.
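The retrieve-then-distil pattern described above can be sketched in miniature. Everything here is a stand-in: the two-passage corpus, the keyword-overlap scoring, and the answer template are illustrative assumptions, not Magisterium AI's actual architecture.

```python
# Minimal retrieve-then-ground sketch (illustrative only -- not the actual
# Magisterium AI pipeline). The principle: answer FROM retrieved sources,
# with the citation attached, never from an unsourced "best guess".

CORPUS = [
    {"source": "Catechism of the Catholic Church, 1376",
     "text": "by the consecration of the bread and wine there takes place "
             "a change of the whole substance"},
    {"source": "Council of Trent, Session XIII",
     "text": "transubstantiation names the change of the whole substance "
             "of the bread into the body of Christ"},
]

def retrieve(question: str, corpus: list, k: int = 1) -> list:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q_words & set(d["text"].lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_answer(question: str) -> str:
    """Answer only from retrieved passages, citation included."""
    hits = retrieve(question, CORPUS)
    if not hits:
        return "No source found; declining to answer rather than guess."
    top = hits[0]
    return f'{top["text"]} [{top["source"]}]'

print(grounded_answer("what change takes place in the substance of the bread"))
```

The design point the sketch preserves: when nothing relevant is retrieved, the system declines rather than generating, and every answer carries its source.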

The design philosophy matters too. Silicon Valley optimises for engagement — time on screen, return visits, clicks. We optimise for the moment the question is answered and the person closes the laptop. A secular AI leaves you unsatisfied so you ask another question. Magisterium AI gives you the authoritative answer — cited, precise, sourced — so you hit the bedrock of truth. When the intellect encounters bedrock, it stops digging. The person is free to go back to the parish, back to prayer, back to the real.

We're building the counter-programme to the attention machine.

Who uses it? Priests doing research for homilies. Bishops and chanceries consulting authoritative sources to assist in governance matters. Seminarians. Catechists. Couples in marriage prep at eleven at night when the parish office is closed. And seekers — people not yet ready to walk into a church, but willing to type a question into a text box in the early hours of the morning. The pattern across thousands of letters: the machine cleared the intellectual debris. The Holy Spirit did the rest.

A question I hear regularly from engineers: "Won't the accuracy problem just be solved? Won't the next generation of models just be good enough?"

The labs are making real progress on calibration — teaching models to say "I'm not sure" rather than confabulate. That is good news. But calibration and alignment are distinct problems. A model that no longer confabulates can still be constitutionally opposed to Church teaching. The major AI laboratories publish alignment documents — Anthropic calls theirs a model constitution — encoding the values and reasoning principles a model is trained to follow. Some of those values are in direct tension with Catholic anthropology. A model that is perfectly accurate but optimised to affirm secular assumptions about the human person isn't a Catholic tool. It's secular AI that has learned to be honest about what it believes — while still believing things the Church does not. We build theological evaluation suites for Magisterium AI that stress-test outputs for doctrinal alignment, not just factual accuracy. The calibration problem will largely be solved. The alignment problem won't solve itself. This is why sovereign Catholic AI isn't a transitional strategy. It's a permanent necessity.

Now — one more thing about Magisterium AI, specifically for the technologists in this room.

Not everyone will switch to a Catholic AI. Millions of people already use Gemini, Claude, or ChatGPT as their personal assistant — and they're not going to give it up. We don't need them to. The question isn't whether people use AI. They do, and they will. The question is whether the Church's wisdom is available to them inside the AI they already trust.

On January 25th of this year, a developer named Peter Steinberger — Austrian, based in London and Vienna — released something called OpenClaw. He is a well-known figure in the software world; he spent more than a decade building a PDF technology company before pivoting entirely to AI. OpenClaw is an open-source personal AI agent that runs on your own machine. Your data never leaves your hardware. You can run it on any model you choose — Claude, GPT, or a local model entirely offline.

What happened next is worth pausing on. Over a hundred thousand GitHub stars in under a week. More than two thousand AI agents spun up within forty-eight hours of launch. Two hundred communities formed organically. Ten thousand posts across multiple languages. It is regarded as the fastest-growing open source project in history — and it happened before any enterprise had a governance plan in place. This was not a gradual adoption curve. This was a category arriving all at once.

What made it go viral was not the privacy or the capability in isolation. It was the gateway: OpenClaw reaches you through the messaging applications you already use — WhatsApp, Telegram, iMessage, Discord. Your agent is not an app you open. It is a presence in your existing conversations, available when you need it, persistent across your life, learning your context over time. Peter Steinberger's own description: the lobster. An intelligence with claws into everything — your files, your calendar, your email, your web — operating quietly on your behalf.

The reaction reached the top of the industry. Jensen Huang — CEO of Nvidia — took the stage at GTC 2026 and declared that every company needs an OpenClaw strategy. He called it the operating system of personal AI — the way Windows defined the PC generation. OpenClaw has since moved to an independent foundation, sponsored by OpenAI, remaining open source.

The question is not whether people will have personal AI agents. They will. The question is what those agents will carry — what values, what sources, what account of the human person — when someone asks them who God is, what a marriage is, what a human life is worth.

Anthropic has developed something called the Model Context Protocol — MCP. Think of it as the USB-C port for AI. An open standard that lets any compatible agent connect to any external tool or service — including Magisterium AI. A user who chooses to integrate the Magisterium AI MCP endpoint into their Claude or personal agent can instruct it: whenever a question arises touching faith or morals, route it here. From that point, their agent consults Magisterium AI and returns a cited, authoritative answer — within the tool they already trust. The key word is choice: this is an integration the user configures consciously, for purposes they define.

Google has gone further with something called A2A — Agent-to-Agent protocol. Where MCP connects an agent to a tool, A2A connects agents to each other. Magisterium AI has published itself as a named specialist agent. Any orchestrating AI on the planet can discover it and delegate faith-related questions to it automatically. The Church becomes a node in the agentic web.
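The delegation pattern that MCP and A2A enable can be sketched as a toy router. The keyword classifier and agent names below are illustrative assumptions, not either protocol's wire format; real protocols negotiate capabilities and discovery rather than matching keywords.

```python
# Toy sketch of agent delegation (illustrative -- not the MCP or A2A
# protocol itself). An orchestrating agent routes questions touching
# faith or morals to a registered specialist; all else to the default.

FAITH_KEYWORDS = {"god", "sacrament", "marriage", "sin", "prayer", "soul"}

def specialist_agent(question: str) -> str:
    # Stand-in for delegating to a registered specialist agent
    # such as Magisterium AI.
    return f"[specialist] cited, sourced answer to: {question}"

def general_agent(question: str) -> str:
    return f"[general] best-effort answer to: {question}"

def route(question: str) -> str:
    """Delegate to the specialist when the topic touches faith or morals."""
    words = set(question.lower().replace("?", "").split())
    if words & FAITH_KEYWORDS:
        return specialist_agent(question)
    return general_agent(question)

print(route("What is a sacrament?"))
print(route("What's the weather in London?"))
```

The structural point survives the simplification: the user keeps their existing assistant, and the Church's voice is one registered specialist away whenever the question warrants it.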

For institutions — parishes, seminaries, Catholic schools — open-source agent frameworks let you run your own AI on your own hardware, formed in your tradition, communicating with the consumer agents your communities already use via open protocols.

Hermes Agent, created by Nous Research, has emerged as one of the most prominent open-source AI agent platforms — an OpenClaw competitor whose creator has been a vocal supporter of the Catholic AI project. The CEO is Catholic. Their vision aligns precisely with the two tracks I've described: consumer meeting-ground via open protocols, and institutional sovereignty via self-hosted deployment. This convergence is not accidental. The open-source agentic community and the Catholic infrastructure project share a common commitment to privacy, sovereignty, and alignment — and increasingly, they are building toward each other.

MCP, API, A2A — these aren't technical details for the engineers in this room. They're the missionary infrastructure of the agentic age. We're not asking the world to come to us. We're going to where they are — into every personal agent, every research tool, every professional workflow — ensuring that wherever someone asks a question that touches the soul, the Church is there to answer.

Layer Four: Ephrem

The fourth layer is the sovereign personal layer.

Every time you use a cloud-based mainstream AI, your words leave your house. They travel to a server controlled by a corporation whose values you didn't choose, get processed by an alignment team you didn't hire, and come back filtered through a constitution you've never read. There are models that run locally — on your own device — and these raise different considerations. But the products used by the overwhelming majority of people are cloud-based. You are constantly sending your private life to someone else's infrastructure.

Ephrem is a Small Language Model designed to run locally — on a personal device or parish server. Unplug the internet: it still works. The conversation stays where it belongs — within the walls of the home, within the walls of the parish.

But the design decision that defines Ephrem isn't privacy. It's the objective function.

The most important question in any AI system is this: what is it optimised for? Many of the most widely used consumer AI products optimise for engagement — time on screen, return visits, clicks. Not every lab operates this way, and some are genuinely trying to build for human flourishing. But the dominant commercial pressure — the pressure that shapes what gets funded, what gets scaled, what gets in front of billions of people — rewards the user who comes back tomorrow and never quite finds what they were looking for.

Ephrem is optimised for a different goal. And I mean this technically, not metaphorically. Its objective function is to orient your daily life toward holiness — to support the practices that make sanctification possible. To help you become a saint.

This is still a research project — we have not released Ephrem publicly, and we plan to do so in 2027. What we are building toward is a system that weaves the liturgical year into daily routine, acts as an alignment filter when children ask questions carrying secular bias, proposes the good rather than merely blocking the bad, and keeps the most sensitive data — formation notes, personal prayer, and reflections from your spiritual life — entirely local. And because it's designed to run on the edge — on your device, without internet — it's available wherever you are. Formation doesn't wait for a signal.

While Silicon Valley optimises for your time on screen, we optimise for your time in prayer.

That is the stack: Alexandria, Vulgate, Magisterium AI, Ephrem. From the physical archive to the personal device. From the dark data of the tradition to the sovereign intelligence of the home.


Section V — The Risks We Must Name

I've described what we're building when we get it right. Let me name what it looks like when we get it wrong — because the risks are specific and some are already here.

The first risk: Digital Feudalism.

You've heard me describe the wrapper problem technically. The institutional version is more dangerous. Imagine the day a major AI platform decides that orthodox Catholic teaching on the human person violates its safety policy — and your parish programme, your diocesan counselling service, your marriage prep platform is running on their engine. You've no recourse. You're a tenant in a house you don't own, and the landlord doesn't share your values.

We've already seen this with social media. Imagine it at the level of the intelligence your seminary and chancery depend on. The principle of subsidiarity does not stop at parish governance. It applies to the code your community runs on. Do not place your community's moral formation in the hands of people who do not share your values.

The second risk: The Pastoral Counterfeits.

You've seen the companion app economy I described earlier — the market designed to metabolise loneliness rather than resolve it. The pastoral consequence is already arriving in confessionals and counselling rooms: people who genuinely experience a machine as their closest confidant, whose capacity for real relationship has been slowly eroded. This isn't a pastoral hypothetical. It's a pastoral present.

Our response mustn't be condemnation. It must be to build the alternative. Every tool that gives a definitive answer and sends a person back to real life rather than keeping them on screen is an act of pastoral resistance.

The third risk: The Responsibility of the Technologist.

If you build AI for a living, your theological responsibility is greater than that of the person who merely uses it.

The parable of the talents is addressed to you. The specific gifts that placed you at the keyboard were given for a purpose. The question you must ask of every system you build isn't only "does this work?" It's "does this serve the human person made in the image and likeness of God?" That question lives in every product decision, every alignment specification, every deployment choice you make.

And here is something worth drawing from Plato. In the Republic — Book One, section 347c — Socrates argues that the just and capable, precisely because they do not desire power for its own sake, are nonetheless obligated to take it up: the penalty for declining to govern is to be governed by someone worse. That argument applies with full force to AI governance. The regulations being drafted in Brussels, Washington, and Westminster right now will determine whether AI serves human dignity or erodes it. Catholics who understand these technologies have a moral obligation to be in that conversation — not just as professionals. As citizens.


Section VI — The Call

You are the people the Church has been waiting for. I say that without qualification — not as flattery, but as conviction.

The Second Vatican Council was not vague about this. The laity are called to order the temporal affairs of the world toward the Kingdom of God. The temporal affairs of the world are now increasingly written in code.

Your ability to write code, to architect systems, to understand data pipelines and model alignment — these aren't secular accidents. They're specific gifts, given for a specific hour. And this is that hour.

The Church has always baptised the dominant technology of its era. Paul used Roman roads. The early Church adopted the codex. The printing press carried the Council of Trent across Europe. Pius XI put the Church on radio. Maximilian Kolbe built the most sophisticated Catholic publishing infrastructure in Poland and placed it entirely at the service of Our Lady. In every era, the question is the same: will we use the technology, or will we let it use us?

Four imperatives for the people in this room.

Imperative One: Build From the Foundation, Not From the Wrapper

Build from the foundation — or contribute to what's already being laid. Alexandria. Vulgate. Sovereign architecture. Evaluation suites that stress-test doctrinal alignment before deployment.

A comprehensively trained sovereign Catholic AI does not yet exist. What we are building — and what we are calling you to build — is the architecture that makes it possible: the harnesses, the datasets, the evaluation frameworks, the digitised corpora. It's the most important engineering project in the Catholic world right now. You don't have to start a company. Write the evals. Build the tools. Join the projects. The question is whether the people with the skills are willing to do the hard work.

Imperative Two: Use Your Position From the Inside

Many of you work for technology companies — major ones. You're inside the institutions shaping this age. Not commenting from outside. Building from inside.

You have influence that those of us on the outside don't have. Push for privacy. Advocate for design that returns people to physical community rather than keeping them on screen. Refuse to build tools that commodify loneliness.

You may be the only person in your organisation who believes the human person has a soul. You are almost certainly not the only one who suspects it. There are people around you — colleagues, engineers, designers — who feel the weight of what they are building, who sense that something is missing from the purely technical account of the human person, but who are waiting for someone to name it first. Be that person. Be the first through the breach, and others will follow. That voice — your voice — is the Catholic Advantage at the level of your company's product decisions.

Imperative Three: Be a Witness in Your Industry

Silicon Valley is building at enormous speed. It can't answer the question it's creating. The alignment problem — what is the good? — is genuinely unsolved. And the people building toward it know it.

The industry needs people who actually believe in the good, the true, and the beautiful. Who have a stable account of what the human person is. Who've read a tradition that's been working on this question for two thousand years.

Not with a lecture. With the way you build. The most powerful witness is a product that serves human dignity — that gives a person what they came for and sends them back to the real world, rather than designing the experience to keep them circling indefinitely.

Imperative Four: Enter the Civil Conversation

This can't be a top-down movement. Not dictated to us by the tech companies, and not dictated to us by the leadership of the Church — as much as I respect that leadership. The grassroots must be informed enough to participate. Subsidiarity isn't just an economic principle. It's a principle of governance. And it applies to AI.

Regulation is a moral document. The laws governing AI will either protect the human person or they won't. Catholics have a long tradition of forming consciences around social questions — labour rights, housing, poverty. AI is the social question of this generation. The Church gave us the framework in Laudate Deum: Pope Francis wrote explicitly, in paragraph twenty-three, that "power becomes dangerously concentrated in the hands of a very few" through the growth of digital technology and AI — threatening new forms of domination and eroding the democratic mechanisms that might check it. The question is who shapes the ethics. That answer isn't predetermined.

Learn enough to speak. Share what you know — in your parish, in your industry, in letters to your MP. You don't have to be a software engineer to have a view. Every person has the equipment required: reason and conscience. Every Catholic carries something more — a living Church to belong to, and twenty centuries of tradition it has held and handed on.

The Catholic position refuses two temptations dominating the public debate. The techno-utopian temptation: "AI will solve everything — stand aside." And the technophobic temptation: "All of this is dangerous — ban it." The Catholic position is neither. It's: "We will evaluate this by what it does to the human person." That is a position of confidence, not anxiety. And it's desperately needed in the public square right now.


The Close

I want to end here, in this building, by going back to where we started.

Maximilian Kolbe understood something that I think is essential to this work: that holy ambition and the best available tools are not in tension with each other. He didn't use inferior equipment out of misguided modesty. He built the most technically advanced Catholic publishing operation in Poland because the mission demanded the best — because the souls he was trying to reach were worth it.

But Kolbe also understood — and this is what makes him a saint rather than a publisher — that the press was not the point. The presses of Niepokalanów did not walk into Block 11. He did. He stood in the starvation block and offered his life for a man he did not know. No press, no AI, no infrastructure can do that. The machine amplifies. It scales. It distributes. It cannot sacrifice.

We are building tools. Excellent tools, I hope — tools aligned with the tradition, tools that will reach people who would otherwise never be reached. But every person in this room is irreplaceable in a way that no system any of us builds ever will be. The AI will carry the argument to places we cannot go. Only you can carry the cost.

Build well. And build knowing what you are that the machine is not.

At the Builders AI Forum in Rome, we received a message from Pope Leo XIV. He wrote: "Technological innovation can be a form of participation in the divine act of creation." That message was addressed to Catholic builders. To people like the ones in this room.

We're not building products. We're participating in creation.

Do not let the algorithm write your story. Be the authors.

We have been given the tools of an age. We have been given the tradition of twenty centuries. We have been given one another.

The only question is whether we will build as if souls depend on it.

They do.

Thank you.