Much of the public conversation about artificial intelligence is driven by drama.
Headlines warn of runaway systems, mass unemployment, existential threats, and machines that outgrow human control. Panels of experts debate timelines for artificial general intelligence and superintelligence. Commentators speculate about collapse, domination, or extinction. Fear, when paired with uncertainty, travels fast, and it has become one of the most common lenses through which AI is discussed.
There is a strange comfort in this framing. It places responsibility somewhere else: in the future, in the technology itself, or in the hands of a small group of engineers, corporations, or governments racing ahead of the rest of us. If the danger lies there, then our role is simply to react, regulate, or resist.
But this focus misses something more immediate and more uncomfortable.
We spend enormous amounts of energy scrutinizing those who are building the machines, questioning their intentions, their incentives, and their speed, while spending remarkably little time examining how we are forming the humans who will deploy, govern, and live alongside them. We argue endlessly about what artificial intelligence might become, while rarely asking what kind of people WE are becoming in the process.
The deepest ethical question raised by AI is not what these systems can do; it is what they are teaching us to accept, outsource, and ignore.
Modern society is exceptionally good at asking "Can we?" Can we generate language at scale? Can we predict behavior? Can we automate judgment, care, creativity, and decision-making?
We are far less practiced at asking "Should we?" And even less practiced at asking "Who must we become if we do?"
Technology expands the range of possible actions, but it does not expand our moral capacity by default. In fact, it often exposes how thin that capacity already is. When power arrives before maturity, the result is rarely restraint. More often, it is acceleration without reflection.
The conversation around artificial intelligence is accelerating at a breathtaking pace. New models emerge weekly. Capabilities once considered distant now arrive quietly through software updates.
Governments speak increasingly of regulation, even as they compete, quietly and sometimes openly, to avoid being left behind. Public commitments to safety and governance coexist with private urgency around capability, national advantage, and strategic dominance. In practice, much of what is called "regulation" is better described as an appeal to stewardship: a hope that those with the greatest power will exercise restraint before they are compelled to.
As companies and nations race to deploy, educators, parents, and institutions strive to keep up, and most of the conversations circle the same questions:
What should AI be allowed to do? How do we align it? How do we prevent harm?
These are necessary questions. But they are not the hardest ones. The harder problem is not artificial intelligence. The harder problem is us.
AI governance and stewardship will fail if they ask only what machines should do without asking who humans are becoming.
Power Has a Way of Revealing Us
History offers a sobering pattern. Every major technological shift has amplified human capacity faster than it has strengthened human wisdom.
Fire gave us warmth and weapons. The printing press spread literacy and propaganda. Industrialization produced abundance and exploitation. Nuclear power promised energy and delivered existential risk.
None of these technologies introduced new moral failures. They simply accelerated existing ones. Power has never been neutral; it magnifies whatever values already shape the hands that wield it.
Artificial intelligence is different in scale, but not in kind. It extends human intention, judgment, and bias at speeds and volumes previously unimaginable. What we are building is not an alien intelligence. It is a mirror that reflects our priorities, incentives, and blind spots back at us with remarkable efficiency.
That is why the question of governance cannot begin with the machine alone. It must begin with the human character behind it.
Why Ethics Cannot Be Added Later
Much of today's AI discourse treats ethics as a technical layer: something to be applied once systems are sufficiently advanced. Guardrails. Policies. Alignment techniques. Oversight committees. While these are vitally important, they are also insufficient.
Ethics is not a feature that can be bolted onto power after the fact. It is a posture cultivated long before power arrives. When governance frameworks are built by institutions that reward speed over care, efficiency over understanding, and growth over responsibility, those values inevitably shape the systems they regulate.
A culture that prizes optimization will build optimizing machines. A culture that treats people as data points will automate that treatment. A culture uncomfortable with limits will struggle to impose them.
AI governance fails not because it lacks rules, but because it lacks formation.
The Risk of a Comfortable, Dehumanizing Future
Popular imagination often frames AI risk in dramatic terms: rogue systems, catastrophic failures, and sudden loss of control. These scenarios dominate headlines because they are easy to visualize and emotionally gripping.
But the more plausible danger is quieter: a future where systems function smoothly, where services are personalized, efficient, and always available, and where friction disappears along with deliberation, agency, and depth.
In such a world, decision-making is outsourced not through coercion but through convenience. Judgment is deferred. Creativity is streamlined. Compassion is simulated. Over time, humans may not feel oppressed, only unnecessary.
A peaceful future can still be dehumanizing.
When intelligence is delegated without reflection, when care is automated without presence, and when meaning is reduced to metrics, something essential gradually erodes. The loss is not dramatic enough to resist, yet profound enough to reshape what it means to live a human life.
The Formation We Are Neglecting
If ethical AI depends on human maturity, then the most urgent work is not technical innovation but human formation.
This formation does not begin in boardrooms or policy documents. It begins in habits, education, and attention. It begins in what we reward, in what we tolerate, and in what we ignore.
Several qualities stand out as increasingly rare and increasingly necessary:
- Attention: In a world optimized for distraction and delegation, sustained attention becomes an ethical act. Without it, we cannot notice harm until it is already normalized.
- Humility: Predictive systems tempt us to overestimate certainty. Humility reminds us that intelligence is not wisdom, and confidence is not understanding.
- Restraint: The ability to build does not justify the impulse to deploy. Restraint is not resistance to progress; it is respect for consequence.
- Truthfulness: In an age of synthetic language and plausible outputs, truth becomes harder to recognize and easier to compromise. Truthfulness is not accuracy alone. It is the refusal to manipulate simply because we can.
- Care for the Vulnerable: Optimization naturally favors the measurable, the profitable, and the scalable. Care insists that those who cannot be optimized must still be protected.
These are not technical skills. They are virtues. And virtues do not scale easily.
Education, Story, and the Long Work of Becoming Human
This is why education matters more than ever: not education as credentialing, but education as formation.
We have become remarkably good at teaching people how to use systems while neglecting to teach them how to live with them. Children are trained to navigate interfaces long before they are invited to reflect on meaning, responsibility, or consequence. Adults are rewarded for productivity while discouraged from asking what that productivity serves.
Stories, reflection, and slowness may appear inefficient in an AI-driven world. In reality, they are safeguards. They cultivate imagination, empathy, and moral reasoning: capacities that no machine can replace and no society can afford to lose.
If we raise generations fluent in optimization but unfamiliar with wisdom, we should not be surprised when they build systems that treat humanity as a problem to be solved rather than a mystery to be honored.
Becoming Before Governing
This article is not an argument against artificial intelligence. It is an argument against technological inevitability without moral preparation.
AI will continue to advance. That is not in question. What remains undecided is whether human beings will advance alongside it, not in capability, but in character.
Governance and stewardship frameworks matter. Regulation matters. Technical alignment matters. But none of them can substitute for the slow, unglamorous work of becoming better humans: individuals and communities capable of holding power without surrendering responsibility.
The future of AI ethics does not begin when a system is deployed. It begins long before, in how we choose to live, learn, and care for one another.
The harder problem is not artificial intelligence. The harder problem is whether we are willing to become the kind of people worthy of governing and stewarding it.
In Closing
The question of who we are becoming is not abstract. It demands a response, not only from policymakers and technologists, but from communities willing to live differently in the presence of power.
For Christians and churches in particular, this moment presents both a challenge and an opportunity. The language of stewardship is familiar, even central, to the faith. Yet it has rarely been applied beyond environmental care or financial responsibility. Artificial intelligence forces a wider reckoning: stewardship not only of resources, but of attention, truth, human dignity, and moral imagination.
What would it look like for churches to step outside their technological comfort zones to offer a different posture toward power?
What practices might help communities form people capable of restraint, discernment, and care in an AI-shaped world?
Those questions deserve their own space.