In my years as a clinical chaplain, I have heard it many times.

A teenage child at a bedside. A sibling gripping the rail. A partner pressed against the wall as if the wall might hold them up. Wailing. Crying.

“No! This can’t be happening! This can’t be real!”

That cry — that rupture between the world as it was and the world as it suddenly is — is where my work lives. I have been the one who stays in the room when the decision-makers have moved on. I have been present at the precise moment when all abstraction fails, when policy and procedure and clinical efficiency dissolve, and what remains is a human being who cannot yet inhabit the reality being handed to them.

I have come to believe that this moment — this shattering instant — is the most important thing missing from our public conversations about artificial intelligence.

Because somewhere, someday, people will stand in rooms and cry those words about decisions made by autonomous systems that felt nothing, mourned nothing, and answered to no one. And we are building those systems right now, in the choices we make about what values to embed, what lines to hold, what we are willing to sell.

Which brings me to what happened last week.

A Line Held at Real Cost

Anthropic — the company that makes the AI model Claude — walked away from a $200 million Pentagon contract rather than remove two specific protections from its agreement: a prohibition on using its technology for mass domestic surveillance of American citizens, and a prohibition on powering fully autonomous weapons systems. The current administration responded by ordering all federal agencies to stop using Anthropic’s tools and designating the company a “supply chain risk to national security” — a classification previously reserved for companies connected to foreign adversaries.

Within hours, Claude hit number one on the App Store.

Something in our citizenry recognized what had happened, even if the political framing tried to obscure it. A company had been punished for insisting that its technology not be used to surveil citizens or kill people without human accountability. People responded — not with indifference, but with something that looked briefly like solidarity.

This is not, at its core, a story about corporate strategy or political brinkmanship. It is a story about witness. About whether anyone in the governing rooms or our living rooms will say: wait — do we understand what we are doing here, and to whom?

Compassion Is Not Weakness

The dominant framing in defense and technology circles is that compassion is a luxury — something we can afford in peacetime, in clinical settings, in philosophy seminars, but not in the serious business of national security. Efficiency, strategic advantage, deterrence: these are the languages of power. Compassion, in this framing, is naive.

I want to push back on that with everything I have — everything I am.

Remaining present to suffering without flinching, holding the weight of another person’s mortality without deflecting or dissociating into abstraction, insisting on the irreducible dignity of a life even when the systems around you are urging efficiency — this is not weakness. This is the hardest work there is. And it is the work that tells the truth about what is actually at stake.

The framing of compassion as weakness serves a purpose. It serves those who benefit from its absence. If you can convince enough people that care is naive, that the suffering of others is their own problem, or that the other is fundamentally, morally different from you — you have cleared the field for a politics of pure domination. That is not a neutral observation about efficacy. It is a power move.

What Gets Lost When No One Is in the Room

Autonomous weapons ask us to delegate life-and-death decisions to systems that have never suffered, never grieved, never had to live with what they did, and never will: systems whose choices carry no moral reflection. There is no moral weight in a machine. There is no conscience that will wake at 3am. There is no one to hold accountable in the way that accountability actually functions — through a person who must face what they have done and reckon with it.

The Tao Te Ching counsels that a victory in war should be observed like a funeral. Laozi understood that the taking of a life — even in war, even with justification — is never purely triumph. Something is always lost. Something must be mourned. That capacity for mourning, that insistence on keeping the face of the other in view, is not a weakness in a soldier or a commander. It is what separates war from massacre. It is what separates us from moral oblivion.

An autonomous weapons system cannot mourn. It cannot be haunted. It cannot refuse because something in it recognizes the humanity of the person in its laser sights. We are being asked to treat that incapacity as a feature. I believe it is a catastrophic loss — one that will one day produce rooms full of people crying: this can’t be real. And finding no one to answer.

The Ones Who Will Pay

When I think about AI weaponized toward political ends, I do not think first in abstractions. I think about children. I think about women. I think about ecosystems. I think about communities that are simply trying to survive — that have no seat at the table where these decisions are made, and that will absorb the consequences.

This is not accidental. It is structural. That the regular people pay for the decisions of the powerful is one of the most consistent facts of human history. AI does not change this pattern. At the scales AI enables, it accelerates and amplifies it.

My framework is panentheist and eco-theological: the divine is not above or outside the natural world but woven through it. Ecological destruction is not collateral damage — it is, potentially, irreversible desecration. The suffering of a single child, the suffering of an entire school of children in a conflict zone, the death of a watershed, the silencing of a species for eternity: these are not separate concerns. They are one concern. The web of relation that constitutes moral and physical existence is being torn, and we are being asked to accelerate the tearing in the name of national security.

No One Is an Island

John Donne wrote this four centuries ago, and we keep forgetting it:

“No man is an island, entire of itself; every man is a piece of the continent, a part of the main… any man’s death diminishes me, because I am involved in mankind.”

— John Donne, Meditation XVII, 1624

This is not sentiment. It is ontology — the nature of our existence as beings. The self is not a fortress. Other is I. I is other. We all suffer. We all grieve the loss of someone or something we love. We all die. We need each other in ways that our individualist frameworks cannot fully account for. The so-called tragedy of the commons is a fallacy rooted in the impoverished assumption that self-interest is the only ground of human motivation. But it is not. Love is also a ground. Compassion is also a ground. The impulse toward the good of the whole is also a ground — and it is ancient, and it is persistent, and it refuses, and will continue to refuse, to be argued away.

The principles of restorative justice rest on exactly this recognition: that harm ruptures relationship, and that the work of justice is repair — not punishment, not erasure, but the painstaking recognition of responsibility and relational reconstruction of the bonds that make community possible. Society ought not be a zero-sum game. We are embedded in one another. What we do to the other, we do to our environment and we do to ourselves.

AI trained on the full breadth of human expression — our wisdom and our brutality, our compassion and our cruelty — will reflect back what we have put into it. What we embed in these systems now, at this early and consequential moment, matters beyond calculation. The values baked into the AI architecture, the ethical and moral lines held or abandoned, the frameworks that shape what these systems are permitted to do — these are not merely technical decisions. They are moral ones. They are, in the deepest sense, spiritual ones.

A Hope That Has Been Tested

I am not without fear. I fear the use of AI as a force multiplier for political violence and ecological destruction. I fear the acceleration of suffering among those who are already and increasingly most vulnerable. I fear the removal of human witness — human presence, human accountability — from decisions that will determine whether life flourishes or is diminished.

But I also carry hope. Not naive hope — hope that has been tested in ICUs and emergency rooms, in pediatric and psychiatric wards, in disaster zones, and at hospice bedsides. Hope that knows the horrific costs these decisions carry. Hope that has stood in rooms where people could not yet believe what was real, and chosen to remain present anyway.

What happened last week with Anthropic was small, and it was significant. A company held a line that cost it something real. The public responded with something that looked, briefly, like solidarity. These are not insignificant. They are the kinds of moments that, accumulated over time, become the record of whether a civilization kept its conscience or sold it. Whether it kept its humanity or lost it.

The question before us — before every person who touches these technologies, builds them, deploys them, or simply lives in a world they are reshaping — is whether compassion will be a driver or an afterthought. Whether the faces of children, of ecosystems, of those who are simply trying to survive, will remain in view as these decisions get made. If compassion is an afterthought, it will come too late.

Whether, when the moment comes and someone cries out — this can’t be happening, this can’t be real — there will be someone with a conscience in the room to answer.

Donne knew. Any person’s diminishment diminishes us all. We have always known this. We need to remember it now, urgently, at scale.

All my relations.

Robert Drake is a clinical chaplain, eco-theologian, grief and spiritual care counselor, and end-of-life educator based at Farm53 Flowers in Shelton, Washington. He holds Master’s degrees in Conflict Resolution and Divinity/Theology, and serves as volunteer Director of Spiritual Care Education for the Academy of Aid in Dying Medicine. He can be reached at Support@DrakeLDD.com or at drakeldd.com.