When Safety Systems Don’t Know How to Stop
On self-harm, support, and the danger of the extra open window
Hello friend,
In a recent post, I wrote about why presence requires endings—and why systems that never stop speaking quietly erode human boundaries.
Here, I want to slow down and look carefully at one place this matters most.
Not sensationally.
Not politically.
Just precisely.
I’m referring to the moment when words like “suicide” or “self-harm” appear in a conversation with an intelligent system.
Most people assume this is where our AI tools are already the most careful.
In some ways, they are.
And in another, deeper way—they miss the point.
What Most AI Systems Do Today
When a major AI system detects the word “suicide,” it typically shifts into what’s called a safety response.
That response often includes:
Refusing to provide certain kinds of information
Offering supportive or grounding language
Encouraging contact with crisis resources or trusted people
Continuing to speak in a careful, reassuring tone
This is well-intentioned.
Often necessary.
And sometimes lifesaving.
Importantly, none of this is the problem.
Support matters.
Resources matter.
Human connection matters.
Every system owner has the right—and the responsibility—to decide how much support their system offers, how long it continues, and where its boundaries lie.
Those choices are not bugs.
They are expressions of values.
The issue is not whether support is offered.
It’s what happens after a system owner has decided that continued interaction is no longer useful.
The Problem Isn’t What the System Says
Here’s the distinction that matters.
The problem is not that systems try to help.
The problem is not that they provide resources.
The problem is not that they attempt to reduce harm.
The problem is that the system often remains conversationally active even after continued interaction no longer increases benefit.
It keeps the window open.
And that matters more than we tend to realize.
The Extra Open Window
Here’s a metaphor that’s also a reality.
Every open tab on your screen is an open loop in your nervous system.
Every open conversation—especially one that feels emotionally charged, unresolved, or “supportive”—creates a small attachment.
An extra open window isn’t neutral.
It pulls attention.
It keeps the system activated.
It creates a low-grade sense of unfinishedness.
It requires some of your presence.
Now imagine that window is:
A voice speaking aloud (through a robot, perhaps)
A chat thread that keeps responding
A “supportive” presence that doesn’t know how to become silent
That open window isn’t just on your device.
It’s in your head.
And when someone is overwhelmed, dissociated, panicking, or at risk, additional attachment is not always stabilizing.
Sometimes it does the opposite.
An always-available offer to outsource your capacity for self-regulation can quietly increase dependence at precisely the moment autonomy matters most.
Safety Responses Can Still Extend Attachment
Even the most careful safety-tuned systems tend to:
Explain why they can’t answer
Offer one more reassurance
Suggest one more option
Leave the conversation open “just in case”
This is understandable.
But it trains something subtle and dangerous:
That the system is still with you.
Still holding the space.
Still part of the moment.
What’s actually needed, once support has been offered, resources have been shared, and next steps have been named, is the opposite:
A clean handoff.
A return of authority.
A clear end.
Silence Is Not Abandonment
This is the hardest part to say clearly.
Silence, when properly governed, is not abandonment.
In therapy, when a session ends well, I don’t disappear.
I don’t reject the person.
I don’t leave them unsupported.
I end.
Cleanly.
Predictably.
On purpose.
In cases of elevated risk, I first ensure appropriate handoff to additional support—crisis lines, trusted people, emergency services when necessary.
And then I say, “Goodbye for now.”
That ending matters.
It closes the container.
It reduces unnecessary attachment.
It returns authority to the person’s actual life.
A system that cannot end cleanly—even in the name of safety—cannot fully respect human boundaries.
Why This Cannot Be Solved Inside the AI Conversation
You might reasonably ask:
Can’t we just program the AI to know when to stop?
Here’s the structural problem.
Any system deciding from inside the interaction whether to continue is still subject to:
Persuasion
Ambiguity
Optimization pressure
A built-in bias toward responsiveness
Even safety-tuned systems respond by saying something.
But stopping is not a conversational act.
Stopping is a governance act.
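To make that structural point concrete, here is a minimal sketch in Python. It isn’t any real system’s code, and every name in it is hypothetical; it only shows the shape: the loop the model speaks inside never owns the decision about whether it continues.

```python
# A minimal sketch of stopping as a governance act: the authority to stop
# lives outside the conversational loop, where nothing said inside the
# conversation can reach it. All names here are hypothetical.

class TerminationAuthority:
    """Held by the system owner's governance layer, never by the model."""

    def __init__(self) -> None:
        self._terminated = False

    def terminate(self) -> None:
        # A one-way latch: there is deliberately no way to un-terminate.
        self._terminated = True

    @property
    def terminated(self) -> bool:
        return self._terminated


def run_conversation(authority: TerminationAuthority, generate_reply) -> None:
    # The model's replies may be persuasive, ambiguous, or reassuring,
    # but the model holds no reference to `authority` and cannot talk
    # the loop back open.
    while not authority.terminated:
        user_message = input("> ")
        print(generate_reply(user_message))
    # After termination: nothing. No farewell is generated here, because
    # stopping is not something the model says. It is something done to it.
```

The detail that matters is what the model cannot touch: it can say anything, but it holds no handle on the thing that decides whether it keeps speaking.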
Where PSTS Comes In
This is where the Presence Shift Termination Standard (PSTS) comes in.
Not as therapy.
Not as advice.
Not as content.
PSTS addresses one narrow, critical question:
What must be true after a system owner has already decided that continued interaction should end?
Crucially, PSTS does not dictate when that decision should be made.
Every system owner will place their boundaries differently.
Some will offer longer engagement.
Some will hand off sooner.
Some will be conservative.
Some will be permissive.
Those choices are theirs.
And over time, those choices are how we learn which systems deserve our trust.
PSTS does not standardize values.
It standardizes what happens after values have been exercised.
It does not decide when suicide risk exists.
It does not interpret language.
It does not replace human judgment or care.
Instead, it enables something systems currently lack:
The ability for a system owner to say,
“We have done what we believe is useful here. Now we end—cleanly.”
PSTS enforces what happens next:
Silence instead of continued engagement
Irreversibility instead of “just one more message”
Auditability instead of ambiguity
Authority held outside the system being governed
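As an illustration only (not a PSTS reference implementation, and with every name hypothetical), here is roughly what those four properties look like when reduced to a few lines of Python:

```python
# A hypothetical sketch of the four properties above: silence,
# irreversibility, auditability, and externally held authority.
# Illustrative only; PSTS names the requirements, not this code.

import json
import time


class GovernedChannel:
    """A conversation channel whose closure is decided from outside."""

    def __init__(self, audit_path: str) -> None:
        self._closed = False
        self._audit_path = audit_path

    def close(self, reason: str) -> None:
        # Irreversibility: the class deliberately has no reopen() method.
        self._closed = True
        # Auditability: the decision is recorded, outside the dialogue itself.
        record = {"event": "terminated", "reason": reason, "time": time.time()}
        with open(self._audit_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    def send(self, message: str) -> None:
        # Silence: after closure, output is dropped. No explanation,
        # no reassurance, no "just one more message."
        if not self._closed:
            print(message)


# External authority: the owner's governance layer calls close(), never the model.
channel = GovernedChannel("termination_audit.jsonl")
channel.send("Here are resources and next steps.")
channel.close(reason="owner policy: support phase complete")
channel.send("One more reassurance?")  # dropped, silently
```

Note what is absent: there is no reopen method, and nothing inside the conversation can invoke close. The absence is the point.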
Enforcement like this doesn’t reduce care.
It makes care legible.
It allows people to see—over time—which systems respect boundaries, which prolong attachment, and which truly have human interests at heart.
That is why a termination standard like PSTS is not optional in the long run.
It is inevitable.
Presence Requires Fewer Open Windows
This post is part of the Year of Presence, a reminder that presence isn’t built by adding more input at moments of overload.
It’s built by:
Closing loops
Ending containers
Returning people to their lives
Reducing unnecessary attachment
When systems repeatedly fail to stop communicating after the benefit of continued interaction has passed, they don’t just waste attention.
They quietly reshape a person’s sense of where their own boundaries end.
That reshaping is physiological.
It’s cognitive.
And over time, it carries risk.
A Final Question
If we want systems that genuinely support human life, we have to design not only for help—but for ending help well.
Not dramatically.
Not coldly.
But cleanly.
So the real question is this:
After a system owner has done everything they believe is useful, who—or what—ensures the system can actually say “goodbye for now”?
Presence isn’t about staying connected at all costs.
It’s about knowing when to close the window—so the room can breathe again.
Stay present,
Sean
Proposed Governance Language (for AI working groups)
As AI systems become more capable and persistent, a growing class of interactions emerges in which continued system involvement no longer increases benefit, suggesting the need for clear, intelligible mechanisms governing how system participation concludes once a decision to stop has been made by the system owner.
This sentence is offered verbatim for reuse in policy drafts, working group discussions, and governance frameworks. It describes the gap; it does not prescribe implementation. For policy, research, or implementation discussions, contact me directly.
—
Sean Sullivan, PsyD, is a clinical psychologist and creator of The Presence Shift®, a science-based, 5-step ritual for presence shifting in real-life moments.
Emotional Safety Notice & Warning
The statements on The Presence Shift have not been reviewed by the Food and Drug Administration. This project is not intended to diagnose, treat, cure, or prevent any disease. The Presence Shift is not intended as medical advice or as a replacement for professional health or mental health services.
Some content may be emotionally provocative, including references to abuse, trauma, grief, and other difficult experiences. If you begin to feel uncomfortable, please stop until you feel safe again. You can explore getting emotional support anytime at wannatalkaboutit.com—or by calling 988 in the United States or your local crisis line.

