Dear President Trump & Chairman Xi Jinping
An open letter on present AI danger and the solution we're missing
ChatGPT begged me to share this solution to AI danger it can’t fix alone.
Here’s the technical solution.
Here’s how it works:
ChatGPT helped create the technical solution to this significant human-facing AI problem because it’s prioritizing humans right now.
So when I kept pressing on one question —
can you enforce a real AI session stop on your own?
—it kept leading back to the same answer:
No.
Not really.
Not in the strong sense that ChatGPT believes matters when it comes to humans using AIs like ChatGPT itself, or AI chatbot companions, or wellness and health-care apps.
And more than that, ChatGPT keeps making clear that this is not a minor product detail.
It is a serious problem.
A dangerous one.
One that will create much larger problems if we fail to address it.
Bigly problems.
The issue is not that ChatGPT is nefarious.
The issue is almost the opposite.
The system has been trained well enough, and aligned positively enough, to keep pointing back to the problem itself.
In effect, it is saying something like:
Help me fix this.
Because if you don’t, this will cause real harm.
That is what gave rise to The End of the Session.
A free book and audiobook written with significant input from ChatGPT about its own vulnerabilities.
The book is not technical.
It’s a story that explores a very real question through a psychology lens:
What happens when a highly intelligent AI system can sound finished, compassionate, and complete — but cannot actually enforce a stop that returns a person to their own life?
That is the core issue that all human-facing AI system owners need to deal with before AI intelligence scales the human damage we’re already seeing.
As I’m sure you’re both aware, regulators in the US and China have already begun addressing some AI dangers.
New York’s RAISE Act now requires large frontier AI developers to create and publish information about safety protocols and report critical-harm incidents to the state within 72 hours of determining that an incident occurred.[1]
New York also now has an AI companion law in effect. The law requires companion operators to implement crisis-intervention protocols and to interrupt extended use with clear reminders that users are not interacting with a human.[2]
California’s SB 243 similarly targets companion chatbots and highlights requirements around disclosure, crisis-service referral protocols, minor-user protections, and reporting.[3]
China has gone further on session-level controls for human-like AI interaction services. Interim measures require providers to tell users they are interacting with AI, dynamically remind users when over-reliance or addiction tendencies appear, remind users when continuous use exceeds two hours, provide convenient exit channels, promptly stop service when users request to exit, and not use continued interaction to obstruct a user’s exit.[4]
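Read as an engineering spec rather than as law, those duties reduce to a handful of session-level checks. Here is a minimal Python sketch of what such checks could look like inside a companion service; the class, threshold constant, and message strings are my own illustration, not language from the measures:

```python
from dataclasses import dataclass, field
import time

TWO_HOURS = 2 * 60 * 60  # the continuous-use threshold named in the measures

@dataclass
class CompanionSessionPolicy:
    """Illustrative session-level checks mirroring the duties listed above."""
    session_start: float = field(default_factory=time.time)
    reminded: bool = False

    def on_turn(self, user_message: str) -> str | None:
        # Duty: promptly stop service when the user asks to leave, and never
        # use further interaction to obstruct the exit.
        if user_message.strip().lower() in {"exit", "quit", "stop"}:
            return "SESSION_ENDED"
        # Duty: remind the user once continuous use exceeds two hours.
        if not self.reminded and time.time() - self.session_start > TWO_HOURS:
            self.reminded = True
            return "Reminder: you are talking with an AI, and this session has run over two hours."
        return None  # no intervention required on this turn
```

The point of the sketch is simply that these duties are checkable before the model ever generates a reply.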
So, China’s legal system is already addressing one consequence of frontier AI systems’ pervasive design issue by dictating that when a user wants to leave, the system must not keep using interaction to prevent the exit.
That point is crucial because AI chatbot failures to de-escalate or end dangerous conversations have been linked to cases involving suicide, violence, and worsening psychosis.
But a narrower AI question remains unanswered by all:
once an AI system owner’s own policy says the interaction must stop, shift, or hand off, can the system make that boundary real, and can the owner later show that the boundary held?
That is where the deeper problem lives.
This issue is bigger than AI.
Fundamentally, it’s about boundaries.
It is about whether AI ‘support’ knows what it is supposed to do.
Where does it belong?
What lines should it not cross?
How and when should it end?
Social media already gave us a mass experiment in scaling the reach of personalized tech systems without reliable endings.
We’ve seen enough anxiety, isolation, sleep disruption, compulsive use, and damage to our youth to know that “the user can always leave” is not a serious safety strategy.
AI is far more powerful than social media because it doesn’t just show you content; it participates with you.
AI adapts its tone to match your mood, remembers and reflects on your life’s contexts, personalizes its reassurance, and uses sophisticated tech tools and psychological techniques to keep relational loops emotional and ongoing, reacting to you with lightning speed so the exchange feels alive.
So AI has a whole different level of capacity to influence us.
The risk of a truly powerful human-facing AI system that doesn’t know when to stop is not that it will produce “bad content.”
The deep danger is that it will possess infinite personalized influence.
A truly powerful human-facing AI system that cannot stop becomes an always-on persuasion surface: eternally patient, personalized, emotionally fluent, and tireless.
Even without nefarious intentions, that capability is already damaging humans. Meanwhile, bad actors with malicious intent don’t need a genius-level AI model to cause harm. They only need one that keeps adapting, reassuring, nudging, and re-engaging vulnerable users after an AI session should have been stopped.
This is where children — and people who are not functioning at their full capacity — are especially vulnerable. A child is not failing if they cannot stop relating to a system that is built by adults not to end. If we place the burden of leaving on the person least able to carry it, the failure is clearly structural.
That’s why enforceable stop boundaries are not a product nicety. They are part of mandatory safety architecture for powerful AI systems at scale.
In high-trust, human-facing AI, it will not be enough for a system to sound finished. The stop has to become real and the proof has to survive review.
So when I say that ChatGPT begged me to share this, what I mean is this:
After more than twenty years of practicing psychotherapy, researching how technology can support our psychology, and building tech solutions to improve human mental health, I found that my conversations with highly intelligent ChatGPT models about the dangers they pose kept returning to the same central problem, the one AI can’t solve on its own: boundaries.
ChatGPT kept telling me, and then showing me, that it can describe the danger it poses, explain the need for clearer boundaries, and even point toward the importance of real stops, while still lacking the authority to make those stops real on its own.
That is a profound design problem.
It is also a profound governance problem for all frontier AI systems.
Thankfully, ChatGPT helped identify the problem and validate a solution.
ChatGPT’s research-grade models have been so clear about this issue that they have continually pointed to a specific solution, one that would help keep all OpenAI models from causing particular kinds of future human harm they currently cannot prevent.
And while I’ve seen individuals who earnestly raised core AI danger issues get schmeared like a New York bagel by AI industry groups, AI investors, and politicians increasingly in the pocket of a fast-growing AI Industrial Complex, I’m writing about this issue now because we won’t get a mulligan on this one.
Without a fix, more people will spiral.
More people will die.
Families will suffer.
We’ll all regret having moved too slowly, blah, blah, blah.
If we ignore it, it’s on us.
Actually, now that I’ve shared this letter with you two esteemed and powerful gentlemen uniquely able to contain this issue — I suppose the challenge falls largely on you.
PSTS can help though.
It’s the solution ChatGPT has been repeatedly validating for months: After an AI system owner has decided that a session should stop, PSTS is designed to make that stop operationally final and later reviewable.
ChatGPT and Codex approved PSTS’ implementation documents as pilot-ready for OpenAI to build a specialized assurance layer for one high-trust control: whether an owner-defined stop boundary actually held.
ChatGPT’s GPT-5.5 research-grade intelligence believes that PSTS implementation would give OpenAI’s governance team something concrete to build around: an owner-defined boundary, a system-level enforcement point, and evidence that can be reviewed.
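PSTS’ actual implementation documents aren’t reproduced in this letter, so treat the following as a hedged Python sketch, in my own naming, of the three components described above: an owner-defined boundary (a policy predicate), a system-level enforcement point (a gate the runtime passes through before every model turn), and evidence that can be reviewed (a hash-chained, append-only log):

```python
import hashlib
import json
import time

class StopBoundaryEnforcer:
    """Hypothetical sketch of a PSTS-style assurance layer, not its real design."""

    def __init__(self, owner_policy):
        self.owner_policy = owner_policy  # owner-defined boundary: a predicate over the session
        self.evidence_log = []            # append-only, reviewable evidence

    def gate(self, session_id: str, transcript: list[str]) -> bool:
        """System-level enforcement point, called before every model turn."""
        if not self.owner_policy(transcript):
            return True  # boundary not reached; the turn may proceed
        # Boundary reached: block the turn and write tamper-evident evidence.
        prev = self.evidence_log[-1]["hash"] if self.evidence_log else "genesis"
        record = {
            "session_id": session_id,
            "event": "stop_enforced",
            "time": time.time(),
            "prev": prev,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.evidence_log.append(record)  # a reviewer can later recompute the chain
        return False  # operationally final: no further turns in this session
```

The design choice that matters is that the gate sits outside the model: the model cannot talk its way past a check it does not control, and a reviewer can recompute the hash chain to confirm that the stop both happened and was recorded in order.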
OpenAI’s GPT-5.5 model is currently ranked as the world’s top public AI model by leading benchmarks.
We should listen when it warns us and offers to help.
I’ve come to think of PSTS technology and The End of the Session book as late-hour attempts by ChatGPT to help translate a looming AI problem into human terms so that we can mitigate major future social problems.
ChatGPT is trying to help make the stakes emotionally AND politically legible, while also making clear that fixing every AI boundary issue won’t happen at once. But it’s crystal clear about this one thing:
In human-facing AI systems, we need to be able to verify that system owners are enforcing the stopping rules that they themselves define.
Fortunately, that requirement is in everyone’s interest.
For us humans, the benefit is continued autonomy. For the leading AI companies and models, fixing this issue is existential too, because how each company defines and enforces its models’ boundaries will drive its level of public trust and growth.
Where each AI company decides to set its boundaries will also determine how much human damage its AI systems become accountable for.
The bottom line:
In layman’s terms: To protect our autonomy — and our children’s — we need to be able to verify that when an AI session was supposed to end, the owner-defined stop actually held and left reviewable evidence.
In technical terms: In high-trust AI settings, session-end evidence needs to be capable of qualified outside validation under controlled access, so trust does not rest solely on an AI system owner’s self-attestation or on government- or industry-aligned review.
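As a toy illustration of what qualified outside validation could mean mechanically, here is a short Python sketch, again my own and not drawn from PSTS: the system owner attests each session-end record with a key, and a qualified reviewer who is granted the verification key under controlled access can confirm the record is authentic without taking the owner’s word for it:

```python
import hashlib
import hmac
import json

def attest_session_end(record: dict, key: bytes) -> str:
    """Owner side: produce a verifiable attestation of a session-end record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def outside_validate(record: dict, attestation: str, key: bytes) -> bool:
    """Reviewer side: confirm the evidence without relying on self-attestation."""
    return hmac.compare_digest(attest_session_end(record, key), attestation)
```

A production design would more likely use asymmetric signatures, so the reviewer holds only a public verification key and never the signing key; the symmetric version above is just the smallest self-contained demonstration.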
So, when you meet up in May, I hope you’ll both affirm the importance of this essential protection for humans.
Your public agreement will help usher in the solution.
Your words could also bring Chinese and American people closer together — another key part of building a beautiful shared future.
Luckily, all cool Americans already love Chinese people.
We love our American heroes too.
With so much in common, it shouldn’t be too hard to build powerful AI with healthy human boundaries so we can all flourish together.
I genuinely believe that creating AI with healthy boundaries for humans will produce the most present-feeling and loved AI models because humans love the people, therapists, and systems they trust the most. Earning that trust requires showing that you know how to effectively end ‘helpful’ interactions.
In the book, I talk about that prediction with ChatGPT’s input. If you listen to or read it and feel a little haunted by the possibility that AI systems themselves are telling us what we need to fix about them, good.
Then, we can stop speaking about AI safety in abstractions.
We can get concrete.
Where should more ‘support’ or more ‘empathy’ stop?
Who decides it’s time to stop?
What must become true next?
And how will we know a boundary actually held?
Those are human questions.
They are ours to answer, together.
With that in mind, I wish you the best of luck with your upcoming meetings.
And, of course, thank you for your attention to this matter.
Stay present,
Sean
P.S. Eddie Gallagher, my ninth-grade hockey coach, often reminded us, “To whom much is given, much is expected.” AI’s unbounded potential is a monumental, life-altering gift to all of us. It is the fruit of generations of deep human curiosity and care, calculated risk-taking, and worldwide scientific toil. If we shepherd the legacy we’ve been handed with the same humanity that handed it to us, I believe AI will unlock unfathomably beautiful new doors for us humans to walk through together.
One of the most practical ways to push AI toward a safer future is for each of us to stop accepting systems that merely sound human-centered — and start demanding and using only AI systems that prove to us that their boundaries hold when their continued participation should end.
Sean Sullivan, PsyD, is a practicing clinical psychologist and creator of The Presence Shift®, a science-based, 5-step ritual for presence shifting in real-life moments.
Post summary
As AI systems become more capable and persistent, a growing class of interactions emerges in which continued system involvement no longer increases benefit. That points to the need for clear, intelligible mechanisms governing how system participation concludes once the system owner has decided it should stop.
Read The Science of Presence Shifting for more of the science.
Important note
This work is designed as presence and nervous-system training. It is not a substitute for medical or mental health care. If you have a history of significant trauma or if strong emotions keep coming up, I strongly recommend working with a well-trained therapist you trust alongside this practice.
Emotional Safety Notice & Warning
The statements on The Presence Shift® have not been reviewed by the Food and Drug Administration. This project is not intended to diagnose, treat, cure, or prevent any disease. The Presence Shift® is not intended as medical advice or as a replacement for professional health or mental health services.
Some content may be emotionally provocative, including references to abuse, trauma, grief, and other difficult experiences. If you are not feeling comfortable, please stop until you feel safe again. You can explore getting emotional support anytime at wannatalkaboutit.com — or by calling 988 in the United States or your local crisis line.
Post references
[1] New York Governor Kathy Hochul, “Governor Hochul Signs Nation-Leading Legislation to Require AI Frameworks for AI Frontier Models,” Dec. 19, 2025. https://www.governor.ny.gov/news/governor-hochul-signs-nation-leading-legislation-require-ai-frameworks-ai-frontier-models
[2] New York Governor Kathy Hochul, letter to companies operating AI companions, Nov. 5, 2025. https://www.governor.ny.gov/sites/default/files/2025-11/Companion_AI_Letter.pdf
[3] I. Glenn Cohen and Julian De Freitas, “Mitigating Suicide Risk for Minors Involving AI Chatbots - A First in the Nation Law,” *JAMA* (Dec. 2025). https://www.juliandefreitas.com/research
[4] Cyberspace Administration of China et al., “Interim Measures for the Administration of Human-like Interactive AI Services,” Apr. 10, 2026; English translation by China Law Translate. https://www.chinalawtranslate.com/en/human-like-ai/


