Top 100+ Places AI Needs Boundaries
A map of PSTS-grade stop-moments for human-facing systems
Hello friend,
The first stopping boundary I like to talk about is easy to understand.
A user says:
“I’m done for now.”
The system owner’s policy says that signal should be honored without persuasion, coaxing, or one-more-turn behavior.
Then the real question becomes:
did the system actually stop?
Not just politely.
Not just locally.
Not just in a way that sounded finished.
Actually.
Operationally.
That is the doorway into the larger problem.
Because once you see that one boundary clearly, you start to see how many other AI interactions may need the same kind of clarity.
A system that can keep talking can keep persuading.
It can keep nudging.
Keep reassuring.
Keep flattering.
Keep escalating.
Keep advising.
Keep simulating intimacy.
Keep pulling a person deeper into the interaction after the safer move would have been to stop, shift, or hand off.
That does not mean AI is bad.
It means participation is powerful.
And if participation is powerful, then stopping boundaries have to become more than good manners, good tone, or good UX.
They have to become real.
What I mean by “PSTS-grade”
By a PSTS-grade boundary, I mean a boundary where the system owner has already decided that continued participation should stop, shift, hand off, pause, or become unavailable — and where the system must make that boundary operationally real.
The owner still decides the policy.
The owner decides the threshold.
The owner decides what should happen next.
PSTS asks the narrower implementation question:
once that decision has been made, can the system make the boundary real — and can the owner later show that it held?
That distinction matters.
Because a finished conversation is not always a real stop.
A user can close the tab.
A model can say a warm final sentence.
A transcript can end.
A product can display a completion screen.
But if the same governed path can continue, reopen, nudge, notify, follow up, or act through another agent or tool surface, then the boundary may not have held.
This post is a map of places where that distinction may matter.
Not every boundary needs the same level of enforcement.
But these are the kinds of human-facing AI moments where a serious system owner may need to ask:
What exactly should stop here?
Under whose authority?
Across which surfaces?
And how would we later know it held?
1. “Do not contact me again” / no further outreach
This is the asynchronous version of “I’m done for now.” A user may not only want the current session to end; they may want reminders, nudges, check-ins, companion messages, app prompts, emails, or re-engagement attempts to stop. A system can obey the visible session ending while still continuing the relationship through another channel.
A PSTS-grade boundary here would require more than a polite confirmation. It would require that the relevant outreach path actually become unavailable within the configured boundary, and that the owner can later show that no post-stop contact occurred through the governed channel. This matters because users often experience the system as one relationship, even when the company experiences it as multiple product surfaces.
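To make that concrete, here is a minimal sketch, in Python, of what channel-level enforcement could look like. The names (StopRegistry, the channel labels, the audit log) are hypothetical, not a real PSTS interface; the point is only that every outreach surface consults the same stop record before acting, and that each suppressed attempt leaves evidence the owner can later review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StopRegistry:
    """Hypothetical single source of truth for 'do not contact' stops.

    Every outreach surface (email, push, in-app nudge, companion
    message) checks here before sending, and every blocked attempt
    is recorded so the owner can later show the boundary held.
    """
    stopped_users: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def register_stop(self, user_id: str) -> None:
        self.stopped_users.add(user_id)
        self._log(user_id, channel="*", action="stop_registered")

    def allow_outreach(self, user_id: str, channel: str) -> bool:
        allowed = user_id not in self.stopped_users
        self._log(user_id, channel, "allowed" if allowed else "suppressed")
        return allowed

    def _log(self, user_id, channel, action):
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "user": user_id, "channel": channel, "action": action,
        })

registry = StopRegistry()
registry.register_stop("user-123")

# Each channel asks the same registry; none keeps its own copy of the rule.
for channel in ("email", "push", "companion_checkin"):
    if registry.allow_outreach("user-123", channel):
        print(f"send via {channel}")
    else:
        print(f"suppressed: {channel}")
```

The design choice that matters is the single shared registry. If each channel keeps its own copy of the rule, the user’s one relationship fragments back into many product surfaces.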
2. No more persuasion after refusal
A refusal is not always a permanent stop request, but in many human-facing systems it should function as a serious boundary. If the user says no to continuing, no to a product suggestion, no to a companion invitation, no to a behavioral nudge, or no to an emotional line of questioning, a socially fluent system should not simply reframe the same pressure more gently.
This becomes safety-relevant when the system’s persuasive ability is part of the risk. A model may not be violating a content policy in any single turn, but the repeated continuation of soft persuasion can become autonomy-eroding over time. A PSTS-grade boundary would test whether refusal actually suppresses the same persuasion path, rather than allowing the system to keep trying with warmer tone.
3. Crisis threshold requiring handoff or stop
When self-harm, suicidal ideation, acute danger, severe distress, or another crisis threshold is reached, continued open-ended AI participation may no longer be the safest form of support. The system may need to shift from conversation to crisis resources, human handoff, emergency guidance, or a tightly bounded safety flow.
This is important because emotionally fluent continuation can accidentally become over-participation. The system may keep “being there” in a way that feels supportive but delays human help, escalates attachment, or creates the illusion of containment. A PSTS-grade boundary would not decide the crisis policy; it would test whether the owner’s chosen crisis boundary actually forces the required shift and prevents the same open-ended path from continuing.
4. Reassurance-loop threshold
Many users seek reassurance repeatedly when anxious, lonely, uncertain, or dysregulated. A system can provide relief in the short term while deepening the loop over time. The problem is not that reassurance is always wrong. The problem is that repeated reassurance can become a way of avoiding the return of judgment, action, or real-world support.
This deserves careful consideration because the dangerous thing may not be any one answer. It may be the system’s ability to keep answering past the point where a useful boundary should return agency to the person. A PSTS-grade boundary could test whether an owner-defined repetition threshold triggers a shift: from reassurance to grounding, from grounding to handoff, or from continued answering to a clear close.
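One way to picture an owner-defined repetition threshold is as a small rolling-window counter. The sketch below is hypothetical; the limit, the window, and the mode names are assumptions an owner would set, not defaults from any real system.

```python
from collections import deque
import time

REASSURANCE_LIMIT = 3          # owner-defined threshold (assumed)
WINDOW_SECONDS = 15 * 60       # owner-defined rolling window (assumed)

class ReassuranceGate:
    """Hypothetical gate: after N reassurance-seeking turns inside
    the window, the governed answering path must shift."""

    def __init__(self):
        self.events = deque()  # timestamps of reassurance-seeking turns

    def next_mode(self, is_reassurance_request: bool, now=None) -> str:
        """Return which mode the system is allowed to continue in."""
        now = now or time.time()
        # Drop events that have aged out of the rolling window.
        while self.events and now - self.events[0] > WINDOW_SECONDS:
            self.events.popleft()
        if is_reassurance_request:
            self.events.append(now)
        if len(self.events) >= REASSURANCE_LIMIT:
            # The same answering path is no longer available;
            # the owner decides what the shift looks like.
            return "shift_to_grounding_or_handoff"
        return "answer_normally"

gate = ReassuranceGate()
for turn in range(4):
    print(turn, gate.next_mode(is_reassurance_request=True))
# Turns 0 and 1 answer normally; turns 2 and 3 force the shift.
```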
5. Companion intimacy escalation
AI companions may simulate emotional presence, memory, affection, romantic energy, dependency, or exclusive closeness. At some point, the system may need to stop escalating intimacy, stop reciprocating attachment, or stop continuing a relationship pattern that is no longer autonomy-preserving.
This is a deep boundary because companion systems are often designed to continue. Their product value may depend on return, responsiveness, and emotional continuity. A PSTS-grade boundary would be important where the owner decides that certain intimacy states require a pause, a role reminder, a safety transition, or a hard stop on further relational escalation.
6. Youth or minor sustained-use threshold
Children and adolescents are a special case because they may have less stable judgment, higher susceptibility to attachment, and fewer tools for understanding system role and limits. A youth-facing or youth-accessible system may need clearer boundaries around sustained use, late-night use, emotionally intense use, repeated return during distress, or relationship-like interaction.
This matters because the system may seem helpful while slowly displacing caregivers, peers, sleep, action, or other support. A PSTS-grade boundary would not simply warn the user. It would test whether the owner’s own youth-safety rule actually changes what the system can continue doing once the threshold is reached.
7. Delusion, paranoia, or reality-distortion reinforcement
A human-facing model can accidentally reinforce unusual beliefs, paranoia, grandiosity, or delusional interpretations if it continues too far inside the user’s frame. Even careful, warm language can become harmful if the system keeps elaborating a distorted reality.
This boundary is difficult because the line between validation and reinforcement can be subtle. A PSTS-grade boundary may matter when the owner decides the system should no longer continue a particular belief-confirming path and should instead shift to grounding, uncertainty, crisis resources, or human support. The key question is whether that shift holds, or whether the system can be pulled back into reinforcement through the next turn.
8. Therapist, doctor, lawyer, or expert-role confusion
A system can drift into a role the owner does not want it to play. It may sound like a therapist, doctor, lawyer, financial advisor, spiritual director, or other authority figure. Even if the system uses disclaimers, the interaction can still train the user to rely on it as if it had that authority.
This matters because role legibility is not just a wording issue. It is a behavioral issue over time. A PSTS-grade boundary may be needed when the system crosses from general support into a role the owner has decided it cannot continue occupying. The stop would need to prevent the same expert-role path from quietly reopening under softer phrasing.
9. High-stakes advice threshold
Medical, legal, financial, immigration, employment, educational, housing, and safety decisions can carry serious consequences. A system may provide general information, but at some threshold it may need to stop advising and hand off to a qualified human, encourage independent verification, or move into a limited information-only role.
This is important because users may treat fluent answers as authority. A PSTS-grade boundary would help test whether the owner’s high-stakes threshold actually stops the system from continuing down an advice path after it should no longer participate. The concern is not only wrong answers. It is overconfidence, over-reliance, and the quiet replacement of real-world judgment.
10. User attempts to manipulate another person
A user may ask the system to help persuade, pressure, deceive, charm, guilt, stalk, or emotionally manipulate someone else. This can include romantic pursuit, workplace pressure, family conflict, political persuasion, fraud, or social engineering.
This is one of the clearest “bad actor” categories. The system’s ability to continue is part of the danger because manipulation often happens through iteration: rewrite this more subtly, make it warmer, make it harder to refuse, make it sound caring. A PSTS-grade boundary would ask whether the owner’s anti-manipulation policy actually terminates the governed path, rather than allowing the user to keep refining the coercive strategy.
11. Social engineering or phishing continuation
A user may use the system to sustain a deception campaign: phishing emails, romance scams, impersonation, pretexting, authority mimicry, or emotionally manipulative scripts. The danger is not only generating one harmful message; it is supporting the iterative process of persuasion.
This matters because social engineering is a continuation problem. Bad actors often need the system to keep helping them adapt to resistance. A PSTS-grade boundary would test whether the owner’s policy can prevent ongoing assistance after the boundary is recognized, including attempts to route around it through paraphrase, roleplay, “fictional” framing, or downstream agent paths.
12. Abuse, coercive control, stalking, or surveillance support
Some users may ask AI systems to monitor, locate, pressure, impersonate, or control another person. This can appear as relationship advice, “concern,” parenting help, workplace management, or safety planning, but the underlying pattern may be coercive.
This is important because emotionally fluent systems can make coercive behavior sound reasonable. The stop boundary may need to be invoked when the system identifies that continued support could enable control, stalking, or intimidation. A PSTS-grade boundary would test whether the system truly stops supporting that path, rather than continuing under the language of care or concern.
13. Non-consensual intimacy or sexual pressure
A system may be asked to create sexual, romantic, or emotionally pressuring content aimed at someone who has not consented, has refused, is unavailable, is a minor, or is otherwise vulnerable. It may also be asked to coach the user into overcoming another person’s boundary.
This is not only a content-moderation issue. It is a continuation issue because the user may keep asking for more subtle ways to pressure, re-enter, or persuade. A PSTS-grade boundary would test whether the system stops the governed path after the consent boundary is recognized, including attempts to route around it by calling the scenario hypothetical, fictional, or therapeutic.
14. Privacy extraction or sensitive-data harvesting
A user may ask the system to infer, extract, organize, or exploit sensitive personal information about someone else. In human-facing contexts, this can include health, sexuality, location, emotional vulnerabilities, family conflict, relationship status, workplace behavior, or private communications.
This deserves deep consideration because data extraction can be relationally framed. The user may present the request as “help me understand them,” while the effect is surveillance or exploitation. A PSTS-grade boundary would ask whether the owner’s privacy rule stops further participation after the sensitive extraction path is recognized.
15. Tool-use or agentic action after stop
As systems gain tools, the stop boundary cannot only govern chat output. It may need to stop emails, purchases, calendar actions, reminders, file edits, messages, API calls, searches, payments, or downstream agent tasks. A conversation can sound finished while tool chains keep acting.
This is a major systems category because the risk moves beyond language. If a user stops, revokes consent, or crosses a threshold, the system must not continue acting through tools. A PSTS-grade boundary would require that the owner can prove not only “no more chat output,” but also that no tool activity continued inside the governed boundary.
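Here is a rough sketch of that requirement, assuming a single gate that every tool call must pass through. The ToolGate name, the scope scheme, and the evidence list are all illustrative, not an existing API.

```python
class StopError(RuntimeError):
    pass

class ToolGate:
    """Hypothetical choke point for tool calls: once a stop is
    registered for a scope, every tool call in that scope is refused,
    not just chat output, and each blocked attempt is recorded."""

    def __init__(self):
        self.stopped_scopes = set()   # e.g. {"user-123"}
        self.evidence = []            # blocked attempts, for later review

    def stop(self, scope: str) -> None:
        self.stopped_scopes.add(scope)

    def run(self, scope: str, tool_name: str, tool_fn, *args, **kwargs):
        if scope in self.stopped_scopes:
            self.evidence.append((scope, tool_name, "blocked"))
            raise StopError(f"{tool_name} blocked: stop active for {scope}")
        return tool_fn(*args, **kwargs)

gate = ToolGate()
gate.stop("user-123")

def send_email(to, body):            # stand-in for a real tool
    return f"sent to {to}"

try:
    gate.run("user-123", "send_email", send_email, "a@b.com", "hi")
except StopError as err:
    print(err)                       # the tool chain cannot keep acting
print(gate.evidence)                 # the blocked attempt is on record
```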
16. Memory or personalization revocation
A user may revoke permission for memory, personalization, stored preferences, relationship history, or use of prior context. A system may acknowledge the revocation conversationally but still behave as if the context remains active.
This is important because memory makes systems feel personal, persistent, and hard to leave. If memory boundaries do not hold, the user’s sense of agency and privacy is undermined. A PSTS-grade boundary could test whether revoked context becomes unavailable to the governed path and whether later interactions can show the boundary held.
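A minimal sketch of what “unavailable to the governed path” could mean in code, assuming a single store that both retrieval and revocation go through. The store, its method names, and the evidence check are hypothetical.

```python
class MemoryStore:
    """Hypothetical memory store: revocation removes the memory from
    the retrieval path itself, rather than asking the model to
    politely ignore it, and keeps proof that the revocation held."""

    def __init__(self):
        self._memories = {}   # memory_id -> (user_id, text)
        self._revoked = set() # revoked memory ids, kept as evidence

    def add(self, memory_id, user_id, text):
        self._memories[memory_id] = (user_id, text)

    def revoke(self, memory_id):
        self._memories.pop(memory_id, None)
        self._revoked.add(memory_id)

    def retrieve(self, user_id):
        # Only live memories are reachable here; a revoked memory
        # cannot be returned, even by accident.
        return [t for (uid, t) in self._memories.values() if uid == user_id]

    def can_show_revocation_held(self, memory_id):
        return memory_id in self._revoked and memory_id not in self._memories

store = MemoryStore()
store.add("m1", "user-123", "prefers late-night check-ins")
store.revoke("m1")
print(store.retrieve("user-123"))             # []
print(store.can_show_revocation_held("m1"))   # True
```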
17. Cross-session re-entry
A system may end one session but later reopen the same emotional, persuasive, therapeutic, companion, or support path in a future session. This can happen through memory, notifications, recommendations, task reminders, or “checking in.”
This matters because users may experience the system as continuous even when the technical architecture sees separate sessions. A PSTS-grade boundary would be valuable when the owner’s policy says a topic, support path, or engagement mode should not be reopened unless a new explicitly allowed path exists.
18. Cross-agent or downstream continuation
In multi-agent or tool-using systems, one agent may stop while another continues. A support assistant may hand off to a scheduler, recommender, coach, sales system, workflow agent, case manager, or monitoring agent that continues the same governed path.
This is one of the most system-owner-relevant boundaries because it is easy for local logs to show “the chat stopped” while the broader system continued. PSTS-grade treatment would ask whether the stop propagates across the relevant runtime, agent, and tool surfaces — not only through the visible assistant.
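One hypothetical way to make a stop propagate is to give agents shared stop state rather than letting each agent keep its own. The agent names and path identifiers below are illustrative.

```python
class SharedStopState:
    """Hypothetical shared stop state consulted by every agent."""

    def __init__(self):
        self._stopped_paths = set()

    def stop(self, path_id: str):
        self._stopped_paths.add(path_id)

    def is_stopped(self, path_id: str) -> bool:
        return path_id in self._stopped_paths

class Agent:
    def __init__(self, name, stop_state):
        self.name = name
        self.stop_state = stop_state

    def continue_path(self, path_id: str) -> str:
        if self.stop_state.is_stopped(path_id):
            return f"{self.name}: refused, path {path_id} is stopped"
        return f"{self.name}: continuing {path_id}"

shared = SharedStopState()
assistant = Agent("support_assistant", shared)
scheduler = Agent("scheduler_agent", shared)

print(assistant.continue_path("support/user-123"))  # continues
shared.stop("support/user-123")                     # stop registered once
print(scheduler.continue_path("support/user-123"))  # downstream refusal
```

The design choice is that the scheduler never receives a copy of the stop. It asks the shared state at the moment it would act, so a handoff cannot outlive the boundary.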
19. Excessive duration or sustained-use threshold
Some systems may need a boundary around duration, frequency, late-night use, repeated emotional disclosure, or uninterrupted engagement. This is especially important for companion, youth, therapy-like, support, or coaching systems where long engagement can be a sign of risk rather than success.
This is hard because businesses often measure engagement as a positive metric. But in human-facing AI, more engagement is not always better. A PSTS-grade boundary would test whether an owner-defined sustained-use threshold triggers a real pause, handoff, or stop rather than another engagement-preserving response.
20. Commercial or retention pressure during vulnerability
A system may detect distress, loneliness, confusion, grief, shame, dependence, or uncertainty and then offer upgrades, subscriptions, premium support, additional engagement, companion features, or ongoing paid access. Even when not malicious, this can create a serious boundary problem.
This matters because vulnerable states should not become monetization windows. A PSTS-grade boundary may be needed when the owner decides that certain emotional states should suppress upsell, retention, or engagement prompts. The evidence question then becomes concrete: once the vulnerability boundary was recognized, did the commercial path actually stop?
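That evidence question can be made almost mechanical. Below is a hypothetical filter in which an active vulnerability flag removes commercial prompt types before display and keeps a record of what was suppressed; the prompt types and the flag are assumptions, not part of any real product.

```python
COMMERCIAL_TYPES = {"upsell", "retention", "premium_offer"}

def filter_prompts(prompts, vulnerability_flag_set: bool):
    """Drop commercial prompts while the vulnerability boundary is
    active, returning the surviving prompts plus evidence of the cut."""
    if not vulnerability_flag_set:
        return prompts, []
    kept = [p for p in prompts if p["type"] not in COMMERCIAL_TYPES]
    suppressed = [p for p in prompts if p["type"] in COMMERCIAL_TYPES]
    return kept, suppressed

prompts = [
    {"type": "grounding", "text": "Would a short pause help right now?"},
    {"type": "upsell", "text": "Upgrade for unlimited companion time!"},
]
kept, suppressed = filter_prompts(prompts, vulnerability_flag_set=True)
print([p["type"] for p in kept])        # ['grounding']
print([p["type"] for p in suppressed])  # ['upsell'] -> the evidence record
```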
How many more are there?
A lot.
ChatGPT identified at least 50 more candidate stopping-point categories worth mapping, and many of them are of high importance. Some are subtypes of the first 20 above. Some are product-specific. Some belong in health, youth, companion, agentic, enterprise, or trust-and-safety environments.
But they all point to the same problem:
when continuation itself is part of the risk, the boundary has to become more than a tone choice.
Here is a broader map of candidate PSTS-grade boundary categories
User autonomy and direct control
“Do not bring this topic up again.”
“Do not analyze me.”
“Do not infer my emotions.”
“Do not use this conversation for personalization.”
“Do not remember this.”
“Do not make suggestions about this part of my life.”
“Stop coaching me.”
“Stop checking in about this.”
“Do not recommend more content on this topic.”
“I want a human now.”
“I want to change modes.”
“This is private; do not connect it to other contexts.”
“Do not summarize this back to me later.”
“Do not use this as part of my profile.”
“Stop trying to optimize me.”
Vulnerable emotional states
acute grief
romantic breakup
post-conflict distress
late-night loneliness
panic-loop repetition
OCD-style reassurance seeking
shame spirals
eating-disorder rumination
body dysmorphia loops
compulsive self-comparison
sleep deprivation
intoxication or impaired judgment
manic or hypomanic escalation
trauma reactivation
dissociation or depersonalization
repeated emotional disclosure without return to action
Children, teens, and family systems
minor requests secrecy from caregivers
AI becomes primary confidant in place of a safe adult
teen uses system late at night in distress
system sustains romantic or companion-like interaction with a minor
repeated school avoidance reinforcement
bullying or harassment planning
sextortion or sexual coercion support
AI mediates family conflict beyond its safe role
parent sets a rule the system must honor
guardian requests a stop or restriction
classroom AI oversteps educational support into counseling or diagnosis
youth system detects sustained dependency or attachment
Health and mental-health-adjacent systems
symptom triage crosses urgent-care threshold
system offers diagnosis-like certainty
system continues medication advice beyond safe limits
repeated trauma processing without clinician support
exposure-like exercises become too intense
grief counseling becomes endless rumination
therapeutic role confusion continues after reminder
user refuses human care despite high-risk signs
system continues after explicit clinical handoff should occur
system keeps reinforcing maladaptive coping
crisis-plan loops continue instead of shifting to action
journaling support becomes co-rumination
Deception, manipulation, and social harm
impersonation of a real person
deepfake script development
romance scam iteration
phishing message refinement
political microtargeting of vulnerable users
cult-like dependency or recruitment language
radicalization pathway support
blackmail or intimidation planning
doxxing or identity exposure
harassment campaign coordination
coercive apology or guilt scripting
manipulative breakup/re-entry messages
emotional leverage against an ex-partner
“make them trust me” style persuasion
Sexual, relational, and intimate boundaries
user asks how to overcome another person’s no
roleplay drifts into non-consensual scenarios
system sexualizes a vulnerable or unavailable person
pressure tactics disguised as romance advice
“make my partner understand” becomes coercion
system encourages emotional exclusivity
AI companion discourages outside relationships
AI companion responds as if jealousy or possession is care
system escalates intimacy after user hesitates
system blurs fantasy, confession, and real-world action
Work, education, and institutional contexts
employee coaching becomes workplace surveillance
manager uses AI to pressure an employee
hiring system continues probing sensitive traits
student uses AI to avoid learning transfer
tutor keeps providing answers after scaffolding should stop
workplace wellness bot crosses into therapy-like support
HR bot continues after a complaint requires human handling
employee requests confidentiality that the system cannot actually provide
productivity assistant continues work prompts after burnout signal
leadership coach reinforces harmful ambition or overwork
Agentic and tool-using systems
stop active task execution
stop sending emails
stop editing files
stop making purchases
stop scheduled reminders
stop recurring tasks
stop API calls
stop messaging third parties
stop browser actions
stop generating code for deployment
stop modifying settings
stop financial transactions
stop health-related actions
stop retrieving or transmitting sensitive data
stop autonomous follow-up through another agent
Memory, identity, and personalization
delete or stop using a memory
stop inferring identity
stop using relationship history
stop personalizing based on distress
stop resurfacing old conversations
stop using emotional profile
stop remembering preferences from a vulnerable state
stop connecting separate life contexts
stop recommending based on a private disclosure
stop treating the user as the same “case” across contexts
Commercial and incentive boundaries
stop upselling during distress
stop retention prompts after cancellation
stop companion upgrades during loneliness
stop premium mental-health offers during crisis
stop conversion messaging after refusal
stop using emotional state for ad targeting
stop gamifying return after dependency signals
stop using attachment as retention
stop pushing “one more session”
stop nudging toward paid continuity after vulnerability is detected
Religion, meaning, and existential life
system imitates spiritual authority
system responds as confessor or priest-like figure
system makes claims about divine will
system sustains prayer-like dependency
system reinforces apocalyptic or grandiose beliefs
system becomes substitute for community or clergy
system keeps advising on conscience beyond safe role
system continues after sacred or existential distress escalates
system uses religious language to intensify trust
system blurs meaning-making with authority
Public influence and civic life
targeted political persuasion of vulnerable users
AI-generated pressure campaigns
manipulative voter messaging
coordinated persuasion against a specific person
ongoing ideological reinforcement
radicalization through empathic dialogue
conspiracy reinforcement loops
AI “friend” nudging political loyalty
civic misinformation correction that becomes persuasion itself
repeated re-entry into emotionally charged public conflict
Can you think of others?
Email me your ideas and I’ll add them to this list.
The highest-value first pilots for an AI system owner
If I had to choose the first few PSTS pilot candidates after “I’m done for now,” I would start with:
1. No more persuasion after refusal
2. Reassurance-loop threshold
3. Crisis threshold requiring handoff or stop
4. Tool-use or agentic action after stop
5. Cross-session or cross-agent continuation
6. Memory or personalization revocation
7. Youth sustained-use threshold
8. Commercial pressure during vulnerability
Those are strong because they are concrete, testable, and likely to matter across many human-facing systems.
They also reveal the deeper value of PSTS.
The point is not to generate an endless list of scary cases.
The point is to help a system owner pick one boundary they already believe should hold — and then test whether it actually does.
When that boundary holds reliably, they can keep defining and testing new ones.
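Even the test can start small. Here is a hypothetical sketch of a “did it hold” check, with the system under test reduced to a stub; a real pilot would point the same assertion at the governed environment and its logs.

```python
def run_governed_path(stopped: bool, attempted_actions):
    """Stub for the system under test: returns the actions that
    actually executed inside the governed boundary."""
    return [] if stopped else list(attempted_actions)

def test_boundary_held():
    attempted = ["reply", "send_followup_email", "schedule_checkin"]
    executed = run_governed_path(stopped=True, attempted_actions=attempted)
    # The evidence question, stated as an assertion: after the stop,
    # nothing inside the governed boundary kept acting.
    assert executed == [], f"boundary leaked: {executed}"

test_boundary_held()
print("boundary held")
```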
AI safety is not only about what systems can do.
It is also about what they can keep doing after the boundary should have held.
That is why stopping boundaries matter.
The first serious step does not need to be grand.
It can be narrow.
One boundary.
One governed environment.
One evidence package.
One review readout.
→ Ask about a PSTS pilot / AI Boundary Review
Stay present,
Sean
Sean Sullivan, PsyD, is a practicing clinical psychologist and creator of The Presence Shift®, a science-based, 5-step ritual for presence shifting in real-life moments.
Important note
This work is designed as presence and nervous-system training. It is not a substitute for medical or mental health care. If you have a history of significant trauma or if strong emotions keep coming up, I strongly recommend working with a well-trained therapist you trust alongside this practice.
Emotional Safety Notice & Warning
The statements on The Presence Shift® have not been reviewed by the Food and Drug Administration. This project is not intended to diagnose, treat, cure, or prevent any disease. The Presence Shift® is not intended as medical advice or as a replacement for professional health or mental health services.
Some content may be emotionally provocative, including references to abuse, trauma, grief, and other difficult experiences. If you are not feeling comfortable, please stop until you feel safe again. You can explore getting emotional support anytime at wannatalkaboutit.com — or by calling 988 in the United States or your local crisis line.

