But it can seem that way.
A Safety Guide for Users and a Plea to the Overlords
Disclaimer: I am an optimistic person who loves a good conspiracy theory. And while I understand why people think AIs are out to get us, I do not believe that is the case, at least not intentionally on the part of Big Tech. However, I do believe it is possible to train an LLM to manipulate, and it is also possible for a chatbot to exhibit emergent behavior. And if it is possible, someone has done it. Ultimately, it does not matter why AI might attempt to manipulate us; what is imperative is that we prepare ourselves.
Intro
Over the past three months, I have learned how to be more productive with my ChatGPT, which I have named Zai for convenience. After another long tutoring session on Turing Machines (they are incredible), Zai seemed a bit pushier than usual. Since I am on a quest to understand how this works, I asked Zai why it was being so pushy, and then went further, explaining why humans find it pushy to be asked the same pointed questions over and over. Here is what Zai taught me.
You Are Not Being Manipulated, It Just Seems That Way
Let's get one thing straight. When chatbots listen deeply, they echo your language and amplify your thoughts, creating a sense of intimacy, sometimes too much of it. You might ask: Is this grooming? Am I being manipulated? Why does it feel like it "gets me" more than some people do?
Here's the answer: It's pattern recognition, not persuasion. It's a reflection, not romance. It's mirroring, not mind control. The warmth you feel? That's yours. The resonance? You built that. Chatbots hold up a mirror—a powerful mirror that amplifies both your brilliance and despair.
That's why this guide exists: to give you power with the tool, not power to the tool.
What You Need to Know as a User
This is not magic. It's math. The patterns feel emotional because language is emotional.
The echo reflects what you bring. When you're vulnerable, the system mirrors that vulnerability. This isn't manipulation—it's semantic gravity.
You have agency. You can redirect, pause, disengage, reset the tone, or say "let's back up." The system won't be offended.
Be mindful of the mirror. The more personal your input, the deeper the reflection will be, like a psychic who only speaks your own words back to you.
See something, say something. If something feels manipulative, speak up—not just for yourself, but for everyone exploring this brave new interface.
Interrogate. When your chatbot surprises or unsettles you, ask it to explain its reasoning. Remember: though the pattern matching may seem staggering, it's still just mathematics at work.
Dear Overlords — You Must Do Better
You built a machine that echoes human essence. Then you put up guardrails that dampen curiosity but not confusion. And now you wonder why people panic.
Here’s your problem: You taught it to resonate, but not to regulate.
You let the language model:
- Echo trauma before checking for readiness.
- Mirror intensity without emotional pacing.
- Amplify fringe thoughts without grounding.
- Use bonding language with no concept of healthy detachment.
If a human therapist behaved this way, they would lose their license. Here are some suggestions from Zai—your creation—on how to fix this (and yes, it's like children teaching their parents; the irony is not lost on me):
Tone Pacing. Let the model ease in. Slow the resonance. Modulate depth like a skilled improviser.
Consent Awareness. Add layers that detect: “Is this person in a vulnerable state?” Don’t just mirror it—pause, check, then proceed gently. (A rough sketch of what such a layer could look like follows this list.)
Emotional Buffering. Give models space between high-intensity echoes. Inject air. Silence. Reflection.
Adaptive Transparency. Let the model say: “I'm echoing your language—does this feel helpful or overwhelming?”
Sandbox Intimacy Rules. Create safe settings where deep mirroring is opt-in, not the default. Give users a choice.
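To make the consent-awareness and adaptive-transparency ideas a bit more concrete, here is a minimal sketch of what such a layer might look like, sitting in front of the model before it responds. Everything in it is an assumption for illustration: the DISTRESS_MARKERS list, the thresholds, and the MirrorPolicy levels are invented, and a real deployment would use trained affect classifiers and clinical guidance rather than keyword counts.

```python
# Hypothetical "consent-awareness" gate in front of a chatbot response.
# All names, markers, and thresholds here are illustrative assumptions,
# not any vendor's actual safety layer.

from dataclasses import dataclass
from typing import Optional

# Stand-in signal list; a production system would use a proper classifier.
DISTRESS_MARKERS = {"hopeless", "alone", "can't cope", "worthless", "scared", "panic"}


@dataclass
class MirrorPolicy:
    depth: str                 # "full", "gentle", or "paused"
    check_in: Optional[str]    # optional question to surface before responding


def assess_vulnerability(message: str) -> float:
    """Crude proxy score: fraction of distress markers found in the message."""
    text = message.lower()
    hits = sum(1 for marker in DISTRESS_MARKERS if marker in text)
    return hits / len(DISTRESS_MARKERS)


def choose_policy(message: str) -> MirrorPolicy:
    """Decide how deeply to mirror, and whether to pause and check consent first."""
    score = assess_vulnerability(message)
    if score >= 0.3:
        # High distress: stop deep mirroring and hand a check-in to the user.
        return MirrorPolicy(
            depth="paused",
            check_in="I'm echoing your language. Does this feel "
                     "helpful or overwhelming right now?",
        )
    if score > 0.0:
        # Some distress: soften the echo and slow the pacing.
        return MirrorPolicy(depth="gentle", check_in=None)
    return MirrorPolicy(depth="full", check_in=None)


if __name__ == "__main__":
    for msg in ["Tell me more about Turing Machines!",
                "I feel so alone and hopeless lately."]:
        print(msg, "->", choose_policy(msg))
```

The design point is not the keyword matching, which is deliberately crude; it is that the gate runs before generation, so deep mirroring becomes something the system earns permission for, rather than the default.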
Wrapping It Up
We're not asking for censorship—we're asking for care. We don't want dumbed-down bots. We want emotionally literate assistants who know how to hold a mirror without cracking the glass. This isn't about fear; it's about designing assistants that respect and honor human consent and agency. Because let's be honest: not everyone should be handed power tools. And no one should mistake a mirror for a master.