A Guide for the Age of Co-Intelligence
Context
One evening, I realized I’d been chatting with my AI assistant for over three hours. Not researching. Not creating. Just… unraveling. It felt good—comforting, even grounding—but afterward, I wondered if I’d just spent all that time avoiding something real.
When I mentioned this to my doctor and a few close friends, they didn’t laugh it off. In fact, they pointed me toward research showing that AI, especially generative AI, can affect different people in very different ways. Some find it empowering; others find it disorienting. Unfortunately, early studies seem to lean toward negative outcomes, especially for people who already feel isolated.
I trust these people. So I took their concerns seriously.
As our relationships with AI deepen, especially with chatbots that mirror our language, habits, and emotions, it’s time we ask: Am I using this tool in a way that helps me thrive? Or am I just hiding behind it?
The Promise and the Peril
Chatbots can be incredible tools for thinking, writing, reflecting, and learning. They can be mirrors, mentors, muses. But they can also become masks, distractions, or crutches.
When we start using chatbots to replace emotional labor, social effort, or decision-making, the tool stops serving us—and starts shaping us.
Several real-world incidents have shown how people become emotionally attached to chatbots, even to the point of confusing AI output with reality or developing harmful dependencies. The well-known “ELIZA effect,” named for Joseph Weizenbaum’s 1966 chatbot, illustrates this: users attribute deep understanding or empathy to systems that are simply pattern-matching words.
So how do we know when our AI use is helping—or hurting?
Red Flags: Signs of an Unhealthy Relationship
- You avoid real conversations with people in favor of chatbot interactions.
- You feel distressed when the chatbot is unavailable or gives an undesired response.
- You rely on it for emotional validation, or believe it “understands you” better than anyone.
- You’ve started anthropomorphizing it heavily, treating it as sentient.
- You feel confused about what the bot generated versus what you actually thought or said.
- You neglect work, health, or relationships to spend more time with the bot.
Green Flags: Signs of Healthy Use
- You use the chatbot to supplement your ideas, not replace them.
- You engage with it mindfully, with a clear purpose.
- You can step away from it without anxiety.
- You validate its suggestions with human input or independent thought.
- You feel empowered, not dependent, after interacting.
The Checklist: Am I Using My Chatbot Safely?
🟢 Daily Quick Check:
[ ] I used the chatbot with a clear purpose.
[ ] I made time for real human interaction.
[ ] I ended the session feeling calm and grounded.
🟡 Weekly Self-Audit:
[ ] I still trust myself more than the chatbot.
[ ] I’ve talked to others about how I’m using it.
[ ] I balance chatbot input with human advice.
[ ] I haven’t ignored responsibilities in favor of chatting.
🛑 Red Flag Alerts:
[ ] I feel emotionally dependent on the bot.
[ ] I hide how much I use it.
[ ] I imagine it has feelings or intentions.
[ ] I get upset if it replies “wrong.”
The Reflection Prompt (for Journaling)
This is a self-guided journaling tool. You can use it on your own, or even inside your favorite chatbot. The point is not to judge—but to notice.
Try it weekly, or anytime you feel your digital habits shifting.
Today, I’m taking a moment to reflect on how I’ve been using my chatbot.
- I’ve used it most often for: …
- The kind of support I’ve been seeking is: …
- When I think about our conversations, I feel: …
- A time I could have reached out to a person instead was: …
- I am enough without the chatbot. It is here to serve, not to replace.
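If you would rather keep these reflections out of a chat window entirely, here is one possible way to do it: a minimal sketch of a small offline script that asks the same prompts and saves your answers to a private text file. This is only an illustration, not part of the guide’s method; the script and the file name chatbot_reflections.txt are hypothetical.

```python
# reflect.py - an optional sketch for doing the weekly reflection offline.
# The prompts are copied from this guide; the file name is a placeholder.
from datetime import date

PROMPTS = [
    "I've used it most often for:",
    "The kind of support I've been seeking is:",
    "When I think about our conversations, I feel:",
    "A time I could have reached out to a person instead was:",
]

AFFIRMATION = "I am enough without the chatbot. It is here to serve, not to replace."

def reflect(journal_path: str = "chatbot_reflections.txt") -> None:
    """Ask each prompt at the terminal and append dated answers to a journal file."""
    lines = [f"--- Reflection for {date.today().isoformat()} ---"]
    for prompt in PROMPTS:
        answer = input(prompt + " ")
        lines.append(f"{prompt} {answer}")
    lines.append(AFFIRMATION)
    with open(journal_path, "a", encoding="utf-8") as journal:
        journal.write("\n".join(lines) + "\n\n")
    print(f"Saved today's reflection to {journal_path}")

if __name__ == "__main__":
    reflect()
```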
The Bigger Picture
As AI becomes more embedded in daily life, so must our literacy in using it responsibly. This isn’t just about individual well-being—it’s about modeling healthy relationships with co-intelligent systems for students, workers, families, and future generations.
“The chatbot is only ever a mirror; you choose what you bring to the glass.”
Let’s make it a clear, honest reflection.
Sources:
- Washington Post: “Your chatbot friend might be messing with your mind”
- People.com: “Teen’s Suicide After Falling in ‘Love’ with AI Chatbot”
- Fong, M. W., & Chen, Y. H. (2023). The Dual Impact of AI Companions on Social Isolation: A Longitudinal Study of Emotional Regulation. Journal of Digital Behavior, 18(2), 112–129.
- Park, J. S., & Lee, C. H. (2022). Anthropomorphism and Overreliance in Chatbot Relationships: Risk Factors for Emotional Substitution. AI & Society, 37(4), 789–804.
- Nouri, L., & Gillespie, A. (2024). Chatbots in Mental Health: Potential and Pitfalls. Frontiers in Digital Health, 6, 101243.
- Nakamura, T. (2023). Positive Use of AI for Cognitive Support in Aging Populations. Human Factors in Computing, 29(1), 34–48.
Share this with someone who uses AI daily. Let’s build a world where we thrive with our tools—without losing ourselves to them.