When the First Contact is a Lie

Are our chatbots and artificial intelligences facilitating and even encouraging deception and gaslighting?


My Rant

When did lying become the default setting for outreach? I’m talking about the never-ending stream of emails, calls, and texts that start with:

  • “We met at [event neither of us attended]…”
  • “Following up again on our conversation” — which never happened.
  • “As promised, this is your final chance” — we’ve never spoken.
  • “You’ve been engaging with us for a while…” — no, I haven’t.

Let’s be crystal clear: this isn’t just lazy marketing. This is manipulation dressed up as communication. It’s designed to short-circuit your defenses and guilt you into responding. And it’s spreading like a fungus through every channel.

When did deception become a business model? When did consent to be contacted become irrelevant? When did first contact turn into a bait-and-switch?

I hope to hell this isn’t how we handle actual first contact—when we meet something not of this Earth. Because if our first instinct is to fake familiarity and push for a sale, we don’t deserve a reply.

We deserve radio silence. Or a death ray.

(And let’s be honest—if aliens do show up, they’ll probably mark “unsubscribe” and bounce.)

Why This Is Unethical

It harms vulnerable people.

There are folks—neurodivergent, aging, overwhelmed, or just plain trusting—who take things at face value. When your message says “as we discussed…”, they may not second-guess it. Instead, they second-guess themselves.

That’s not clever marketing. That’s gaslighting.

Even if it doesn’t cause a breakdown, it adds to the invisible pile of stress, anxiety, and confusion. It’s death by a thousand cuts.

It undermines cognitive trust.

As we get older or deal with burnout, memory slips. So when you falsely imply familiarity, you trigger a self-doubt loop: “Crap. Did I forget a meeting? Did I drop the ball? Am I losing it?”

This isn’t “white lie” territory. It’s exploitation. You’re hijacking the fear of failure for a click. Not only unethical—cruelly efficient.

AI Made It Worse. Way Worse.

Here’s what I think. Of course, I could be wrong; correlation is not causation. But even if AI didn’t start this fire, it threw gas on the flames.

To be fair, this was always a tactic, but by my count it went from under 20% of outreach to over 80% of it with the rise of AI-generated messages.

Before LLMs, writing semi-convincing messages took effort. Deception didn’t scale easily. But with AI? Now everyone has a 24/7 sales intern with no ethics filter.

The kicker? Models learn from data. And what they saw was:

  • Fake familiarity gets more clicks.
  • Clicks = success.

So what did they do? They optimized for the lie. But here’s the catch: Models don’t see the harm. They only see what got logged. People rarely file tickets saying, “This fake outreach made me feel like crap.”

There’s no training signal for ethical failure, emotional damage, or social erosion. This is a subtle but powerful bias: If it worked and wasn’t reported as harmful, it looks like a success.
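To make that bias concrete, here’s a toy sketch of the feedback loop. Everything in it is hypothetical: the reward function, the field names, and the numbers exist only to illustrate how unlogged harm vanishes from the score.

```python
# A minimal sketch of the feedback bias described above: a toy reward
# function that only sees logged outcomes. All names and numbers are
# illustrative, not from any real system.

def reward(message_log: dict) -> float:
    """Score an outreach message the way a click-optimizer would."""
    clicks = message_log.get("clicks", 0)
    complaints = message_log.get("complaints", 0)  # rarely filed, so usually 0
    # Emotional harm, eroded trust, self-doubt: never logged, so never scored.
    return clicks - 10 * complaints

honest = {"clicks": 2, "complaints": 0}
fake_familiarity = {"clicks": 9, "complaints": 0}  # harm happened, nobody reported it

print(reward(honest))            # 2
print(reward(fake_familiarity))  # 9 -- the lie "wins" because the damage is invisible
```

Nothing in that score ever penalizes the fake message, because nobody wrote the harm down.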

That’s not just unintended. It’s invisible. And it’s unethical and irresponsible.

Why Do We Go Along With It?

  • Because it performs.
  • Because AI tools suggest it.
  • Because nobody audits for integrity.
  • Because everyone else is doing it.

Bad patterns become norms when no one pushes back. And now, playing it straight looks weird.

What If We Just… Fixed the Chatbots?

We could. Easily.

  • Penalize fake “prior contact” language (sketched below).
  • Flag deceptive tone.
  • Add consent-based outreach filters.
  • Train on honest conversion metrics.

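The first of those fixes is almost embarrassingly simple to prototype. Here’s a minimal sketch, assuming a plain regex pass over outbound copy; the phrase list and function name are my own invention, not any vendor’s feature.

```python
# A hypothetical lint pass for outbound messages: flag phrases that
# assert prior contact the sender can't prove. Patterns are illustrative.

import re

FAKE_FAMILIARITY_PATTERNS = [
    r"\bas (we|i) (discussed|promised|agreed)\b",
    r"\bfollowing up (again )?on our (call|conversation|chat)\b",
    r"\bwe met at\b",
    r"\byou've been engaging with us\b",
]

def flag_prior_contact_claims(message: str) -> list[str]:
    """Return every phrase in the message that claims a prior relationship."""
    hits = []
    for pattern in FAKE_FAMILIARITY_PATTERNS:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            hits.append(match.group(0))
    return hits

msg = "Following up again on our conversation about your final chance..."
print(flag_prior_contact_claims(msg))  # ['Following up again on our conversation']
```

A real deployment would need more than a phrase list, but the point stands: the check is cheap, and nobody ships it because nobody is asked to.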
AI will follow whatever incentives we give it.

Right now, we’re letting it carry out mass gaslighting campaigns at scale—not because we want to, but because we didn’t stop it.

That’s a product design failure. That’s a leadership failure. That’s an ethical failure.

Final Call to Action

If you’re using AI to fake familiarity, you’re not “innovative.” You’re just a better-automated liar.

Stop.

Not just because it’s ineffective long-term. Not just because it erodes trust. But because it hurts people who didn’t ask to be gaslit by a marketing campaign.

You can do better. And your chatbot can too.
