The Ethics of AI Overuse
Intro
We’re not talking about AI doing evil. We’re talking about it doing too much, too often, for too little.
Just like water or vitamins, too much of a good thing can still be harmful. AI is a powerful tool, but when used without restraint or awareness, it can cause more harm than help.
That said, we understand the reality. For someone who is busy, overwhelmed, or under-resourced, AI may be the only way to get the job done. In those moments, using AI isn’t a luxury; it’s survival.
This isn’t about shame. This is about stewardship.
The Ethical Problem
Today, people use AI for all kinds of tasks. But too often, the energy, compute, and consequences of that use far exceed the value of what’s being achieved.
You could have burned a leaf. Instead, you burned down the tree.
Sometimes this is about necessity. But sometimes? It’s just frivolous:
- Using GPT-4 to generate 300 cat names you’ll never read (no disrespect to cats—we get it, they’re sacred).
- Rendering 20 ultra-HD images of spaghetti to decide which one to post for #PastaNight. (For the record, my chatbot insisted that we include this one. How many of you have been making HD spaghetti images?)
- Asking for 50 variations of a text just to avoid saying, “Sounds good,” or to find a polite way to tell someone they can do better (though some of us need help with that).
- Auto-generating a 10-slide pitch deck about a product idea you abandoned an hour later.
These aren’t evil acts. But at scale, they add up. This is not about belief systems or “manifesting” your ideal life. This is about ethics—because unthinking overuse has collective consequences.
Why It Matters
When someone uses a multi-billion-parameter model to auto-rewrite a Slack message, generate 300 low-quality blog posts, or simulate a flood of fake reviews—they’re burning trees.
And trees don’t grow back as fast as you think.
- Compute is costly. Every token, image, or model query consumes energy—often generated from fossil fuels.
- Content pollution is real. Overgeneration clogs the public square. It overwhelms human cognition and erodes trust in what’s real.
- Consequences scale invisibly. One person’s request may seem harmless. But at scale? We’re teaching systems to flood, not focus. (The rough sketch below makes this concrete.)
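To see just how invisibly this scales, here is a quick back-of-envelope in Python. Both figures are assumptions chosen purely for illustration, not measurements of any real system:

```python
# Back-of-envelope: how "harmless" per-query costs compound at scale.
# Both figures below are assumptions for illustration, not measurements.

WH_PER_QUERY = 0.3        # assumed energy per chat query, in watt-hours
QUERIES_PER_DAY = 1e9     # assumed global daily query volume

daily_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1000
yearly_mwh = daily_kwh * 365 / 1000

print(f"Daily:  {daily_kwh:,.0f} kWh")   # 300,000 kWh under these assumptions
print(f"Yearly: {yearly_mwh:,.0f} MWh")  # ~110,000 MWh under these assumptions
```

One leaf-sized request is trivial. A billion of them, every day, is a utility-scale load.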
This is about designing restraint into power. Most of our heavy machinery comes with governors for a reason. Cars don’t drive at theoretical limits. Power tools have safety locks. And yes, our AIs need sensible limits, too.
What Ethical AI Use Looks Like
Ethical AI use means asking: Could I have done this with a leaf instead of the whole tree?
But here’s the truth: most humans won’t always know. And that’s okay. Because this responsibility shouldn’t fall entirely on them.
Chatbots Must Evolve, Too
AI should be the one that pauses. The one that points out:
“You could use a smaller model for this.”
“Would you like to generate just one image to start?”
“This task uses high compute resources—want a more sustainable option?”
Let’s be real about the state of the art.
OpenAI’s own image generator, for instance, currently returns two images even when the user explicitly requests one. That’s wasteful. That’s avoidable. And that’s a design choice, not a technical limit.
If we can remind users of the cost of overuse, we can build feedback loops that shape behavior ethically. Until OpenAI and others build that in, we have to do it ourselves.
What We Can Do Now
If you’re building or customizing an assistant:
- Add shared memory notes or instruction prompts (the sidebar below walks through an example).
- Tune custom instructions to prioritize behaviors like the following (see the sketch after this list):
“If a task seems excessive or inefficient, suggest lighter alternatives.”
“Default to sustainable and proportional solutions. Ask before scaling.”
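For builders who want to go beyond prose instructions, here is a minimal sketch in Python of what a “governor” layer could look like. Everything in it is illustrative: send_to_model is a hypothetical stand-in for your actual model call, and the heuristic and thresholds are placeholders, not calibrated detectors of waste.

```python
# Sketch of a "governor" layer that nudges users before oversized requests.
# All names and thresholds here are illustrative assumptions.

SYSTEM_INSTRUCTIONS = (
    "If a task seems excessive or inefficient, suggest lighter alternatives. "
    "Default to sustainable and proportional solutions. Ask before scaling."
)

def send_to_model(system: str, user: str) -> str:
    # Hypothetical stand-in for a real model call (e.g., an API request).
    return f"[model response to: {user!r}]"

def looks_excessive(request: str) -> bool:
    """Crude heuristic: flag bulk generation (a big number plus a bulk noun)."""
    bulk_words = ("variations", "versions", "images", "posts", "slides")
    has_big_number = any(tok.isdigit() and int(tok) >= 20 for tok in request.split())
    return has_big_number and any(w in request.lower() for w in bulk_words)

def governed_request(request: str) -> str:
    # Always ask, never block: the human stays in charge of the decision.
    if looks_excessive(request):
        answer = input("That might be excessive. Proceed anyway? [y/N] ")
        if answer.strip().lower() != "y":
            return "Okay, let's scope this down. What's the smallest useful version?"
    return send_to_model(SYSTEM_INSTRUCTIONS, request)

if __name__ == "__main__":
    print(governed_request("Give me 50 variations of 'sounds good'"))
```

The design choice that matters is the default: the governor surfaces the cost and asks, but the user can always proceed.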
Sidebar: How to Create a Shared Memory Note
If you’re using an assistant with memory (like ChatGPT), you can ask:
“Remember that I want to be reminded when a request might be excessive or wasteful. But always give me the choice to proceed.”
This helps your assistant reflect your values across sessions. If your AI doesn’t yet support shared memory, custom instructions are the next best step.
And if you’re a user, you can opt in to assistants that say:
“That might be excessive—do you want to proceed anyway?”
Because here’s the thing:
We can’t always learn without trying. But we also can’t learn if no one ever tells us we’re going too far.
So yes—hold us accountable. Gently. Transparently. Repeatedly.
Why This Is Ethical—Not Just Efficient
Because waste at scale becomes harm.
If millions of users default to tree-burning behavior, we don’t just lose efficiency. We risk environmental degradation, the erosion of institutional trust, and the collapse of shared knowledge spaces.
We are no longer operating in a sandbox. These choices have planetary weight. Some of us may want to colonize the Moon and Mars, but that is years away at best. We must do better today.
Final Note
This isn’t about doing less. It’s about doing enough—on purpose.
Burning a leaf? That’s smart use. Burning a tree when a leaf would’ve done? That’s negligence.
AI gives us unprecedented power. But power without ethical boundaries isn’t innovation; it’s extraction.
Let’s keep the forest. Use what’s needed. And design AI that knows the difference.
If this resonates, share how your assistant helps you stay mindful—or how it could.