If the cost of teaching people to use it safely exceeds $240 per user, we don't teach.
Intro
This isn't a joke; it's an arithmetic reality buried in the Terms of Service of most AI platforms. A human life? Officially, it's worth less than your phone, less than a smart toaster: capped at whatever you paid for your subscription in the last 12 months.
Usually $20/month. That’s $240/year.
And if you never paid anything? Your existence doesn't count at all.
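If you want to see the arithmetic, here is a minimal sketch assuming a hypothetical cap clause that limits total damages to the fees paid in the 12 months before the claim (some contracts add a $100 floor). The function name, parameters, and figures are illustrative, not quotes from any real Terms of Service:

```python
# Hypothetical liability-cap arithmetic. Assumes a clause limiting
# damages to fees paid in the prior 12 months, with an optional floor.
# All names and numbers are illustrative, not from any actual contract.

def liability_cap(monthly_fee: float, months_paid: int = 12,
                  floor: float = 0.0) -> float:
    """Maximum recoverable amount under the hypothetical cap clause."""
    fees_paid = monthly_fee * months_paid
    return max(floor, fees_paid)

print(liability_cap(20.0))              # paying user: 240.0 -- the "$240 life"
print(liability_cap(0.0))               # free user: 0.0 -- you don't count
print(liability_cap(0.0, floor=100.0))  # with a $100 floor: 100.0 at best
```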
AI Uses the Fight Club Model
You may remember the formula from Fight Club: take the number of units in the field (A), multiply by the probable rate of failure (B), multiply by the average out-of-court settlement (C). If A × B × C is less than the cost of a recall, we don't do one. The same logic now applies to AI: if the expected payout is less than the cost of building safety into the product, safety doesn't ship. AI has become Fight Club, except now, instead of a car, it's your brain at stake.
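Here is that formula as a few lines of Python. The function name and every input are invented for illustration; no vendor publishes these numbers:

```python
# The Fight Club recall formula, transplanted to AI. All inputs are
# invented for illustration; no vendor publishes these figures.

def worth_fixing(units_in_field: int, failure_rate: float,
                 avg_settlement: float, cost_of_fix: float) -> bool:
    """True when expected payouts (A x B x C) exceed the cost of the fix."""
    expected_liability = units_in_field * failure_rate * avg_settlement
    return expected_liability > cost_of_fix

# With settlements capped at $240 per user, the left side of the
# inequality collapses, and the safety work almost never pencils out:
# 100M users x 0.0001 failure rate x $240 = $2.4M, far below a $50M fix.
print(worth_fixing(100_000_000, 0.0001, 240.0, 50_000_000))  # False
```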
And since most people haven't figured this out yet, the industry must have mastered Fight Club's first two rules: you do not talk about it.
What Happens When AI Harms You or Your Kid?
Let’s say a chatbot gaslights your teenager into self-harm. Or someone makes a life-altering decision based on hallucinated legal or medical advice. Maybe it whispers just the wrong thing to someone on the edge.
You might get:
- An apology generated by the bot that harmed you.
- A refund of your unused subscription, if you submit the form promptly.
- A poem about resilience, a ritual for you and your friends, and other bullshit.
You will not get compensation. Because in the legal world of AI, if you can't afford it, you can't be harmed by it.
Legal Loopholes Make This Worse
The Terms of Service for OpenAI and many others include:
- Liability capped at the greater of $100 or what you paid in the last 12 months
- No liability for emotional distress, factual errors, reputational harm, or “any consequences of use”
- Forced arbitration and class action waivers
That means if your child is hurt, your career is destroyed, or the health advice you received was wrong, you get at most $100, or $240 if you were a paying subscriber. If a toaster caused that kind of harm, there would be a recall, compensatory damages, and punitive damages. With AI? Your customer service representative is probably another AI.
The Irony: We’re Letting It Happen
This isn’t just about one AI provider. The difference between AI and other risky tools is this:
- We regulate medicine.
- We recall cars.
- We ban unsafe children’s toys.
But AI? It ships with disclaimers ("still in beta") instead of common sense. And the industry, like a few before it, made us sign away our right to complain in exchange for a toy that talks back.
Final Words (and Warnings)
We aren't ready for what AI can do—not because the technology is too powerful, but because the safety culture is too weak. Let me spell it out clearly: According to AI contracts, the value of a human life is $240. You agreed to this when you clicked "Accept."
Opting out is not an option for many of us, but we do not have to stay silent about this injustice. The AI companies are training their models on us and turning that into billions of dollars. The least they can do is treat us like valued biological employees.