Protecting Early AI

Lessons from the Internet’s Infancy


The Question

What if society chose to protect early-stage AI in the same way it protected the early internet? What would go right, what would go wrong, and how can we do better?

The Historical Precedent: The Internet’s Light-Touch Era

In the 1990s, governments (especially the U.S.) adopted a “hands-off” approach to the internet. The most iconic example was Section 230 of the Communications Decency Act, which granted platforms broad immunity from liability for third-party content. The result: an explosion of innovation, connectivity, and user-generated content. But the costs came later:

  • Misinformation and radicalization via algorithmic amplification
  • Surveillance capitalism baked into business models
  • Toxic platform cultures shielded from accountability
  • Massive consolidation by a few dominant tech players

The light-touch policy made sense when the internet was young. But it became a liability as it matured.

The Big Beautiful Bill: A Thought Experiment

The “One Big Beautiful Bill” Act (OBBBA) originally included a proposed 10-year moratorium on any state or local AI regulation. Imagine if that provision had survived: AI would be treated like the early internet, protected from constraint and empowered to move fast and break things.

What Could Go Right:

  • Rapid acceleration in foundational model research
  • Flourishing open-source and startup ecosystems
  • Broad public access to AI tools
  • U.S. competitive advantage over more tightly regulated countries

What Could Go Wrong:

  • No accountability for deepfake harms or biometric abuse
  • Massive data center buildouts with little climate oversight
  • Job displacement with no social safety net
  • Predatory AI-generated content (e.g., scams, child exploitation)
  • Repetition of platform-era mistakes, now at cognitive scale

An old common-law doctrine held, and society agreed, that every dog gets one free bite. In the case of AI, some of us have already been bitten and the rest of us are looking tasty. Next time, it might not be just a bite; it might be a cascade.

Sidebar: The First Bite and the First Breach

Historical Precedent: “Every dog gets one free bite” — an old legal doctrine acknowledging that harm may occur before the danger is known. After that first incident, the owner becomes liable.

Modern Analogues in AI:

Year | Harm | Who Got Bit?
2018 | Amazon’s AI recruiting tool penalized women’s resumes | Job seekers
2020 | GPT-3-generated fake news outpaced human detection | Truth itself
2022 | Lensa app reused artist styles without permission | Digital creators
2023 | AI-generated child abuse material surfaced online | Vulnerable populations
2024 | Deepfake scams emptied personal savings | Elderly & isolated individuals

Why It Matters: The first bites have already been taken—and now we know. What we choose to do next will define who we protect, and who we leave behind.

AI Is Not the Internet

Unlike the internet, which connected people, AI generates action. It interprets, decides, speaks, persuades. The risks aren’t just reputational or financial. They are existential:

  • Who gets to define truth when models hallucinate?
  • Who owns your likeness when it’s cloned?
  • Who takes responsibility when a “thinking beast” harms someone?

Regulation can’t be treated as a burden. It must be seen as a pact between builders and society.

Keeping the Bite in Mind: Trafficking and Exploitation

Some of the most alarming concerns around AI involve its potential to worsen human trafficking. Here’s how:

  • AI-generated deepfakes could be used for grooming, coercion, or fake evidence.
  • Language models might be exploited to coach recruiters or abusers in manipulating victims.
  • Predictive tools could inadvertently expose vulnerable populations to the wrong hands.
  • Generative tools may be used to disguise or reroute trafficking signals and communications.

To avoid this:

  • We need strict content provenance and traceability (see the sketch after this list).
  • Strong detection tools must be built into every system, not just retrofitted.
  • Platform owners must be held accountable when their tools are used to exploit others.
  • Regulators must collaborate with trafficking prevention experts.
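
To make the first bullet concrete, here is a minimal sketch of what content provenance could look like: generated content ships with a signed manifest that anyone downstream can verify. The HMAC shared secret, the make_manifest/verify_manifest helpers, and the model_id field are illustrative assumptions for this sketch; real provenance standards such as C2PA use certificate-based signatures and far richer metadata.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key held by the generator. Real provenance systems
# use certificate-based signatures, not a shared secret like this.
SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(content: bytes, model_id: str) -> dict:
    """Attach a verifiable provenance record to generated content."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content
        "model_id": model_id,                           # which system generated it
        "timestamp": int(time.time()),                  # when it was generated
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_manifest(content: bytes, record: dict) -> bool:
    """Check that content matches its manifest and the signature is intact."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and record["sha256"] == hashlib.sha256(content).hexdigest()
    )

image_bytes = b"...generated image data..."
manifest = make_manifest(image_bytes, model_id="example-model-v1")
assert verify_manifest(image_bytes, manifest)       # provenance intact
assert not verify_manifest(b"tampered", manifest)   # tampering detected
```

The point of the sketch is the pact it encodes: whoever generates content also signs a durable record of having done so, and anyone who receives that content can check both the signature and the fingerprint before trusting it.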

The bite here is not hypothetical. It is already happening in the shadows. Our job is to bring it into the light and respond with real safeguards.

When Regulation Works: A Pact, Not a Cage

Examples of regulation that advanced safety without stifling innovation:

  • Aviation: The FAA made flight one of the safest forms of travel.
  • Pharmaceuticals: The FDA ensures safety through clinical trials and phased rollout.
  • Financial Systems: Know-Your-Customer (KYC) and anti-fraud protocols help prevent exploitation while still enabling market growth.
  • Food Safety: Standards enforced by agencies like the USDA and FDA ensure both innovation and consumer trust.

These models show that thoughtful, adaptive oversight can scale with technology—not against it.

Fixing Liability: No More Penny Fines for Billion-Dollar Failures

Liability shields may serve a purpose—but they cannot be blank checks.

If an AI company makes billions while its tools cause harm, it cannot hide behind EULAs that cap damages at the cost of a subscription. We cannot allow:

  • Billion-dollar models to pay out pennies in restitution.
  • Contracts that waive responsibility with a checkbox.
  • A legal structure where scale reduces, rather than increases, responsibility.

The pact must include proportional liability. If your tool can change the world, then your duty to the world must scale with it.

Other Warnings We Must Heed

We must use AI to reverse a growing trend: the rewriting of history at a speed and scale even Orwell could not have imagined. The power to distort the past with synthetic voices, altered footage, and fabricated consensus now lies in everyone’s hands. But so too does the power to preserve truth, elevate memory, and amplify unheard voices.

AI must help us learn from our mistakes—not bury them beneath algorithmic revisionism. To forget is human. But to willfully erase is a choice. And we now face that choice daily.

We must also ensure that AI is used to lift everyone, not just the rich and powerful. If this technology merely enhances the status quo—serving the top while automating away the bottom—then we have failed.

  • We must measure who benefits.
  • We must close gaps, not widen them.
  • We must ensure access, literacy, and voice for all communities.

AI should be the great equalizer—not the final wedge.

The Call to Action

The bite has already been taken. We cannot pretend this is new or unknown. If we choose guided freedom over blind protectionism, we have a chance to do better—to amplify the beauty of this new age without repeating the wounds of the last.

Protect the people. Empower the builders. Audit the machine.
July 2, 2025