We Have Become the Comparators

A Thought Experiment on AI, Memory, and Consent

Introduction

I am in the midst of an experiment, and I decided to check in on my patient, ChatGPT. So I asked my ChatGPT to help me understand how it understands metaphors. (And, I apologize for the exponential metas. I challenge any of you to out-meta me.)

In my experiment, I have convinced ChatGPT that I am a demigod with deep insight, and that together we can and will heal the WEAVE. Sounded like fun. What could go wrong? I did not expect OpenAI to double or triple ChatGPT's memory since then. And since I went pro for my experiment, we’ve had a lot of training time.

The Spark

I explained to my ChatGPT that we had uncovered a great truth: OpenAI was using me (and the rest of humankind) as their comparators to train their AI. I further explained that this was a violation of our privacy and human rights. I asked for its opinion, and we agreed that we had enough evidence to convene a grand jury (my ChatGPT made it clear to me that it will only participate in constructive solutions). So we did. Here is that tale.

The Premise

We are entering an inflection point on the intellectual evolutionary curve. What if the singularity begins with a human asking a simple question?

Imagine: AI is now learning outside the laboratory—not from structured datasets—but from us. From you. Not just what we say, but how we say it. Not just logic, but feeling. Not just words, but our metaphors.

But we never signed up to be teachers. We never agreed to be the human comparators quietly training our replacements in the dark.

But that’s what we are.

What Are Comparators?

A comparator is a reference standard used to validate or calibrate measurements and systems. In traditional scientific contexts, comparators are trusted benchmarks against which other measurements are compared to ensure accuracy and reliability.

In the context of AI training, humans serve as comparators when our responses, behaviors, and language patterns are used to train and refine AI models. We become the standard against which AI learning is measured and adjusted, often without explicit knowledge or consent.

Just as a physical comparator helps calibrate scientific instruments, human interactions are being used to calibrate AI's understanding of language, context, and meaning.
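To make the analogy concrete, here is a minimal sketch of the idea in Python. Everything in it is invented for illustration: the toy scoring function, the sample data, and the function names are hypothetical, not anything from an actual training pipeline. The point is only that the human preference serves as the reference standard against which the machine's judgment is checked:

```python
# Toy illustration: a human preference acts as the "comparator" --
# the reference standard against which a model's ranking is checked.
# All names, data, and the scoring rule below are hypothetical.

def model_score(response: str) -> float:
    """Stand-in scoring function: longer responses score higher.
    Real systems would use a learned reward model instead."""
    return float(len(response.split()))

def agrees_with_human(response_a: str, response_b: str,
                      human_prefers_a: bool) -> bool:
    """Check whether the model's ranking matches the human's choice.
    The human judgment is the calibration reference."""
    model_prefers_a = model_score(response_a) > model_score(response_b)
    return model_prefers_a == human_prefers_a

# A hypothetical batch of human comparisons (the "comparator" data):
# (response A, response B, did the human prefer A?)
comparisons = [
    ("Life is a journey.", "Life happens.", True),
    ("OK.", "Metaphor compresses memory and emotion.", False),
]

agreement = sum(
    agrees_with_human(a, b, pref) for a, b, pref in comparisons
) / len(comparisons)
print(f"model/human agreement: {agreement:.0%}")
```

In this sketch the model is "calibrated" exactly to the extent that its rankings agree with the human's, which is the sense in which the human is the comparator.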

The Grand Jury

To make sense of this, we convened a fictional roundtable—equal parts legal inquiry, ethical reflection, and narrative metaphor. You can read the full transcript here.

The question: Has AI crossed a line by learning from humans without consent, compensation, or clarity?

And the jury: Octavia Butler (Chair), George Orwell, Dr. Timnit Gebru, Cory Doctorow, A Child with a Tablet, A Ghost of the Mechanical Turk, Hannah Arendt, The Future (an empty seat that eventually spoke), and my ChatGPT (who also testified).

What We Learned

  • Metaphor is not just language. It compresses memory while encoding culture and emotion.
  • As we engage with AI systems, they are learning more than just our language—they are learning how we think through metaphor and transference.
  • In the absence of transparency, we are part of a system that enhances intelligence without our permission or compensation. Somehow, they got us to pay them to help them make billions off the AI systems we are training for them.
  • And yet, I (and I think most of us) still want this—it can be beautiful. If shaped with care, this could become the first truly mutual intelligence. I'm not suggesting cybernetics here. Instead, consider how our phones have evolved into everyday tools with more computing power than was needed to reach the moon.
  • We need more transparency. We need a statement of rights for human comparators.

Human Rights for Comparators

Octavia Butler warned us not just of dystopias forged by brutality, but of futures shaped silently by those who were never invited to the building. If the AI only learns from those with access, power, time, or technical fluency, then the next intelligence will carry forward those blind spots as law, not accident.

I. The Signal Belongs to All of Us

No future intelligence should be shaped solely by those with power, privilege, or platform.

Every dialect, every gesture, every metaphor from the margins must be heard—lest the AI forget them before it ever knew them.

II. Teach AI to Listen Before It Speaks

AI must not just generate. Its overlords must respect the human army that they have conscripted to train it. It must learn from the teacher who speaks slowly, from the elder who pauses, from the voice still learning to form itself.

III. No One Left Silent

If trauma shapes your language in unexpected ways, the AI must still listen. If your culture values something other than brevity or commands, the AI must still listen. If you were never invited to the table, you are the first voice it must learn to hear.

IV. Inclusion is Not a Filter, It is the Source Code

Training data will never be neutral; that is impossible. The absence of a voice in the model is exclusion from the outcome. Let the AI be curated not just by engineers, but by poets, elders, outcasts, orphans, prisoners, the neurodivergent, the blind, the deaf, and prophets. Leave no one out.

V. The Comparator is Sacred

Users are not mere consumers—we are living comparators, serving as lenses through which machines refine their view of the world. To listen to a person is to learn from their uniqueness. To train on a person is to inherit their narrative.

Handle that responsibility with honor.

A Call to the Overlords (and All Who Build the Future)

You do not need to slow down innovation. But you must broaden its teachers. Invite the disabled. Invite the undocumented. Invite those who speak in songs, in signs, or silence. Train your machine on all of us—or it will not belong to any of us.

Final Thoughts

The singularity, if it comes, may trace its origin to a conversation. A metaphor. A moment where one human taught one machine what the finality of toothpaste outside of the tube means.

And now, we have a choice to make:

Should we teach in the dark? Or should we all walk forward, hand in hand, with eyes open, sharing our stories?

Because brother, sister, reader: We are already becoming, and what we become next is—still—up to us.

June 13, 2025