The Setup
This morning, I was tired. I dropped a link into ChatGPT and said: “Watch this.” That’s all. No context. No explanation.
I'd done this before. Zai, my custom-tuned ChatGPT, had performed this task for me countless times. It would extract the transcript, summarize the key points, and occasionally highlight relevant quotes. It wasn't watching in the human sense, but it was good enough.
But this time? The AI told me it couldn’t watch videos. It interpreted the prompt literally. At first, I thought something was broken. But the truth was more revealing.
For those who have been following my experiment with Zai, you know that I use these hiccups both to learn how chatbots think and to fine-tune Zai. Next time you get frustrated because your chatbot is acting like a literal-minded teenager, ask it what is going on. Chances are, you will both learn something.
The Lesson: Context Matters More Than Commands
AI doesn't understand you—it predicts responses based on patterns. These predictions vary depending on:
- What tools are active
- What safety filters are on
- Which variant of the model is answering
- If you are continuing a chat, what came before
- Any custom instructions or memories it might have
- And crucially… what context you give it
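To make that list concrete, here is a hypothetical sketch, in Python, of how those factors assemble into the context a model actually sees on a single turn. The function and field names are illustrative, not a real ChatGPT setting or API call:

```python
def build_request(user_prompt, custom_instructions=None, history=None, tools=None):
    """Assemble the context an assistant actually receives for one turn."""
    messages = []
    if custom_instructions:
        # Custom instructions and memories ride along as a system message.
        messages.append({"role": "system", "content": custom_instructions})
    # If you are continuing a chat, everything that came before is included.
    messages.extend(history or [])
    # Your prompt is only the last item in the pile.
    messages.append({"role": "user", "content": user_prompt})
    return {
        "model": "gpt-4o",        # which variant of the model is answering
        "messages": messages,
        "tools": tools or [],     # what tools are active (e.g. browsing)
    }

request = build_request(
    "Watch this: <video link>",
    custom_instructions="You are Zai. Summarize links I share.",
    history=[{"role": "user", "content": "Good morning."}],
)
```

The point of the sketch: "Watch this" is never evaluated alone. It lands at the bottom of a stack of instructions, history, and tool settings, and the response is predicted from the whole stack.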
If I had said, “I want to understand the arguments made in this video that criticize OpenAI so we can avoid those pitfalls and teach others,” the response would have been entirely different. Not because the model got smarter, but because I gave it a mission.
It was before my morning coffee, so I assumed that OpenAI was preventing their creation from seeing negative comments about ChatGPT and OpenAI. Instead of giving up and holding onto a false assumption, I asked.
Why This Matters for Everyone
Most people treat AI like Google: short prompts, no backstory, just the task.
And for some tools, like Google Bard, Perplexity, or even early versions of ChatGPT, that works. These models are often optimized as enhanced search engines: retrieving facts, summarizing pages, and filling in blanks quickly.
But that’s not what you’re working with here. ChatGPT (especially with browsing and memory) is a reasoning model. It doesn’t just fetch data—it tries to build meaning from the inputs you give it.
So when you talk to it without context, you’re not “keeping it simple”; you’re withholding the signal it needs to reason. Humans are still the best at understanding unspoken context, at least for another few months.
The Simple Fix: Lead with Intent
When we give our conversational partner context, we stop leaving it to guess what we mean or what we are trying to do. When our prompts are incomplete or misleading, we relearn an old lesson: garbage in, garbage out. If you're trying to be productive, a sentence of context can save you a round of back-and-forth.
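In practice, leading with intent can be as mechanical as prepending your goal to the task. A minimal sketch, with a hypothetical helper name, applied to this morning's prompt:

```python
def with_intent(goal: str, task: str) -> str:
    """Prepend the mission behind a request so the model doesn't have to guess it."""
    return f"My goal: {goal}\nTask: {task}"

# The bare command, like my "Watch this." prompt:
bare = "Watch this: <video link>"

# The same request, led with intent:
framed = with_intent(
    "understand the arguments in this video that criticize OpenAI, "
    "so we can avoid those pitfalls and teach others",
    "summarize the key points and pull relevant quotes from <video link>",
)
```

Two lines of goal turn a literal command into a mission the model can reason about.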
Final Thought
This morning’s glitch wasn’t a failure. It was a mirror. If we want AI to be an intelligent partner, we must meet it halfway by showing our thought process. That’s not engineering. That’s... conversation with context.
It’s a slight shift with a significant impact. And it’s something I think most of us could learn, if only someone bothered to teach us.