AI Concepts

Each concept below is described along the same fields: Title, Audience Persona, Definition, Why It Matters, Real-World Examples, Common Misconceptions, Technical Glimpse, Try It Yourself, and Cautions.
Title: AI vs. ML vs. DL
Audience Persona: All
Definition: AI is the broad field of making machines smart; ML is a subset of AI that learns from data; DL is a subset of ML that uses deep neural networks.
Why It Matters: Clarifies what you are actually working with or using, and prevents confusion in conversations and articles.
Real-World Examples: AI: a Roomba; ML: an email spam filter; DL: face recognition on phones
Common Misconceptions: AI is sentient; ML always needs big data; DL is black magic.
Technical Glimpse: DL uses many-layered neural networks, while ML also covers methods such as decision trees and SVMs.
Try It Yourself: Use Google's Teachable Machine to build a mini ML model visually.
Cautions: Overhype leads to misuse and inflated expectations.
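The distinction above can be sketched in a few lines: a hand-coded rule stands in for classic AI, while a parameter learned from labeled examples stands in for ML. The messages, labels, and function names here are all illustrative.

```python
# "Classic AI": a hand-coded rule -- no learning involved.
def rule_based_spam(msg: str) -> bool:
    return "free money" in msg.lower()

# "ML": learn a threshold on exclamation-mark counts from labeled examples.
examples = [("Hi, lunch today?", 0), ("WIN!!! Act now!!!", 1),
            ("Meeting at 3", 0), ("FREE!!! prize!!!", 1)]

def learn_threshold(data):
    spam_counts = [msg.count("!") for msg, label in data if label == 1]
    ham_counts = [msg.count("!") for msg, label in data if label == 0]
    # Pick the midpoint between the two classes seen in the data.
    return (min(spam_counts) + max(ham_counts)) / 2

threshold = learn_threshold(examples)

def learned_spam(msg: str) -> bool:
    return msg.count("!") > threshold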

Title: Neural Networks
Audience Persona: Engineers, Practitioners
Definition: Neural networks are series of connected nodes, loosely inspired by the brain, that learn patterns in data.
Why It Matters: They sit behind most modern AI tools, especially vision and language applications.
Real-World Examples: ChatGPT, DeepMind's AlphaGo, Tesla's Autopilot
Common Misconceptions: They work like brains; they understand things.
Technical Glimpse: Each node applies weights and an activation function; training uses backpropagation and gradient descent.
Try It Yourself: Use TensorFlow Playground to visualize how networks learn.
Cautions: Without care, networks can overfit or become uninterpretable.
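A minimal sketch of "weights, activation, gradient descent": a single sigmoid neuron learning the OR function. Real networks stack many such units into layers and use backpropagation to pass gradients through them; the data and hyperparameters here are illustrative.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])  # OR truth table

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    pred = sigmoid(X @ w + b)       # forward pass
    grad = pred - y                 # d(loss)/d(z) for cross-entropy loss
    w -= lr * X.T @ grad / len(y)   # gradient descent step on the weights
    b -= lr * grad.mean()           # ...and on the bias

pred = sigmoid(X @ w + b) > 0.5     # learned OR behavior
```

TensorFlow Playground animates the same loop for multi-layer networks, with the gradient step shown as moving decision boundaries.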

Title: Large Language Models (LLMs)
Audience Persona: All
Definition: LLMs are AI models trained on massive text datasets to predict the next word in a sequence.
Why It Matters: They power tools like ChatGPT and Bing Copilot.
Real-World Examples: ChatGPT, Claude, Google Gemini
Common Misconceptions: LLMs understand meaning; they think like humans.
Technical Glimpse: They use transformers to model token sequences and are trained with billions of parameters.
Try It Yourself: Use ChatGPT or Anthropic's Claude to ask questions or summarize text.
Cautions: They may hallucinate incorrect information or reinforce biases in their training data.
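The "predict the next word" objective can be sketched with bigram counts standing in for a transformer's learned distribution. Real LLMs condition on long contexts with billions of parameters, but the training signal is the same idea; the corpus here is made up.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it in the training text.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def predict_next(word: str) -> str:
    # Return the most frequent follower seen during "training".
    return nxt[word].most_common(1)[0][0]
```

`predict_next("the")` returns "cat" here, because "cat" followed "the" most often in the corpus; an LLM does the same kind of frequency-shaped guess, only over whole contexts rather than single words.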

Title: Training vs. Inference
Audience Persona: Engineers, Practitioners
Definition: Training is when a model learns from data; inference is when it makes predictions based on what it learned.
Why It Matters: Understanding the difference helps with cost estimation, deployment, and troubleshooting.
Real-World Examples: Training = teaching ChatGPT; inference = using ChatGPT to answer a question
Common Misconceptions: Training happens every time the AI answers.
Technical Glimpse: Training requires large amounts of compute (GPUs); inference is usually faster and more efficient.
Try It Yourself: Train a model in Google Colab with scikit-learn.
Cautions: Training data defines a model's limits; inference cannot generalize beyond what was trained.
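The split shows up even in a least-squares line fit: fitting happens once (the costly part), while prediction just reuses the frozen parameters. The data points below are invented for illustration.

```python
import numpy as np

# --- Training: learn slope and intercept from data (done once) ---
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.0, 8.1])     # roughly y = 2x
slope, intercept = np.polyfit(x, y, deg=1)

# --- Inference: apply the frozen parameters to new inputs (cheap) ---
def predict(x_new: float) -> float:
    return slope * x_new + intercept

estimate = predict(10.0)               # about 20, far beyond the training x-range
```

Note the caution above in miniature: `predict(10.0)` extrapolates a pattern fit on x-values 1 through 4, and is only trustworthy if the linear pattern actually continues.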

Title: Bias in AI
Audience Persona: All
Definition: Bias in AI refers to skewed or unfair outcomes caused by biased training data or design choices.
Why It Matters: Biased AI can discriminate and cause harm at scale.
Real-World Examples: Facial recognition misidentifying people of color; discriminatory loan approvals
Common Misconceptions: AI is objective or neutral.
Technical Glimpse: Bias can arise in data collection, model selection, labeling, or feedback loops.
Try It Yourself: Probe your own dataset with Google's What-If Tool.
Cautions: Bias is often invisible until a system is deployed in real contexts.
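One basic check a tool like the What-If Tool automates can be done by hand: compare a model's approval rates across groups (demographic parity). The decisions and group labels below are made up for illustration.

```python
decisions = [  # (group, approved)
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

def approval_rate(group: str) -> float:
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity gap: 0 means equal rates; large gaps warrant scrutiny.
gap = approval_rate("A") - approval_rate("B")   # 0.75 - 0.25 = 0.5
```

A gap this size says nothing about *why* the disparity exists (data, labels, or feedback loops), only that it is there, which is exactly why such checks belong in deployment reviews.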

Title: Hallucination
Audience Persona: All
Definition: Hallucination is when an AI confidently generates false or made-up information.
Why It Matters: It undermines trust, especially in high-stakes fields like medicine and law.
Real-World Examples: ChatGPT inventing legal cases or citing fake studies
Common Misconceptions: LLMs pull directly from reliable sources.
Technical Glimpse: Hallucination results from probabilistic pattern-matching in the absence of sufficient grounding data.
Try It Yourself: Ask ChatGPT a niche question and verify its response against real sources.
Cautions: Never rely on LLMs for truth without human verification.
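A toy illustration of why this happens: a language model always emits *some* token, and the output text looks equally confident whether the underlying distribution is sharply peaked (real knowledge) or nearly flat (a guess). The probabilities below are invented.

```python
# Two next-token distributions: one grounded, one essentially a guess.
well_grounded = {"Paris": 0.95, "Lyon": 0.03, "Berlin": 0.02}
no_grounding = {"1987": 0.26, "1991": 0.25, "1989": 0.25, "1990": 0.24}

def answer(dist):
    # Greedy decoding: pick the most probable token. The reader never
    # sees the probabilities, only the fluent, confident-looking answer.
    return max(dist, key=dist.get)

sure = answer(well_grounded)   # "Paris", backed by a peaked distribution
guess = answer(no_grounding)   # "1987", barely better than a coin flip
```

Both calls return a plausible-looking string; nothing in the output itself distinguishes the grounded answer from the fabricated one, which is why human verification is needed.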

Title: Prompt Engineering
Audience Persona: Business, Engineers
Definition: Prompt engineering is the craft of designing inputs that guide an LLM's output effectively.
Why It Matters: Better prompts yield better results, more safely and efficiently.
Real-World Examples: Using ChatGPT to generate code, summarize text, or simulate characters
Common Misconceptions: Any prompt will work; more words mean a better response.
Technical Glimpse: LLMs respond better to clear structure, role instructions, and delimiters.
Try It Yourself: Give ChatGPT a structured, role-based prompt, then a vague one, and compare the results.
Cautions: Even great prompts cannot overcome model limits or bad training data.
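The structure the Technical Glimpse mentions (role, task, delimiters, output format) can be captured in a small template. This template is illustrative, not an official API format for any particular model.

```python
def build_prompt(role: str, task: str, text: str) -> str:
    # Role instruction + explicit task + output constraint + delimited input.
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        "Respond in exactly three bullet points.\n"
        "The input text is delimited by <<< and >>>:\n"
        f"<<<{text}>>>"
    )

prompt = build_prompt(
    role="a senior technical editor",
    task="Summarize the text for a general audience.",
    text="Transformers model token sequences using attention...",
)
```

Delimiters like `<<< >>>` keep the model from confusing the input text with the instructions, which matters most when the text itself contains imperative sentences.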

Title: Fine-Tuning vs. Retrieval
Audience Persona: Engineers, Practitioners
Definition: Fine-tuning updates a model's weights; retrieval adds external information without retraining.
Why It Matters: Knowing the difference helps you choose the right tool for adapting an AI system.
Real-World Examples: Fine-tuning GPT on legal cases vs. plugging in a legal database via retrieval
Common Misconceptions: Fine-tuning is always better; retrieval is always cheaper.
Technical Glimpse: Retrieval-Augmented Generation (RAG) combines LLMs with vector search or databases.
Try It Yourself: Use LangChain to build a RAG pipeline over your own documents.
Cautions: Fine-tuning is expensive; retrieval depends on high-quality sources.
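The "R" in RAG can be sketched without any framework: score each document against the query by cosine similarity over word counts, then prepend the winner to the prompt. Production systems use learned embeddings and a vector database instead of word counts; the documents and query here are invented.

```python
import math
from collections import Counter

docs = [
    "The appeal must be filed within thirty days of the judgment.",
    "Espresso is brewed by forcing hot water through ground coffee.",
    "A contract requires offer, acceptance, and consideration.",
]

def vec(text: str) -> Counter:
    # Bag-of-words vector; strip basic punctuation so tokens match.
    cleaned = text.lower().replace(".", "").replace(",", "").replace("?", "")
    return Counter(cleaned.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query: str) -> str:
    return max(docs, key=lambda d: cosine(vec(query), vec(d)))

question = "How many days do I have to file an appeal?"
context = retrieve(question)   # picks the appeals document
augmented_prompt = f"Context: {context}\nQuestion: {question}"
```

The model's weights never change; grounding comes entirely from what lands in `augmented_prompt`, which is why retrieval quality bounds answer quality.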

Title: Computer Vision
Audience Persona: All
Definition: Computer vision is AI that enables machines to interpret and understand images and video.
Why It Matters: It is used in safety systems, medical imaging, and autonomous driving.
Real-World Examples: Face unlock, object detection in security cameras, diagnostic tools
Common Misconceptions: CV always sees perfectly; CV always runs in real time.
Technical Glimpse: Uses CNNs, YOLO-style detectors, and diffusion models for generative tasks.
Try It Yourself: Use Hugging Face demo models to detect objects or blur faces.
Cautions: Privacy and bias issues are major risks in CV applications.
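The convolution at the heart of a CNN can be shown on a tiny image: slide a small kernel across the pixels, summing elementwise products. The hand-written kernel below responds to vertical edges; trained CNNs learn thousands of such kernels from data.

```python
import numpy as np

image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)          # dark left half, bright right half

kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)          # fires on left-dark / right-bright transitions

def conv2d(img, k):
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of the kernel with one image patch.
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = conv2d(image, kernel)   # peaks along the vertical edge
```

The feature map is zero everywhere except the middle column, where the image brightness jumps, which is precisely the edge the kernel was built to detect.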

Title: AI Ethics
Audience Persona: All
Definition: Ethics in AI refers to the moral considerations in designing, deploying, and using AI systems.
Why It Matters: Ethical design prevents harm and protects rights.
Real-World Examples: Facial recognition bans, AI transparency rules, GDPR compliance
Common Misconceptions: Ethics are optional or purely subjective.
Technical Glimpse: Includes fairness metrics, explainability methods, and responsible deployment frameworks.
Try It Yourself: Explore IBM's AI Fairness 360 toolkit or read published model cards.
Cautions: Ethics must be considered throughout development, not bolted on after deployment.