Preparing for When AI Goes Dark
It Doesn’t Matter Who Struck the Match
- A teenager trying to prove something to their friends.
- A disillusioned anarchist looking to make a point.
- A member of Anonymous reviving the old banner.
- A cybercrime syndicate seeking financial leverage.
- A state-sponsored group pushing geopolitical advantage.
It doesn’t matter who struck the match. When the network goes dark, the consequences ripple outward to everyone.
Billy Joel’s We Didn’t Start the Fire reminds us that the fire has always been burning. It’s a rapid-fire history lesson—and a wake-up call. While we may not have started every crisis, we live with the consequences. This song is a fitting backdrop for understanding that vigilance isn’t optional. It’s inherited.
Sunbursts and Sparks: We’ve Been Here Before
- 1859 Carrington Event: A massive solar storm disrupted telegraph systems across North America and Europe. Operators reported sparks flying from their equipment and telegraph paper catching fire; some lines reportedly continued transmitting on the storm’s induced current alone.
- New York City Blackout, 1977: A lightning strike triggered cascading power failures. Looting and arson followed. No internet, no ATMs—just panic.
- Morris Worm, 1988: One of the first attacks to disrupt the internet at scale. Written by a graduate student as an experiment, it spread far faster than intended, slowing thousands of machines to a crawl and showing how fragile early networked systems were.
- Y2K, 1999: A near-miss. Global fear that two-digit date fields would fail when the calendar rolled over to 2000. Billions were spent on remediation, and while catastrophe was avoided, it was a wake-up call.
- NotPetya, 2017: Disguised as ransomware, it wiped hard drives and crippled logistics, banking, and medical systems in Ukraine and beyond.
- CrowdStrike Outage, 2024: A defective update to widely deployed security software crashed millions of Windows systems. Airports, hospitals, and even coffee shops shut their doors. A reminder that even routine updates from “safe” vendors can have dangerous consequences.
AI-Specific Risks (And What to Do About Them)
- Overdependence on Automation. Risk: Staff can’t operate without AI assistance. Mitigation: Train fallback manual processes; run periodic no-AI drills.
- Centralized Points of Failure. Risk: A single service outage brings down thousands of businesses. Mitigation: Diversify vendors and invest in offline-capable tools (a minimal fallback sketch follows this list).
- Opaque Reasoning (Black Box AI). Risk: AI makes decisions we don’t understand or can’t audit. Mitigation: Require explainability, log decisions, and document fallbacks.
- Training Data Poisoning. Risk: Bad actors insert malicious examples into AI training sets. Mitigation: Isolate training pipelines and verify data lineage.
- Complacency in Crisis Response. Risk: Belief that AI will always recover or protect us. Mitigation: Scenario planning and table-top exercises to maintain readiness.
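To make two of these mitigations concrete, here is a minimal Python sketch of the degrade-gracefully pattern: try the AI service, fall back to offline rules when it fails, and log which path made each decision so it can be audited later. The service call, the keyword rules, and every name here are illustrative assumptions, not any particular vendor’s API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("triage")

def call_ai_service(ticket: str) -> str:
    """Placeholder for a real AI API call; here it simulates an outage."""
    raise TimeoutError("AI service unreachable")

def keyword_fallback(ticket: str) -> str:
    """Offline, rule-based fallback: crude, but it keeps the queue moving."""
    urgent_terms = ("outage", "down", "breach", "cannot access")
    return "urgent" if any(t in ticket.lower() for t in urgent_terms) else "routine"

def triage(ticket: str) -> str:
    """Try the AI path first; on failure, degrade to offline rules and
    record which path decided, so the decision can be audited later."""
    try:
        label = call_ai_service(ticket)
        log.info("decision=%s source=ai ticket=%r", label, ticket)
    except (TimeoutError, ConnectionError) as exc:
        label = keyword_fallback(ticket)
        log.warning("decision=%s source=fallback reason=%s ticket=%r",
                    label, exc, ticket)
    return label

print(triage("Payments page is down for all customers"))  # -> urgent, via fallback
```

The point is not the keyword rules. The point is that the fallback path exists, works offline, and leaves an audit trail, which is exactly what a no-AI drill should exercise.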
Risk Mitigation Strategies and Further Reading
Taking the next step means preparing your organization deliberately. Here are some practical areas to focus on:
- Business Continuity Planning (BCP): Document key operations and define AI failure protocols.
- Resilience Engineering: Build systems to absorb, adapt, and recover from failures.
- Cybersecurity Hygiene: Regular patching, endpoint monitoring, and access control.
- Human-in-the-Loop Systems: Ensure fallback controls for high-stakes AI decisions (a minimal approval-gate sketch follows this list).
- Communication Plans: Keep customers and stakeholders informed during disruptions.
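One way to implement the human-in-the-loop item above is a simple approval gate: the AI acts alone only on low-stakes, high-confidence recommendations, and everything else is queued for a person. The sketch below is a Python illustration under those assumptions; the action names, threshold, and confidence field are hypothetical.

```python
def ai_recommendation(case_id: str) -> dict:
    """Stand-in for a model's output: a proposed action plus its confidence."""
    return {"action": "deny_claim", "confidence": 0.72}

# Actions that should never execute without human sign-off (assumed list).
HIGH_STAKES = {"deny_claim", "shut_down_system"}

def needs_human(rec: dict, threshold: float = 0.95) -> bool:
    """Escalate anything high-stakes or below the confidence threshold."""
    return rec["action"] in HIGH_STAKES or rec["confidence"] < threshold

def decide(case_id: str) -> str:
    rec = ai_recommendation(case_id)
    if needs_human(rec):
        # In production this would enqueue the case for review, not resolve it.
        return f"escalated to human reviewer: {rec['action']} ({rec['confidence']:.0%})"
    return f"auto-approved: {rec['action']}"

print(decide("case-1138"))  # -> escalated: deny_claim is high-stakes
```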
- NIST Guide to Business Continuity Planning
- Resilience Engineering Association
- OECD Guidance on AI Risk and Accountability
Don’t wait for a crisis to find your weak spots. Identify and address them now, while you still can.
We Used to Accept Risk and Close Up Shop
For generations, we accepted that sometimes things break. When the power went out, when the systems crashed, we put up the “Closed” sign and dealt with it. That was survivable when technology was a tool. But now, technology is the infrastructure. If it goes, everything goes.
For some businesses, accepting that level of risk is still feasible. A few hours offline might mean some lost revenue. For others—emergency services, public health, food logistics—the cost of downtime could be measured in lives.
It’s not about fear. It’s about being deliberate. Each organization needs to define what risk is acceptable and what needs to be mitigated.
The da Vinci Dilemma: When the Machine Is in the Room
In 2018, I signed a consent form the length of my arm. I agreed to have my prostate removed by the da Vinci surgical robot. The waiver detailed every technical failure imaginable:
- Software glitches (including bugs and hacking)
- Mechanical arm malfunctions
- Surgeon misinterpreting feedback
- Network disruptions
- Electrical outage (including backup generator failure)
Yet, I accepted that risk. Why? Because the risk of doing nothing was worse. That’s how risk should be evaluated: not by fantasy, but by tradeoff.
Before we wrap up, we invite you to pause with The Catalyst by Linkin Park. This haunting video immerses us in a world on the brink, where systems fracture and survival hinges on adaptation. It’s more than art—it’s a visceral reminder of what happens when we depend on fragile systems without backup plans.
Conclusion: Be Deliberate, Not Doomed
This is not a call to retreat from AI or automation. It’s a reminder: the more we delegate to machines, the more critical it becomes to understand what happens when they fail.
We don’t need to go back to candles and cash boxes. But we do need:
- Manual backups
- Trained humans
- Redundant systems
- Clear escalation paths
The match may be inevitable. But we don’t have to let it burn down the house.