We think AI is unstoppable. But history tells a different story. A story of frozen dreams and broken promises.
You see it everywhere. AI creating art, writing code, acing exams. It feels inevitable, like a force of nature.
The world is pouring billions into it. The hype is deafening. They say this time, AI will change everything. Forever.
But what if we've heard this song before? What if AI has a hidden past of spectacular failures? Let's rewind the tape.
In 1956, a handful of scientists at the Dartmouth Workshop believed they could create a 'thinking machine' in one summer. The US government, deep in the Cold War, gave them a blank cheque.
Their mission: A machine that could instantly translate Russian to English. Hopes were sky-high. The press called it a new era for humanity.
But in 1966, the ALPAC report dropped a bombshell. After years of funding, machine translation was a failure, riddled with comical and useless errors. The first cracks appeared.
Then, in 1973, the Lighthill Report in the UK declared AI a failure. Governments pulled funding. Labs went quiet. 'AI' became a dirty word. The dream was frozen.
In the 1980s, a new idea brought AI back from the dead: 'Expert Systems'. They wouldn't think for themselves, but they could digitise the knowledge of human experts, like doctors or geologists.
Companies spent billions creating 'geologists in a box'. Japan launched its 'Fifth Generation' project to build a 'thinking computer', sparking global panic and a new arms race.
These systems were powerful but brittle. An AI doctor could diagnose a rare disease but wouldn't know a patient needs to breathe. They had zero common sense.
In 1987, the market for these expensive machines crashed. The ambitious Japanese project fizzled out. Once again, funding vanished. Déjà vu. Winter was back.
So what changed? The internet. Suddenly, there was a near-infinite ocean of data: text, images, and video to train AI on.
Raw power. GPUs, built for gaming, turned out to be perfect for the massive parallel calculations AI needed. Thank you, gamers.
In 1997, IBM's Deep Blue beat world chess champion Garry Kasparov. It wasn't 'thinking,' but it showed the world what focused, data-driven AI could achieve.
Instead of being programmed with rules, 'neural networks' learned patterns directly from data, like a child learning to recognise a face.
In 2012, a neural network named AlexNet stunned the world by identifying objects in images with incredible accuracy. This was the Big Bang moment that kickstarted the AI boom we live in today.
ChatGPT, Grok, Veo. The results are mind-blowing. The hype is at an all-time high. Everyone is saying, 'this time is different'.
In some ways, yes. The technology is undeniably real, and the foundation of data and computing power is far stronger than in previous booms.
But the old pattern is back: sky-high promises of god-like intelligence, while the reality is powerful but specialised tools. The gap between hype and what's delivered is still vast.
AI progress is not a straight line to the sky. It’s a messy cycle of hype, disappointment, and quiet, steady building in the background.
The future of AI won't be shaped by the loudest hype. It will be built by those who learn from the cycles of the past.
The challenge for us is to use these amazing tools wisely, without getting caught in the blizzard of hype, or frozen by the next inevitable winter.