If today were ten years ago, this article would be excellent science fiction. It’s long, and written by someone I’d like to punch in the head, but it’s gotta be read and I couldn’t stop.
If anyone wants to debunk it and tell me it’s all wrong, please do; I’d sure appreciate it, because it reads like the end of everything.
This is different from every previous wave of automation, and I need you to understand why. AI isn’t replacing one specific skill. It’s a general substitute for cognitive work. It gets better at everything simultaneously. When factories automated, a displaced worker could retrain as an office worker. When the internet disrupted retail, workers moved into logistics or services. But AI doesn’t leave a convenient gap to move into. Whatever you retrain for, it’s improving at that too.
… Imagine it’s 2027. A new country appears overnight. 50 million citizens, every one smarter than any Nobel Prize winner who has ever lived. They think 10 to 100 times faster than any human. They never sleep. They can use the internet, control robots, direct experiments, and operate anything with a digital interface. What would a national security advisor say?
Also, let me point out they didn’t properly grade the bar exam: https://www.livescience.com/technology/artificial-intelligence/gpt-4-didnt-ace-the-bar-exam-after-all-mit-research-suggests-it-barely-passed
It did excellently on the multiple-choice section, but so would literally any law student using Google.
And that’s not the only lie. It can’t even repeat stuff we already know. I occasionally give a model one of my own, by now decades-old, papers without the abstract and conclusions and ask what it can conclude. It got it completely wrong. Like not-even-funny wrong: wrong conclusions, wrong theory, wrong methodology.
It’s pretty fun to see AI boosters get upset at that and blame my paper for the LLM saying literally the opposite of what it says.