

To slightly expand on that, there's also a rather well-known(?) quote by the English mathematician G.H. Hardy, written in A Mathematician's Apology (1940):
A science is said to be useful if its development tends to accentuate the existing inequalities in the distribution of wealth, or more directly promotes the destruction of human life.
(Ironically, two of the fields he claimed had no wartime use - number theory and relativity - were later used to break Enigma encryption and to develop nuclear weapons, respectively.)
Expanding further, Pavel has noted on Bluesky that Russia's mathematical prowess was a consequence of the artillery corps requiring it for trajectory calculations.
I can think of some more realistic ideas: AI-generated foraging books leading to people being poisoned, chatbot-induced psychosis leading to suicide, AI falsely accusing someone and sending a lynch mob after them, or people becoming utterly reliant on AI to function, leaving them vulnerable to being controlled by whoever owns whatever chatbot they're using.
All of these require zero technological leaps, and as a bonus, they don't need the "exponential growth" setup LW's AI Doomsday Scenarios™ require.
EDIT: Come to think of it, if you really wanted to make an AI Doomsday™ kinda movie, you could probably do an Idiocracy-style dystopia where the general masses are utterly reliant on AI, the villains control said masses through said AI, and the heroes have to defeat them by breaking the masses' reliance on AI.