MIT’s Project NANDA has a new paper: “The GenAI Divide: State of AI in Business 2025.” This got a writeup in Fortune a few days ago about how 95% of generative AI projects in business just fail. Th…
The problem isn’t AI but layman’s assumptions about what AI means.
Expert systems (a bunch of if/else rules) are AI. Chess programs are AI. Optical character recognition is AI. Markov chain programs are AI. LLMs are AI.
LLM AI is useful. It doesn’t need to be a self-aware, superhuman intelligence to provide tremendous efficiency gains to a business just by fixing the grammar in inter-office emails.
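To make the point concrete, here's a minimal sketch of two of those "AI" styles: an expert system that really is just a bunch of if/else rules, and a tiny Markov chain text generator. The function names, rules, and thresholds are all made up for illustration.

```python
import random

def triage(temp_f, cough):
    # Expert system: literally a chain of if/else rules written by a human.
    if temp_f > 103:
        return "see a doctor"
    elif temp_f > 100 and cough:
        return "rest and monitor"
    else:
        return "no action needed"

def markov_babble(words, length=5, seed=0):
    # Markov chain: the "model" is just a table of which word followed
    # which word in the training text; generation is repeated lookup.
    random.seed(seed)
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    word = words[0]
    out = [word]
    for _ in range(length):
        word = random.choice(chain.get(word, words))
        out.append(word)
    return " ".join(out)

print(triage(104, False))                      # -> see a doctor
print(markov_babble("the cat sat on the mat".split()))
```

Neither of these learns anything at runtime or "understands" anything, yet both were marketed as AI in their day, which is exactly the definitional slippage the comment is pointing at.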
The best and most concise explanation I’ve seen. Thank you.