

I’ve listened to a couple of interviews with the author about this book, and I have not found them persuasive. I can accept that there’s a possibility that artificial superintelligence (ASI) could occur soonish, and that it is likely to occur eventually. I can also accept that such an ASI could choose to do something that kills everyone, and that it would be extremely difficult to stop it.
I see no reason to accept the two other arguments necessary for the title claim. First, that any ASI must necessarily choose to kill everyone. The paper clip scenario is the basic shape of the arguments presented. I think it’s probably impossible to predict what an ASI would want, and very unlikely that it would be so simple-minded as to convert the solar system into paper clips. It’s a weird proposal that an ASI must be incomprehensibly capable and brainless at the same time.
Second, that the alignment problem cannot be solved before the superintelligence problem on current trajectories. Again, this may be true, but I do not think it’s a given that current AI techniques are sufficient for human-level, let alone superhuman, intelligence.
Overall, the problem is that the author argues the risk is a certainty. I don’t know what the real risk is, but I do not believe it is 100%. Perhaps it’s a rhetorical choice, an overstatement to scare people into accepting his proposals. Whatever the reason, I’m sympathetic to the actual proposals: that we need better monitoring and safety controls on AI research and hardware, including a moratorium if necessary. The risk isn’t 100%, but it’s not 0% either.
I’m just commenting on the book; I find YouTube videos pretty insufferable. I guess that’s a tangent.