

I aired some Reviewer #2 grievances in the bsky comments:
https://bsky.app/profile/ronanfarrow.bsky.social/post/3mitapp7j2s2c
"Kalanick now runs a robotics startup; in his free time, he said recently, he uses OpenAI's ChatGPT 'to get to the edge of what's known in quantum physics.'"
As a physicist, I have never pressed F to doubt harder.
"In 2022, researchers at a pharmaceutical company tested whether a drug-discovery model could be used to find new toxins; within a few hours, it had suggested forty thousand deadly chemical-warfare agents." To the best of my knowledge, these suggestions were never evaluated by any other researchers.
(The original paper was published as a "comment": https://www.nature.com/articles/s42256-022-00465-9)
Similar claims of AI-facilitated discoveries have turned out to be overblown in other fields.
https://pubs.acs.org/doi/pdf/10.1021/acs.chemmater.4c00643
"In a 2025 study, ChatGPT passed the test more reliably than actual humans did."
If this is referring to Jones and Bergen's "Large Language Models Pass the Turing Test", that's a preprint (arXiv:2503.23674) that has yet to pass peer review over a year after its posting.
"A classic hypothetical scenario in alignment research involves a contest of wills between a human and a high-powered A.I. In such a contest, researchers usually argue, the A.I. would surely win"
Which researchers?
(Hint: Eliezer Yudkowsky is not a researcher.)
AI: "I will convince you to let me out of this box"
Humanity (wringing hands): "Oh, where is our savior? Who will stand fast in the face of all entreaties?"
Bartleby the Scrivener: hello
"...a hub of the effective-altruism movement whose commitments included supporting the distribution of mosquito nets to the global poor."
Phrasing like this subtly underplays how the (to put it briefly) weird people were part of EA all along.
https://repository.uantwerpen.be/docman/irua/371b9dmotoM74
"In late 2022, four computer scientists published a paper motivated in part by concerns about 'deceptive alignment,' ... one of several A.I. scenarios that sound like science fiction--but, under certain experimental conditions, it's already happening."
Barrett et al.'s arXiv:2206.08966? AFAIK, that was never peer-reviewed either; "posted" is not the same as "published". And claims in this area are rife with criti-hype:
https://pivot-to-ai.com/2025/09/18/openai-fights-the-evil-scheming-ai-which-doesnt-exist-yet/
Oh, right, the "Future of Life Institute". Pepperidge Farm remembers:
"In January 2023, Swedish magazine Expo reported that the FLI had offered a grant of $100,000 to a foundation set up by Nya Dagbladet, a Swedish far-right online newspaper."
https://en.wikipedia.org/wiki/Future_of_Life_Institute#Activism
"Tegmark also rejected any suggestion that nepotism could have played a part in the grant offer being made, given that his brother, Swedish journalist Per Shapiro ... has written articles for the site in the past."
https://www.vice.com/en/article/future-of-life-institute-max-tegmark-elon-musk/



https://www.reddit.com/r/Physics/comments/1s19uru/gpt_vs_phd_part_ii_a_viewer_reached_out_with_a/