• 0 Posts
• 48 Comments
Joined 20 days ago
Cake day: January 27th, 2026

  • Also, this post, which is where I got the xAI co-founder statement from, goes over other things:

    -the Anthropic team lead quitting (which we already discussed in this thread)

    -AI apparently being so good that a filmmaker with 7 years of experience said it could do 90% of his work (Edit: I thought this model was unreleased; it’s not, this article covers it)

    -the Anthropic safety team + Yoshua Bengio talking about AIs being aware of when they’re being tested and adjusting their behaviour (plus other safety stuff like deepfakes, cybercrime and other malicious misuse)

    -the US government being ignorant of safety concerns and refusing to fund the international AI safety report (incredibly par for the course for this trash fire of an administration; they’ve defunded plenty of other safety projects as well)

  • This article involves an incredibly eyebrow-raising take from one of the people at METR (the team behind the famous “tasks AI can do doubles every 7 months” graph), who says AI is eventually going to become more impactful than the invention of agriculture and more transformative than the emergence of the human species, and also calls it an intelligent alien species. Immensely funny amongst the other people saying “please stop treating AI like magic”.

    The Harari guy also seems to be into transhumanism, if a skim of his Wikipedia page is correct. The “this is the first time in history that we have no idea what the world will look like in 10 years” claim is also an eyebrow-raiser; I could probably rattle off a couple of counterexamples (e.g. the two world wars).

  • It also completely leaves out important context on the Iran/Stuxnet example, namely that it was a joint effort between two countries that is believed to have been in development for five years. The idea that AIs will engage in lightspeed wars and disable all critical infrastructure in a single day, while speaking in alien languages and forming alliances, is an unreasonable extrapolation of current capabilities. It also completely ignores the segment where the Anthropic team implemented safeguards and communicated with the teams behind the software to patch out the bugs. It’s the most blatant fearmongering ever. Thank god the comments contain reasonable responses and breakdowns of the post. That channel’s way of highlighting papers just pisses me off.