Thought this interview was interesting:
- He’s very bullish on AI, but specifically wants open-source AI and wants to lower dependence on the AI giants which… fine, I guess
- He seems neutral on the AI Off Switch in Firefox. Seems to understand that it’s important to users, but doesn’t say anything about its future. (He says he sees Mozilla’s job as helping “lead our users through this transition” which kinda sounds like it might go away some day)
- Talks up Mozilla Data Collective & any-LLM, which I wasn’t familiar with but sound… OK, maybe? Data Collective sounds fine.
Anything else interesting in here? I’m still watching through but lemme know what you find
60 minutes of self-congratulatory blathering from olympic-level athletes in the sport of failing upwards.
They’re speaking with the underlying presumption that agi is a real thing that will happen. It’s not, and it won’t. https://www.404media.co/google-deepmind-paper-argues-llms-will-never-be-conscious/
On the topic of Google’s influence over Mozilla being the primary source of their funding: “It’s complicated, it’s entangled, but it seems to be working alright.” JOURNALISM!
Any-LLM platform: Mozilla’s new middleware for developers allowing them to hop between models and LLM companies.
Mozilla data collective: Mozilla’s marketplace platform (platform?) for making “training data” (any content of value to LLM companies) monetizable for the owners of that content.
In response to the loaded question ‘hey, isn’t it bonkers that your “rabid” users forced you to build a “dont do AI shit” setting in Firefox?’:
“It’s being reactionary to our community and reactionary to our users. They’re asking us for that. And so I think in some ways, both we need to respond to that and we need to lead them through it, but we need to provide these other alternatives.”
Because, clearly, the user’s aversion to these tools is a childish phase that they need to get over.
44:10: The ex-head of self-driving at Uber talks about letting his Tesla self-drive, with his kids inside, trying to take over driving and having the car crash into a wall at 30 mph. Lesson learned: “I’m not blaming the software” and (human) “passivity is bad.” Remember when that guy got shot in the face by Dick Cheney, then he held a press conference and apologized for getting shot?
Some grade-A Texas sharpshooting around “could AI invent punk rock?” They argue no, but they’re misunderstanding the premise. Punk didn’t get a cultural foothold because of the music alone. The philosophy, the artists, the choices, the industry and the scene behind and around the music made that happen. LLMs are a million monkeys on a million typewriters, just like artists. Given enough time and resources one will eventually fart out something unique that resonates with the culture and captures people’s interests, and we all conveniently forget about all the failures. The brilliance of human artists is that our failure rate is significantly lower. The reply to this should not be “let’s keep working to make failure cheaper so that it’ll eventually be economical to retire human creativity.”
They attest that the nightmare scenario is that we all become the fat useless humans in Wall-E (which, on its own, is a ridiculously optimistic vision, ignoring the potential hellscape of a ravaged environment, digital feudal lords and impoverished serfdom), but they refuse to stop building all the infrastructure to make those precise futures actually happen.
But it’s ok; we’re going to get led through it.
Yeah the fat useless human is like the least bad thing about Wall-E. Man I love that movie