• 8 Posts
  • 154 Comments
Joined 2 years ago
Cake day: July 1, 2023



  • The last time I recall having engaging, thoughtful discussions on the internet was way back in the days of forums. And that was so long ago I’m skeptical of my own memory of it.

    Lemmy comments may be different from Reddit comments, but they’re not better. I’ve concluded it’s structural. This format simply does not produce useful conversation.

    None of the other social media formats produce it either. Perhaps it’s the result of optimizing for attention, which all social media does, whether by deliberate design or natural selection. Platforms that get attention grow; those that don’t languish. It may be that the things best at gathering attention are inherently repellent to deeper, slower, more careful thinking.

    Actually, maybe I can think of one example. I’m stretching the definition of social media, and I don’t have firsthand experience, but the way that Wikipedia operates may be a clue toward how to build a platform that produces useful dialogue.







  • Sounds like a common-cause (confounding) situation. It makes sense that homogeneous communities are better at building unions: building solidarity with people who are different from oneself is more work than with people who are similar.

    And it’s been found that exposure to different people and cultures reduces racist beliefs, so it also makes sense that homogeneous communities would be more racist.

    So the causal factor would be homogeneity; racism and unionization would both be downstream effects, correlated only through that shared cause (sketched below).
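
    To make that concrete, here is a minimal simulation sketch (Python; the variable names and effect sizes are purely illustrative assumptions, not data): a single common cause produces a correlation between two outcomes that never influence each other, and the correlation vanishes once that cause is held roughly fixed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Latent common cause: how homogeneous each community is (0 = diverse, 1 = uniform).
    homogeneity = rng.uniform(0, 1, n)

    # Both outcomes depend on homogeneity, but not on each other.
    union_strength = 0.8 * homogeneity + rng.normal(0, 0.2, n)
    racist_beliefs = 0.6 * homogeneity + rng.normal(0, 0.2, n)

    # The two effects correlate despite having no direct causal link...
    print(np.corrcoef(union_strength, racist_beliefs)[0, 1])  # ~0.5

    # ...and the correlation mostly disappears once homogeneity is held
    # roughly fixed (here: communities in a narrow homogeneity band).
    band = (homogeneity > 0.45) & (homogeneity < 0.55)
    print(np.corrcoef(union_strength[band], racist_beliefs[band])[0, 1])  # ~0
    ```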







  • If you are what you do, then what determines what you do? Random chance? I don’t see how one can argue that people don’t have an essence and still explain why they act at all. Rousseau said that essence was benign; Hobbes said it was wretched. It has to be something. If people were perfectly free of compulsion, would they do nothing?




  • I think there may be more opportunity for success here than your argument seems to suggest.

    I agree with the focus on inequality. The sense that society is fundamentally unfair has a corrosive and a radicalising effect on politics. People can react to it in very different ways, from redistribution to out-group scapegoating, but the underlying grievance is the same: people see that there is vast wealth available in our society while they’re still struggling.

    Where I may disagree: I think most people are non-ideological. Not everyone, but a healthy majority. They aren’t focused on the philosophical roots of a candidate’s policies. They care that the candidate:

    1. Sees, likes, and cares about them and their group
    2. Has a vision that gives them hope for something better

    Many people can find that in candidates with a variety of ideological positions. The overlap between people who supported Bernie after the Great Recession and people who went on to support Trump is bigger than one would expect.

    So the equation is much less zero-sum. You don’t lose one reactionary for every radical you bring into your camp. There really aren’t that many committed radicals and reactionaries.

    The most toxic message today is economic moderation: “Hey, it’s not so bad. Things could be a lot worse.” That is where the zero-sum relationship lies. You can’t keep both the people who are doing well and like how things work, and the people who are struggling and want the life they deserve. The material divide isn’t left versus right; it’s status quo versus change. There’s a lot more room for flexibility in the change camp.





  • I’ve listened to a couple of interviews with the author about this book, and I have not found them persuasive. I can accept that there’s a possibility that artificial superintelligence (ASI) could arrive soonish, and is likely to arrive eventually. I can accept that such an ASI could choose to do something that kills everyone, and that it would be extremely difficult to stop it.

    The two other arguments necessary for the title claim, I see no reason to accept. First, that any ASI must necessarily choose to kill everyone. The paper-clip scenario is the basic shape of the arguments presented. I think it’s probably impossible to predict what an ASI would want, and very unlikely that it would be so simple-minded as to convert the solar system into paper clips. It’s a strange proposal that an ASI must be both incomprehensibly capable and simultaneously brainless.

    Second, that on current trajectories the alignment problem cannot be solved before the superintelligence problem. Again, this may be true, but I do not think it’s a given that current AI techniques are sufficient for human-level, let alone superhuman, intelligence.

    Overall, the problem is that the author argues the risk is a certainty. I don’t know what the real risk is, but I do not believe it is 100%. Perhaps it’s a rhetorical concession, an overstatement to scare people into accepting his proposals. Whatever the reason, I’m sympathetic to the actual proposals: that we need better monitoring and safety controls on AI research and hardware, including a moratorium if necessary. The risk isn’t 100%, but it’s not 0% either.