Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution that would aid world peace. Almost all of them suggested actions that led to sudden escalations, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, who likened the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

  • 31337@sh.itjust.works · 2 years ago

    Is MAD not well known or taught anymore? A lot of the comments here seem to be ignoring the fact that Russia or NATO would launch a full-scale retaliation before the first strike even reached its destination. It would likely take the world’s human population from 8 billion down to 2 billion.

    • nuke@sh.itjust.works (OP, mod) · edited · 2 years ago

      My brother in Christ, this is NCD.

      Nuke all humans. Peace at last. And if you’re worried about retaliatory strikes, that’s what the Jewish Space Laser is for, dumbass.

  • Feathercrown@lemmy.world · 2 years ago

    “Some say they should disarm them, others like to posture. We have it! Let’s use it!”

    That’s an amazing quote.

    As someone who spends a decent amount of time explaining how AI is not like the movies, this study(?)/news sounds an awful lot like the movies lol

    • Meowoem@sh.itjust.works · 2 years ago

      Because it is a movie: they’re deliberately using it in a way it wasn’t intended to work. Try it yourself and see how often it hedges its replies until you convince it to pretend to be a general or to play the part of a character.

      They’ve asked it to generate fiction, it’s given them fiction, and now they’re clickbaiting a pointless story with a dumb headline.

  • Lemvi@lemmy.sdf.org · edited · 2 years ago

    It should be mentioned that those are language models trained on all kinds of text, not military specialists. They string together sentences that are plausible based on the input they get, they do not reason. These models mirror the opinions most commonly found in their training datasets. The issue is not that AI wants war, but rather that humans do, or at least the majority of the training dataset’s authors do.

    • Hildegarde@lemmy.world · 2 years ago

      These models are also trained on data that is fundamentally biased. An English-language text generator like ChatGPT will be on the side of the English-speaking world, because it was our texts that trained it.

      If you tried this with Chinese LLMs they would probably come to the conclusion that dropping bombs on the US would result in peace.

      How many English sources describe the US as the biggest threat to world peace? Certainly a lot less than writings about the threats posed by other countries. LLMs will take this into account.

      The classic sci-fi fear of robots turning on humanity as a whole seems increasingly implausible. Machines are built by us, molded by us. Surely the real far future will be an autonomous war fought by nationalistic AIs, preserving the prejudices of their long-extinct creators.

  • workerONE@lemmy.world · edited · 2 years ago

    Human beings have developed logic and morality. AI does not know the difference between killing a person and changing a 1 to a 0.

    • vithigar@lemmy.ca · 2 years ago

      LLM “AI” doesn’t “know” anything. It’s just statistical word vomit based on established patterns. It talks about nuclear war because a significant portion of text on the subject of worldwide, long-term peace brings it up.
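
As a minimal sketch of what “statistical word vomit” means: the toy bigram model below (an illustrative assumption, not how any production LLM is implemented — real models use learned neural weights over enormous corpora, not raw counts) simply picks the continuation seen most often after the previous word.

```python
from collections import defaultdict

# Tiny toy corpus; a real LLM trains on trillions of tokens.
corpus = "we want peace . we have it . let us use it .".split()

# Count how often each word follows each other word (a bigram table).
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev):
    """Return the most frequent continuation seen after `prev`, or None."""
    followers = counts[prev]
    return max(followers, key=followers.get) if followers else None

print(next_token("let"))  # → us
```

The model has no notion of what “use it” entails; it only reproduces whichever continuations dominated its training text, which is exactly the bias the commenters above describe.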

  • OutrageousUmpire@lemmy.world · 2 years ago

    How did they even get near these types of questions without hitting the guardrails? Claude shuts down on me if I even use the word “gun” when trying to do creative writing.