Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.

(Since this is a personal blog, I’ll clarify that I am not the author.)

  • Lvxferre [he/him]@mander.xyz · 20 hours ago

    Oh fuck. Then it gets even worse (and funnier). Because even if that had been a human contributor, Shambaugh acted 100% correctly, and that defeats the core lie the bot output.

    If you’ve got a serious collaborative project, you don’t want to enable the participation of people who act on assumptions, because those people ruin everything they touch with their “but I thought that…”, unless you actively fix their mistakes, i.e. more work for you.

    And yet once you construe that bloody bot’s output as human actions, that’s exactly what you get: a human who assumes. A dead weight and a burden.

    It remains an open question whether it was set up to do that or, more probably, did it by itself because the Markov chain came up with the wrong token.

    A lot of people would disagree with me here, but IMO they’re the same picture. In either case, the human enabling the bot’s actions should be blamed as if those were their own actions, regardless of their “intentions”.

    • leftzero@lemmy.dbzer0.com · 19 hours ago

      IMO they’re the same picture. In either case, the human enabling the bot’s actions should be blamed as if those were their own actions, regardless of their “intentions”.

      Oh, definitely. It’s 100% the responsibility of the human behind the bot in either case.

      But the second option is scarier, because there are a lot more ignorant idiots than malicious bastards.

      If these unsupervised agents can be dangerous regardless of the intentions of the humans behind them, we should make the idiots using them aware that they’re playing with fire, and that they can get burnt and burn other people in the process.