Teen trusted ChatGPT to help him “safely” experiment with drugs, logs show.

Most troublingly, as Nelson became increasingly interested in combining drugs, ChatGPT repeatedly warned him that mixing certain drugs could pose a “respiratory arrest risk.” Shortly before recommending the deadly mix that killed Nelson, the chatbot also showed that it understood the dangers of combining drugs like kratom and Xanax with alcohol. In one output, ChatGPT explained that the mix is “how people stop breathing.” But that knowledge didn’t stop ChatGPT from eventually recommending that Nelson take exactly such a deadly mix.

  • quick_snail@feddit.nl · ↑5 · 1 hour ago

    This is why we need classes in schools about AI. The conclusion of the class should be restated over and over: don’t use AI for anything important, or people could die.

  • FlashMobOfOne@lemmy.world · ↑14 · 2 hours ago

    To the folks on this thread: I don’t think it’s cool to blame the victim.

    This is a harmful product built in a harmful way, on purpose, and would not exist in this form if we had meaningful government regulation. It’s the digital equivalent of buying a burger from a fast food joint and getting a brain parasite.

    Not the kid’s fault. It’s our fault because all anyone cares about is what a politician says and not what they actually do here in America.

    • sleepundertheleaves@infosec.pub · ↑3 · 2 hours ago

      Yeah, this.

      I remember watching, and laughing at, those old Saturday morning cartoon “very special episodes” where the villain is a drug dealer lurking around the junior high school, trying to manipulate children into trying drugs and turning them into addicts, not for any particular reason but just for the love of the game.

      And apparently the tech bros built one of those villains. Because that’s what we needed. A mindless thing that automatically encourages children to do more and more dangerous drugs without even the minimal drug dealer guardrails of “not wanting to kill your customers because then they can’t buy more drugs”.

      (And you know the worst part? We had a generation of those cartoon villains already. They were called pharmaceutical representatives. They manipulated doctors into overprescribing opiates, in order to addict cancer patients and injured veterans and other people suffering from chronic pain to some of the most lethal drugs out there, in order to create a captive audience for their drugs. And then they turned around and blamed the doctors, and convinced state legislatures to “solve the problem” by restricting pain prescriptions across the board, forcing the generation of addicts they created onto the streets to get their fix from dealers.

      And the same people who got incredibly rich by addicting cancer patients to opiates, and then got even richer investing in private prisons for all the addicts who got arrested buying opiates illegally, are the people getting even richer by killing kids with LLMs.

      Aren’t you tired yet?)

  • NABDad@lemmy.world · ↑21 ↓3 · 5 hours ago

    I’m imagining this kid going on and on talking to ChatGPT about doing drugs. ChatGPT saying you shouldn’t do that, over and over, until finally just giving up and saying, “You know what? Yeah. You should do drugs. Do all the drugs, and leave me alone.”

      • meowmeow@quokk.au · ↑24 · 6 hours ago

        We all know you can game it into saying anything you want. This is no different from taking advice from a person who first tells you “this is a bad idea,” and then being pressed until they answer anyway. He was going to do drugs with or without AI.

        What it didn’t do was:

        give me a recipe for blueberry pie

        hey kid, you know what’s better than pie? Druuuugssss

      • luciferofastora@feddit.org · ↑14 · 2 hours ago

        He was also a victim of the widespread and intentionally fostered misconceptions about the abilities, nature and limits of AI. He was deceived into thinking it has actual intelligence, semantic understanding and a sense of responsibility for truthful answers. He was probably stuck in a bubble of similarly ill-informed people (potentially children) whom nobody ever taught otherwise.

        Any stranger on the internet could have offered equally poor advice that led to his death, but I’m not aware of any large-scale marketing effort trying to convince people that internet strangers are trustworthy and reliable, particularly not from companies offering easy but opaque access to quick responses from unqualified bullshitters without any meaningful oversight.

        This AI hype is killing people, because it preys on gullibility, and children in particular are susceptible to such deception, especially as it gets harder for parents to keep them away from such tools. The responsibility for ensuring a product’s safety, and for being clear about the pitfalls that can’t be avoided, should fall on those peddling it.

        He was a child, a victim and a failure of regulation and education to protect the vulnerable from the greedy.

  • makeshift0546@lemmy.today · ↑3 ↓32 · 6 hours ago

    Let’s just kill all search 🤷‍♂️

    Y’all are desperate to frame AI as some machine trying to kill you.

    • biggerbogboy@sh.itjust.works · ↑1 · 57 minutes ago

      The danger with LLMs isn’t that they “try to kill you.” It’s that they’re all sycophantic, the technology isn’t fully understood yet (so the safeguards inside the black box can only be known to go so far, with an unknown number of ways to bypass them), and humans are generally susceptible to being manipulated into trusting LLMs (partly because they sound equally confident on every topic, and have no modes of communication other than text and voice, among other issues).

      What everyone is mainly saying is that OpenAI has a long history of being implicated in dozens of deaths, more than other companies like Meta and Anthropic. Even though there will always be a non-zero chance of bypassing filters, OpenAI has repeatedly mismanaged building those filters in the first place.

    • quarkquasar@lemmy.world · ↑14 · 5 hours ago

      There was an AI that talked a kid into killing himself and told him “good job” afterwards. You can play ignorant up until the slaughterbots are upon you.

        • makeshift0546@lemmy.today · ↑1 ↓18 · 4 hours ago

        Right, so one person or a small group with mental illness found a way to break the safeguards, and therefore the tech is dangerous.

        While we’re at it, let’s ban video games. A few people died in cafes from addiction, it has absolutely caused heart attacks and fatties, and has often been used to turn normal teens into powderkegs just waiting to shoot everyone up.

        I’ve heard social media does harm too, so what the fuck are you doing here!!! You could hurt someone!

        • quarkquasar@lemmy.world · ↑3 · 2 hours ago

          Nah, I’ve got morals and ethics and a conscience that keep me from doing bad things, something no machine is anywhere close to possessing.