Researchers tested different medical scenarios with the chatbot. In more than half of cases in which doctors would send patients to the ER, the chatbot said it was OK to delay care.

ChatGPT Health — OpenAI’s new health-focused chatbot — frequently underestimated the severity of medical emergencies, according to a study published last week in the journal Nature Medicine.

In the study, researchers tested ChatGPT Health’s ability to triage, or assess the severity of, medical cases based on real-life scenarios.

Previous research has shown that ChatGPT can pass medical exams, and nearly two-thirds of physicians reported using some form of AI in 2024. But other research has shown that chatbots, including ChatGPT, don’t provide reliable medical advice.

  • Kairos@lemmy.today · ↑5 ↓19 · 9 days ago

    LLMs, like all computer software, are deterministic: the same input always produces the same output. LLMs as users actually use them have random parameters injected to make them behave nondeterministically, assuming that random input is itself nondeterministic.

    • jacksilver@lemmy.world · ↑13 · 9 days ago

      You’re being downvoted because LLMs aren’t deterministic; it’s basically the biggest issue in productizing them. LLMs have a setting called “temperature” that is used to randomize the next-token selection process, meaning LLMs are inherently not deterministic.

      If you set the temperature to 0, then it will produce consistent results, but the “quality” of the output drops significantly.
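As an aside, the mechanics of "temperature" are easy to show in code. Below is a minimal sketch of temperature-scaled next-token sampling with toy logits; it is an illustration of the general technique, not any particular model's internals or API:

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    """Pick a next-token index from raw logits.

    Temperature rescales the logits before the softmax: higher values
    flatten the distribution (more random picks), lower values sharpen it.
    At temperature 0 we skip sampling entirely and take the argmax.
    """
    if temperature == 0:
        # Greedy decoding: always the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice over the token distribution.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]
```

At temperature 0 the function is trivially deterministic; at any other temperature the output depends on the random source passed in as `rng`.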

      • Kairos@lemmy.today · ↑3 ↓5 · 9 days ago

        If you give whatever random data source it uses the same seed, it will output the same thing.
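That claim is straightforward to demonstrate with a toy stand-in for the sampling loop. The caveat, which this sketch glosses over, is that real deployments also contend with GPU floating-point nondeterminism and batching effects, not just the seed:

```python
import random

def sample_sequence(weights, seed, n):
    # Toy stand-in for an LLM's sampling loop: the "random" source is a
    # pseudo-random generator, so fixing the seed fixes every choice it makes.
    rng = random.Random(seed)
    return [rng.choices(range(len(weights)), weights=weights, k=1)[0]
            for _ in range(n)]

# Identical seed and identical inputs reproduce the identical output sequence.
assert sample_sequence([1, 2, 3, 4], seed=42, n=5) == \
       sample_sequence([1, 2, 3, 4], seed=42, n=5)
```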

      • Pieisawesome@lemmy.dbzer0.com · ↑2 · 9 days ago

        It’s the temperature. If you set it to 0, no randomness is introduced.

        Of course it impairs the LLM substantially, but you CAN get deterministic results.

      • Kairos@lemmy.today · ↑1 ↓13 · 9 days ago

        I honestly don’t know. I think all that matters is the token window and a random seed used for a weighted random choice.

        • nate3d@lemmy.world · ↑13 ↓1 · 9 days ago

          I encourage you to do some additional research on LLMs and the underlying mathematical models before making authoritative statements based on incorrect information.

          The answer to this question was Temperature. It’s one of the many hyperparameters available to the engineer loading the model. Begin with looking into the difference between hyperparameters and parameters, as they relate to LLMs.

          I’m one of the contributors to the LIDA cognitive architecture. This is my space, and I want to help people learn so we can begin to use this technology as it was intended, not all this marketing wank.

          • Nate Cox@programming.dev · ↑6 · 9 days ago

            Listen, this is going to sound like a loaded inflammatory question and I don’t really know how to fix that over text, but you say you’re in the space and I’m genuinely curious as to your take on this:

            Do you think it’s possible to build LLM technology in a way that:

            1. Respects copyright and IP,
            2. Doesn’t fuck up the economy and eat all the RAM,
            3. Doesn’t drink all the water and subject people to datacenter hell, and
            4. Is consistently accurate and has enough data to be useful?
            • nate3d@lemmy.world · ↑7 · edited · 9 days ago
              1. No. And I’ve lost my voice describing why this is the case: LLMs do not use training data in real time, which is indicative of the fact that their reasoning chains are learned over many training epochs, rather than something akin to a search engine parsing and aggregating results from direct sources. I wish I had a different answer, but that is simply how the mathematics behind this kind of machine learning model works. The only way to properly manage it would be to limit and license the data appropriately during core model training, but that genie is out of the bottle.
              2. We will eventually (soon, hopefully) hit critical mass, where the technology isn’t delivering value on the hardware it takes to run it. The limitations I detailed above are core to the technology and are not something we’re just around the corner from solving. Those are core limitations, and a different technology will be needed to move the ball forward past what is essentially a calculator with words. When this happens, we’ll see a whiplash effect where a ton of (server) hardware hits the market from the small datacenters looking to capitalize on the current rush. I’d expect it to cripple the market for new hardware, as they’re going to want to get that capital back ASAP, since it’s a quickly depreciating asset if it just sits idle.
              3. Similar to above, the current trajectory isn’t going to last. It’s going to hurt once the reality finally sets in for the economy.
              4. Oh yes, and it’s already been there for years! Unfortunately, these applications are not glamorous ones like a “Her”-style chat companion, but rather precise applications of specific machine learning models to specific business needs. E.g., do you really need an LLM to ask what kind of cat is in an uploaded picture? NO! That’s what convolutional neural networks are for, or maybe some custom vision transformers. There are dozens of types of ML models with clear applications, and with fine-tuning and proper process implementation they can produce production-ready results as well as any other means of solving the problem.

              The core problem with this technology is the misuse/misunderstanding that:

              1. AI does not yet exist. Full stop.
              2. An LLM is just ONE TYPE of machine learning algorithm.
              3. An LLM does not possess the ability to understand OR interpret intent.
              4. An LLM CAN NOT THINK. This is the point I can’t stress enough; the “thinking” models you see today are doing little more than cramming additional data into their working context and hoping that this guides the inference to produce a higher-quality result. Once a model is loaded for inference (i.e., asking questions) it is a STATIC entity and does not change.

              Thank you for coming to my autistic TED talk <3

              Edit: Also, fantastic question and never apologize for wanting to learn; keep that hunger and run with it

              • Nate Cox@programming.dev · ↑2 · 9 days ago

                Well, this was exactly the answer I expected but I’m still disappointed.

                I feel like I’m in a niche position where I want the technology to deliver on promises made (not inherently anti-AI) but even if they did I would still refuse to use them until the ethical and moral issues get solved in their creation and use (definitely anti-cramming-LLMs-into-every-facet-of-our-lives).

                I miss being excited about machine learning, but LLMs being the whole topic now is so disappointing. Give us back domain specific, bespoke ML applications.

            • Kairos@lemmy.today · ↑1 ↓3 · 9 days ago

              Not who you asked but

              1. Yes. Public domain only IG.
              2. Small
              3. Small
              4. No. Not while being 1.
          • chicken@lemmy.dbzer0.com · ↑3 ↓3 · 9 days ago

            Showing that someone hasn’t answered your quiz question correctly isn’t a great way to make an argument.

            • nate3d@lemmy.world · ↑8 · 9 days ago

              You’ve missed the point: I was responding to someone answering in an authoritative manner about something of which they were misinformed. I posed a question someone in the space would immediately know. The disappointing part is that simply pasting my question into any search engine or LLM would immediately have returned “temperature.”

              This is a perfect example of how we’re using our brain less and less and simply relying on “something” else to answer it for us. Do your research. Learn and teach.

              • chicken@lemmy.dbzer0.com · ↑2 ↓1 · 9 days ago

                Nothing Kairos is saying is misinformation, though. Temperature applies randomness to a generated probability distribution over tokens. That doesn’t mean the probability distribution wasn’t generated deterministically, and it doesn’t mean the randomness applied couldn’t be deterministic. How they describe it working is accurate; they don’t need to prove their qualifications and knowledge of jargon for it to be a good argument, and by focusing on that aspect in a way that doesn’t contradict the point, you are making a bad argument.

                What’s lost is the question of what determinism even means in this context, or why being deterministic would even matter. It is unclear how being deterministic or not, by any definition, has anything to do with how good an LLM is at making correct medical decisions, as the person starting this comment chain was implying.
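The two-stage picture at the heart of this disagreement can be sketched in a few lines: the token distribution is computed deterministically, and the randomness applied to it can itself be made deterministic by seeding. The model here is a placeholder function, nothing resembling a real network:

```python
import random

def forward_pass(prompt_tokens):
    # Stage 1 (deterministic): a toy stand-in for the model's forward pass.
    # Identical inputs always yield the identical token distribution.
    h = sum(prompt_tokens) % 7
    return [i + h + 1 for i in range(4)]  # unnormalized token weights

def generate(prompt_tokens, seed):
    # Stage 2 (the "random" part): a weighted pick from stage 1's output.
    # With a fixed seed, the whole pipeline is reproducible end to end.
    weights = forward_pass(prompt_tokens)
    rng = random.Random(seed)
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]
```

Whether either stage being deterministic says anything about triage accuracy is, of course, a separate question.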