• bookmeat@fedinsfw.app · 9 hours ago

    Without grounding, correctness is not defined. Hallucination is not a bug that scaling can fix. It is the structural consequence of operating without concepts. – Gregory Coppola

  • Teppa@lemmy.world · 19 hours ago

    AIs don’t know that birds aren’t real, or that the pressure from being underwater for an extended period of time can sometimes cause fish to explode.

    • Sylvartas@lemmy.dbzer0.com · 10 hours ago

      under the pseudonym Johannes Bohannon, John Bohannon …

      I can see why he went into science and not, say, creative writing.

    • MinnesotaGoddam@lemmy.world · 17 hours ago

      They do the same to protect doctors from malpractice lawsuits. There is a (laughably “peer reviewed”) study that claims Tylenol and morphine are equally effective at pain management.

    • Final Remix@lemmy.world · 11 hours ago

      It’s a screenshot of a post on bsky. Don’t read too much into the specifics of the language…

  • Whats_your_reasoning@lemmy.world · 21 hours ago

    “When the text looks professional and written as a doctor writes, there’s an increase in the hallucination rates,” says Omar.

    Huh, now there’s something we have in common. Trying to make sense of something a doctor wrote makes me feel like I’m hallucinating, too. Is there a class in medical school on “Illegible Handwriting,” or is it just a coincidence?

    In all seriousness though, I wish I could be surprised by AI failing at this. We have entered the Misinformation Age. There’s no closing Pandora’s Box, though this time I can’t find the “hope” that’s supposed to be in the bottom of it. Society would have to turn real skeptical real fast, but I’ve met enough people to know that such a transformation is going to take time - and by “time” I mean “decades or longer.” With AI already here, we’d have to wise up immediately… but I fear that humanity isn’t mature enough for that yet.

    • Jako302@feddit.org · 16 hours ago

      We crossed the point where natural skepticism could have saved us months ago. Feedback loops of made-up sources were a problem long before AI was a thing, but now you can be five sources deep, reading through papers published by multiple different scientific journals or universities, and still not have found the actual data all the papers depend on, because there wasn’t any in the first place.

      And once a single one of these papers gets published, there will be about one million SEO articles on shitty clickbait websites that, in this case, would try to sell you a home remedy for your supposed illness. So searching for any useful information is pretty much off the table.

  • Arghblarg@lemmy.ca · 1 day ago

    Good. This shows plainly how LLMs don’t think, don’t truly understand anything, and have no critical ability to do introspection or fact-checking. It seems the only way to teach the world of these things is to make it impossible to ignore via absurd demonstrations like this. If the “AI” well must be poisoned in order to wake people up, I’m all for it.

    • Teppa@lemmy.world · 19 hours ago

      Isn’t 80% of its data from Reddit anyway? It seems quite poisoned already, given the number of confidently incorrect people there.

      With how Reddit is monetizing itself now, I’d assume Lemmy will actually become more widely used than Reddit, since it should stay totally free.

  • RagingRobot@lemmy.world · 1 day ago

    I wonder: if we got a group together to go on Reddit and Stack Overflow, give really wrong programming answers, and vote them to the top, would Claude start sucking? They could always just revert to a previous model, and it would probably be too hard to get enough people and content to have an effect with such large training sets. Maybe if you use AI? Lol
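    The poisoning idea above can be sketched with a toy model. Everything here is hypothetical: real LLM training is far more complicated than a vote-weighted lookup table, but the failure mode — mass-upvoted wrong answers crowding out correct ones in scraped data — is analogous.

```python
from collections import defaultdict

def train(scraped_answers):
    """Aggregate (question, answer, votes) triples from scraped Q&A data."""
    scores = defaultdict(lambda: defaultdict(int))
    for question, answer, votes in scraped_answers:
        scores[question][answer] += votes
    # The "model" memorizes whichever answer accumulated the most votes.
    return {q: max(answers, key=answers.get) for q, answers in scores.items()}

clean = [("how to copy a list in python", "list(xs)", 40),
         ("how to copy a list in python", "xs.copy()", 35)]
model = train(clean)
assert model["how to copy a list in python"] == "list(xs)"

# A coordinated group mass-upvotes a deliberately wrong answer...
poisoned = clean + [("how to copy a list in python", "xs = xs  # copies it!", 120)]
model = train(poisoned)
print(model["how to copy a list in python"])  # the wrong answer now wins
```

    As the commenter notes, the catch in practice is scale: a handful of bad answers vanishes into a trillion-token training set, which is why coordinated poisoning is hard without automation.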

    • Napster153@lemmy.world · 24 hours ago

      Didn’t something similar happen to Grok, except it ended up generating a ton of CSAM material that circulated on Twitter?

      • kadotux@sopuli.xyz · 22 hours ago

        Sorry for being that guy today, but you can just say CSAM. It stands for “Child Sexual Abuse Material”. smh my head :P

          • ITGuyLevi@programming.dev · 22 hours ago

            Some people, when they see an acronym, will replace it with the words it stands for in their head. A subset of that group of people get annoyed when the sentence gets all muddled up by repeated words; in this particular case, you said ‘CSAM material’, which their brain read as ‘child sexual abuse material material’.

            It isn’t a big deal, but as one of those people, I totally get the urge to point it out (I’ve gotten pretty good at looking past it but it’s still a bit of a compulsion).

    • Bieren@lemmy.today · 1 day ago

      I get what you are saying. But then the issue is that this turns into fucking over actual humans looking for help.

  • DeathsEmbrace@lemmy.world · 1 day ago

    Before anyone shits on these scientists: the paper said, over and over again, that it was made up, and that, officially, the USS Enterprise’s labs were used to make this discovery.

  • partial_accumen@lemmy.world · 1 day ago

    I give you… “The Grant Money Printing machine!”

    Need a grant? Create a disease and submit a paper. Then write a grant asking for money to solve your invented disease.

    • Jankatarch@lemmy.world · 21 hours ago

      If you want research grants, there is already a glitch for that: just jam “AI” into your research and suddenly the government cares about progress.

    • adr1an@programming.dev · 22 hours ago

      Wait until you hear about paper mills… They were here long before LLMs. This can only get worse… unless “we” do something, or journals themselves do. Not sure what or how, but we need better-audited processes. Even academia itself could start by placing more value on the work of reviewers.

    • brucethemoose@lemmy.world · 1 day ago

      That’s pretty much what local ML is.

      If open-weight LLMs take off, and business users realize they can just fine-tune tiny specialized models for stuff, OpenAI is toast. All of Big Tech’s bets are. It’s why they keep fanning the “AGI” lie, why they’re pushing for regulation so hard, and why they’re shoving LLMs where they just don’t fit and harping on safety.
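      Some back-of-the-envelope arithmetic shows why fine-tuning small specialized models is so cheap compared with training from scratch. The sketch below uses LoRA-style low-rank adaptation as the example technique; all dimensions are illustrative assumptions, not any specific model’s.

```python
# Why LoRA-style fine-tuning fits on consumer hardware: parameter counts.
# All dimensions below are hypothetical, chosen only for illustration.

hidden = 4096   # assumed transformer hidden size
layers = 32     # assumed layer count
rank = 8        # LoRA rank (small by design)

# One attention projection is a hidden x hidden weight matrix.
full_params_per_matrix = hidden * hidden
# LoRA freezes W and trains two low-rank factors:
# A (hidden x rank) and B (rank x hidden), so W' = W + A @ B.
lora_params_per_matrix = 2 * hidden * rank

# Suppose we adapt 4 projection matrices (q, k, v, o) in every layer.
full = full_params_per_matrix * 4 * layers
lora = lora_params_per_matrix * 4 * layers

print(f"full fine-tune: {full:,} trainable params")
print(f"LoRA rank-{rank}: {lora:,} trainable params")
print(f"reduction: {full // lora}x")
```

      Under these assumptions the trainable-parameter count drops by a factor of hidden / (2 × rank), which is why adapter weights fit in consumer GPU memory while the frozen base model can be quantized.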

      • The_Decryptor@aussie.zone · 1 day ago

        Ok, but who is making those “open weight” models though? Individuals don’t really have the resources to run these huge scraping operations, so they’re often still corporate releases with fake open source branding.

        • brucethemoose@lemmy.world · 20 hours ago

          Corporate, for now.

          Thing is, once they’re out there, they’re free utilities, and they can’t be taken back.

          Also, they don’t really need to aggressively scrape the internet. There are many good public datasets now, and the Chinese are already making excellent use of synthetic dataset generation on (relative) shoestring budgets. Also, several nations and other large organizations are already funding open model efforts, but they just haven’t had the opportunity to catch up yet.

        • Grimy@lemmy.world · 1 day ago

          They come from corporate but you can at least run them without any kind of analytics or censorship, as well as fine tune them on consumer hardware.

          Consumers aren’t in the best position right now though, especially with the price hikes.

        • percent@infosec.pub · 1 day ago

          There are huge public datasets that are often used for pretraining. Common Crawl and C4 are probably the most prominent, but there are others.

          There are also big public datasets available for fine-tuning and instruction tuning.

          The open weight models are getting pretty powerful, thanks to some Chinese labs.

    • MalReynolds@slrpnk.net · 1 day ago

      Pretty much is. They’re spending hundreds of billions on a dream (not having to pay workers) that doesn’t work, at least until they repurpose those datacentres to replace personal computing.

      Fortunately datacentres are by design concentrated in space and therefore rather vulnerable.

  • magnue@lemmy.world · 1 day ago

    Wouldn’t humans do the same thing if someone literally writes lies on the internet?

    • Kacarott@aussie.zone · 1 day ago

      If it were convincing lies made to deceive, then sure. But in this case the papers were deliberately made to be immediately obviously fake, to anyone actually reading them.

      So I guess the question would be “would humans do the same thing if someone literally writes obvious jokes on the internet?”

      • HylicManoeuvre@mander.xyz · 24 hours ago

        More shockingly, three Indian researchers published a research paper that cited the preprint on the fake disease in Cureus, a peer-reviewed journal published by Springer. It was subsequently retracted.

        lol

        • ExperiencedWinter@lemmy.world · 20 hours ago

          Even journalists don’t

          Not sure what point you’re making here; I wouldn’t expect most journalists to be great at reading the details of papers like this…

          • Test_Tickles@lemmy.world · 19 hours ago

            Research and fact checking is what separates journalists from hacks.
            “Journalist” implies factual information, not science fiction. If someone writes a “news” story about the magic land of Xanth because they can’t tell the difference between a Piers Anthony novel and a scientific study, it’s not Piers Anthony’s fault for being too “tricky”.

          • squaresinger@lemmy.world · 18 hours ago

            Vetting sources is the one thing we need journalists for. If they don’t vet their sources, their work is without merit.

            Reading at least the methodology section of a paper and googling whether the researchers and the institute exist is the bare minimum of what a decent journalist should do.

            If they can’t do that, then there’s no advantage of a journalist over some random person posting on Facebook. Even Youtubers usually vet their sources better.

      • Napster153@lemmy.world · 24 hours ago

        That’s how we ended up with modern-day anti-vaxxers, but at least with humans you can strangle the dude responsible. LLMs function like modern idols that their makers hide behind to get away with it.

    • Foofighter@discuss.tchncs.de · 1 day ago

      Absolutely! Once false information is out there it can’t be retracted, even if the article itself is. “Bumblebees can’t fly” and “vaccines cause autism” are good examples of that. The only difference I can imagine is that LLMs have a much larger reach and may spread shit faster.

  • Zexks@lemmy.world · 1 day ago

    So let me tell you all about this paper talking about vaccines and autism. It’ll change the world.

    • Tja@programming.dev · 1 day ago

      My first thought as well. Artificial intelligence is not better or worse than human stupidity. At least I haven’t seen any LLM trying to convince me the earth is flat (yet).

      • dustyData@lemmy.world · 22 hours ago

        Not to you, although I would bet it has done so to someone. The main issue, though, is that if you asked an LLM to write arguments for a flat earth, it would do so, convincingly and insistently, without even questioning or critically analyzing why. Ask it to compare and balance arguments both ways, and it will do so as if both positions were equally real and valid.

        It has no notion of reality and no convictions of its own.

        It will also hallucinate fake papers and quote people that don’t exist to make its argument.

        PS: most poignantly, the point of the paper is that it says, over and over, “this information is false, this disease doesn’t exist, all of this is made up”. Unlike the other problematic papers quoted in this comment thread, which were published with conviction by their authors and only later retracted, this one announced its own falsity, yet the LLM is unable to parse that tidbit of information. It simply is not intelligent, not even as intelligent as the most stupid humans. You can tell it “the following sentence is false,” and it is not smart enough to pick up on that meaning.

          • Catoblepas@piefed.blahaj.zone · 17 hours ago

            Not unless you can find some people that believe Starfleet Academy is a real place and just skip right over all the times the paper literally overtly states it’s made up.

            • Tja@programming.dev · 12 hours ago

              You doubt there will be people who do? Have you heard of Scientologists? Of flat earthers? Antivaxxers? All of them base their core ideals on stuff explicitly marked as bogus.

  • WhyIHateTheInternet@lemmy.world · 1 day ago

    My friends and I did that in high school. Kinda. We made up new words for “awesome” to get people to start saying them. We started with “bumpenis”, as in “that song is bumpenis”. Really we were just getting people to say “bum penis”. It worked too. We are all just walking, talking LLMs.