• MonkderVierte@lemmy.zip · 76 points · 20 days ago

    To be fair, MS says you shouldn’t use it for calculations.

    “Why is it there then?” No clue.

    • Blackmist@feddit.uk · 26 points · 20 days ago

      The example I saw them use was turning one-line text reviews into a simple positive or negative so you can count them.

      So it could be useful for things like that, even if we ignore the obvious objection: why not just ask for the star rating that probably went along with that review…

      MS is now an AI company that sells to excited bosses who would love to fire somebody somewhere to save a few bucks.

      • turdcollector69@lemmy.world · 11 points · 20 days ago

        “… to save a few bucks.”

        In the short term, people who rushed into AI are finding out that a 1-in-100 error rate is absurdly high when literally every action is done through an LLM: at 99% per-step reliability, a chain of about 70 actions succeeds only half the time.

      • T156@lemmy.world · 10 points · 20 days ago

        At the same time, that sounds like something you’d just use old-fashioned sentiment analysis for.

        It’s less accurate, but also far less demanding, and doesn’t risk hallucinating.
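        The kind of old-fashioned sentiment analysis mentioned above can be sketched as a simple lexicon-based classifier. This is an illustrative toy, not a production tool: the `POSITIVE` and `NEGATIVE` word lists are tiny placeholder lexicons I made up, where real libraries ship far larger, weighted ones.

        ```python
        import re
        from collections import Counter

        # Tiny placeholder lexicons for illustration only.
        POSITIVE = {"great", "good", "love", "excellent", "fast", "helpful"}
        NEGATIVE = {"bad", "terrible", "hate", "slow", "broken", "awful"}

        def classify(review: str) -> str:
            """Label a review by counting lexicon hits; never invents labels
            outside {positive, negative, neutral}."""
            words = re.findall(r"[a-z']+", review.lower())
            score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
            if score > 0:
                return "positive"
            if score < 0:
                return "negative"
            return "neutral"

        reviews = [
            "Love it, fast and helpful",
            "Terrible update, everything is broken",
            "It opens",
        ]
        counts = Counter(classify(r) for r in reviews)
        print(counts)  # Counter({'positive': 1, 'negative': 1, 'neutral': 1})
        ```

        The trade-off the comment describes is visible here: the classifier will mislabel sarcasm or negation ("not great" counts as positive), but its output is always one of three fixed labels, so it can't fabricate anything.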

        • The Ramen Dutchman@ttrpg.network · 1 point · 11 days ago

          It’s less accurate

          and doesn’t risk hallucinating

          I might be mistaken, but don’t these two lines mean the exact opposite in this context?

          Is AI more often right, or more often wrong?

          • T156@lemmy.world · 1 point · 11 days ago

            Both, because the ways they’re right and wrong are different.

            Sentiment analysis might misclassify some of the data, but it doesn’t risk making things up wholesale the way an LLM would.

      • ChickenLadyLovesLife@lemmy.world · 4 points · 19 days ago

        I used to work for Comcast as a mobile app developer. We got countless reviews along the lines of “I gave this app one star because you can’t give an app zero stars.” Honestly depressing, even though I wasn’t personally responsible for the apps or the company.

    • 87Six@lemmy.zip · 3 points · 19 days ago

      That is only there to cover their asses, not to actually be informative.