AI and legal experts told the FT this “memorization” ability could have serious ramifications for AI groups’ battles against dozens of copyright lawsuits around the world, as it undermines their core defense that LLMs “learn” from copyrighted works but do not store copies.

Sam Altman would like to remind you that each Old Lady at a Library consumes 284 cubic feet of oxygen a day from the air.

Also, hey, at least they made sure to probably destroy the physical copy they ripped into their hopelessly fragmented CorpoNapster fever dream; the law is the law.

  • Amberskin · 8 days ago

    Of course they can! It’s how LLMs work! They generate a string of tokens that minimizes deviation from their statistically trained parameters. The more parameters, the closer the output can get to the training material.

    It is not a surprise. The AI scam companies are ‘improving’ their models through brute force, adding more and more parameters.
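The memorization point above can be shown with a deliberately tiny sketch. This is not any real LLM: the “parameters” here are just bigram counts over a toy training string, and decoding is greedy. The idea it illustrates is that when a model’s capacity is large relative to its training data, the most-likely-next-token path can simply replay the training material verbatim.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): the "parameters" are just bigram
# counts learned from the training text.
training_text = "to be or not to be that is the question".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def generate(start, length):
    """Greedy decoding: always emit the most likely next token."""
    out = [start]
    for _ in range(length - 1):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("to", 6))  # replays the opening of the training text
```

With such an overparameterized-relative-to-data “model,” the output is just the training data again, which is the commenter’s point about more parameters pushing output closer to the source.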

    • KittyCat@lemmy.world · 8 days ago

      Due to this, I wonder if the real long-term value of this tech will be as an extremely lossy compression algorithm.
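The “model as compressor” intuition can be sketched with the same toy bigram predictor. This is a hypothetical scheme, not anyone’s actual system: sender and receiver share a trained predictor; the sender transmits only the seed token plus the positions where the predictor guesses wrong. Transmitting every correction is lossless; capping the number of corrections (the `max_corrections` knob below, an invented parameter) makes it lossy, trading fidelity for size.

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Toy 'model': bigram counts over the shared training text."""
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, prev):
    candidates = model.get(prev)
    return candidates.most_common(1)[0][0] if candidates else None

def compress(model, tokens, max_corrections=None):
    """Store the seed token plus every misprediction (up to a budget)."""
    corrections = [(0, tokens[0])]
    for i in range(1, len(tokens)):
        if predict(model, tokens[i - 1]) != tokens[i]:
            if max_corrections is None or len(corrections) < max_corrections + 1:
                corrections.append((i, tokens[i]))
    return len(tokens), corrections

def decompress(model, length, corrections):
    """Replay the predictor, patching in the stored corrections."""
    fixes = dict(corrections)
    out = []
    for i in range(length):
        out.append(fixes.get(i) or predict(model, out[-1]))
    return out

text = "the cat sat on the mat and the cat ran".split()
model = train_bigrams(text)

n, corr = compress(model, text)           # all corrections: lossless
n2, corr2 = compress(model, text, max_corrections=1)  # capped: lossy
```

With the full correction list, `decompress` reconstructs the text exactly from just 3 stored tokens out of 10; with the budget capped, reconstruction diverges wherever the predictor’s best guess wins out, which is the lossy behavior the comment is gesturing at.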