California legislators have begun debating a bill (A.B. 412) that would require AI developers to track and disclose every registered copyrighted work used in AI training. At first glance, this might sound like a reasonable step toward transparency. But it’s an impossible standard that could crush...
It’s saying that copyright law doesn’t apply to AI training, because none of the data is actually copied into the model. It’s more akin to a person reading an impossible amount at an impossible speed, then using what they read as inspiration for their own writing. Sure, you could ask an LLM trained on, say, Edgar Allan Poe’s works to recite the entirety of The Raven, but it can only “recall” the way a human does, and its recitation will have just as many mistakes as a human’s (probably more, really).