- cross-posted to:
- technik@feddit.org
Then let it be over.
Good. I hope this is what happens.
- LLM algorithms can be maintained and sold to corpos to scrape their own data so they can use them for in house tools, or re-sell them to their own clients.
- Open source LLMs can be made available for end users to do the same with their own data, or scrape what's available in the public domain for whatever they want, so long as they don't re-sell.
- Altman can go fuck himself
But if you stop me from criming, how will I get better at crime!?!
No, amigo, it's not fair if you're profiting from it in the long run.
This is basically a veiled admission that OpenAI are falling behind in the very arms race they started. Good, fuck Altman. We need less ultra-corpo tech bro bullshit in prevailing technology.
To be fair, copyright is a disease. But then so are billionaires, capitalism, business, etc.
I mean, if there’s a war, and you shoot somebody, does that make you bad?
Yes and no.
Good.
These fuckers are the first ones to send tons of lawyers whenever you republish or use any of their IP. Fuck these idiots.
I have conflicting feelings about this whole thing. If you are selling the result of training like OpenAI does (and every other company), then I feel like it’s absolutely and clearly not fair use. It’s just theft with extra steps.
On the other hand, what about open source projects and individuals who aren’t selling or competing with the owners of the training material? I feel like that would be fair use.
What keeps me up at night is if training is never fair use, then the natural result is that AI becomes monopolized by big companies with deep pockets who can pay for an infinite amount of random content licensing, and then we are all forever at their mercy for this entire branch of technology.
The practical, socioeconomic, and ethical considerations are really complex, but all I ever see discussed are these hard-line binary stances that would only have awful corporate-empowering consequences, either because they can steal content freely or because they are the only ones that will have the resources to control the technology.
Japan already passed a law that explicitly allows training on copyrighted material. And many other countries just wouldn’t care. So if it becomes a real problem the companies will just move.
I think they need to figure out a middle ground where we can extract value from the for profit AI companies but not actually restrict the competition.
I think the answer is there: just do what DeepSeek did.
Fuck these psychos. They should pay for the copyrighted work they stole with the billions they already made. Governments should protect people, MDF
TLDR: “we should be able to steal other people’s work, or we’ll go crying to daddy Trump. But DeepSeek shouldn’t be able to steal from the stuff we stole, because China and open source”
Good. If I ever published anything, I would absolutely not want it to be pirated by AI so some asshole can plagiarize it later down the line and not even cite their sources.
At the end of the day, the fact that OpenAI lost their collective shit when a Chinese company used their data and model to make their own, more efficient model is all the proof I need that they don't care about being fair or equitable. They get mad at people doing the exact thing they did, and would aggressively oppose others using their own work to advance theirs.
They’re all motivated by greed.