- cross-posted to:
- aboringdystopia@lemmy.world
- microblogmemes@lemmy.world
The absolute worst part is they could have actually done some really useful things and didn’t even bother. Received an email from a new contact? Click here for a detailed contact card with all their info, instead of just a blank card with an email address.
Didn’t even try to make it useful.
We didn’t want it
They stole our content to make it
It makes shitty output
Corporate leadership uses it to take our jobs away
Grifters use it to manipulate people
Lazy people use it to fill our channels with garbage
Tech companies won’t let us turn it off
It takes our natural resources
It drives up our expenses
ELON MUSK LETS PEOPLE USE IT TO MAKE CHILD PORN
And it’s our fault that we don’t accept it? It’s our responsibility to justify its existence? Fuck you Nadella.
Did they ever have “social permission”? Seems like they just went ahead and did it and most people just have other things to worry about.
Article:
Some fun bits:
Nadella says that AI companies and policy makers must build out “a ubiquitous grid of energy and tokens,” which is the task currently making it impossible to buy a stick of RAM at a reasonable price
Fuck AI
“The demand side of this is a little bit like, every firm has to start by using it,” said Nadella
The beatings will continue until morale improves!
“People need to say, 'Oh, I pick up this AI skill, and now I’m a better provider of some product or service in the real economy,'” said Nadella.
Yes, because your average worker is definitely concerned with… checks notes… providing products and services ~~making rich twats even richer~~ and not with trying to fucking survive and buy food
He did at least provide one real example of what he means by all this: "When a doctor can … spend more time with the patient, because the AI is doing the transcription and entering the records in the EMR system, entering the right billing code."
Yes, because I trust the privacy-nightmare plagiarism machine with my personal health records, and definitely trust the hallucinating sand to ‘enter the correct billing code’. The article author, Tyler Wilde, also points out that in this scenario it’s probably also going to be used to try and classify shit as more expensive shit.
To me the scariest use case is using AI to gain the trust of an individual, and then abusing that trust to influence political or financial decisions.
People are already, literally, going insane talking to ChatGPT. Wait until they figure out how to train the Fox News of LLMs.
💀

