This is satire, right?
Stop searching what I search on google
There’s no way this woman is real
LLM “creators” just really make me laugh. They aren’t “creators”. They’re just less competent than usual art directors. Because that’s what an LLM “creator” really is: an art director. A person telling someone/something else to make something to a specification.
Is one art director taking the statement “yes, just like that, but with bigger tits” and using it when instructing his own artists plagiarizing, in any sense that leaves the word “plagiarism” with meaning? If yes, well, then “plagiarism” has joined “fascist”, “commie”, and every other political epithet in meaning absolutely nothing. It has become literally as useless a word as “literally”. If no, well, then prompt “theft” isn’t a thing.
Exactly. At best you’re commissioning work from a machine. You didn’t provide much creativity: at best a direction and some constraints.
In the art world it’s been settled ages ago that the underlying concept isn’t protected, and few if any prompts go beyond just describing a vague concept.
This has got to be satire. Please tell me it is… Pleaaaase.
This reads like NFT bros.
My thoughts exactly, but the past few years have really lowered my expectations of other humans.
Came to post exactly this.
Gen AI is stealing other people’s work, you fucking dolt. Piss on this guy.
“Amira” is a pretty female sounding name
Does not give you the right to assume their gender.
You’re barking up the wrong tree.
piss on this guy
was the assumption that I questioned with my comment. I didn’t assume anything myself.
Didn’t the first commenter assume it, and in a far more statistically wrong way?
Is this a thread of bots or something? There’s a photo of her in the post.
Yes. All the bots become pretty obvious when you start paying attention.
Me choosing to mention the name and not the photo doesn’t make me a clanker.
That being said, she uses the same photo on her five-picture Instagram account, which also shows an article she wrote that contains the same photo again!
So it’s very likely that she isn’t a person, but some GenAI-driven sock puppet account.
Word to the wise: just learn improv.
Hey ChatGPT, rewrite this prompt differently
People thinking they’re AI experts because of prompts is like claiming to be an aircraft engineer because you booked a ticket.
I have had in-person conversations with multiple people who swear they have fixed the AI hallucination problem the same way: “I always include the words ‘make sure all of the response is correct and factual without hallucinating’.”
These people think they are geniuses thanks to just telling the AI not to mess up.
Thanks to being in person, with rather significant running context, I know they are being dead serious, and no one will dissuade them from thinking their “one weird trick” works.
All the funnier when, inevitably, they get a screwed-up response one day and feel all betrayed, because they explicitly told it not to screw up…
But yes, people take “prompt engineering” very seriously. I have seen people proudly display massively verbose prompts that often looked like more work than just doing the thing themselves without the LLM. They really think it’s a very sophisticated and hard-to-acquire skill…
I didn’t think prompt engineering was a skill until I read some of the absolute garbage some of my ostensibly degree-qualified colleagues were writing.
“Do not hallucinate”, lol… The best way to get a model not to hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
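To illustrate the difference (a minimal sketch; the facts and the wrapper function are made up for the example, not from any real product): instead of appending a “be factual” plea, you paste the source material into the context and tell the model to answer only from it.

```python
# Hypothetical example facts -- in practice this would be real docs,
# a wiki page, a spec, etc. that you already have on hand.
FACTS = """\
Release 2.4.1 shipped on 2024-03-12.
The default request timeout is 30 seconds.
"""

def grounded_prompt(question: str, facts: str = FACTS) -> str:
    # The model can only paraphrase what's in its context window, so
    # supplying the source text constrains it far better than the
    # instruction "do not hallucinate" ever could.
    return (
        "Answer using ONLY the reference text below. "
        "If the answer is not in it, say you don't know.\n\n"
        f"Reference:\n{facts}\nQuestion: {question}"
    )

print(grounded_prompt("What is the default request timeout?"))
```

The catch, as noted above: you have to already know (or have) the correct data, at which point the model is summarizing rather than recalling.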
“ChatGPT, please do not lie to me.”
“I’m sorry Dave, I’m afraid I can’t do that.”
That’s incorrect, because in order to lie, one must know that one isn’t telling the truth.
LLMs don’t lie, they bullshit.
It’s incredible how many LLM users still don’t know that it merely predicts the next most probable words. It doesn’t know anything. It doesn’t know that it’s hallucinating, or even what it is saying at all.
One thing that is enlightening is why the seahorse LLM confusion happens.
The model has one thing to predict: can it produce a specified emoji, yes or no? Well, some reddit threads swore there was a seahorse emoji (among others), so it decided “yes”, and then easily predicted the next words to be “here it is:”. At that point, and not an instant before, it actually tries to generate the indicated emoji, and here, and only here, it fails to find anything of sufficient confidence. But the preceding words demand an emoji, so it generates the wrong one. Then, knowing the previous token wasn’t a match, it generates a sequence of words to try again and again…
It has no idea what it is building towards; it builds results one token at a time. It’s wild how well that works, but it frequently lands in territory where previously generated tokens have backed it into a corner, and the best fit for subsequent tokens is garbage.
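The mechanism can be sketched with a toy (emphatically not a real LLM, just a hand-made lookup table playing the role of the probability model): each step sees only the prefix so far, and once a confident early token like “Yes” is emitted, the continuation “here it is:” is locked in, regardless of whether the emoji actually exists.

```python
# Toy "next most probable token" table, keyed by a suffix of the
# context. A real model scores all tokens; this just hard-codes the
# single most likely continuation for each suffix it knows.
TABLE = {
    ("is", "there", "a", "seahorse", "emoji", "?"): "Yes",
    ("Yes",): ",",
    (",",): "here",
    ("here",): "it",
    ("it",): "is:",
}

def next_token(prefix):
    # Greedy decoding: match the longest known suffix of the context.
    for k in range(len(prefix), 0, -1):
        if tuple(prefix[-k:]) in TABLE:
            return TABLE[tuple(prefix[-k:])]
    return "<eos>"

def generate(prompt, max_tokens=5):
    toks = list(prompt)
    for _ in range(max_tokens):
        t = next_token(toks)
        if t == "<eos>":
            break
        toks.append(t)  # each new token conditions all later ones
    return toks

print(generate(["is", "there", "a", "seahorse", "emoji", "?"]))
# Note: by the time "is:" is produced, the model is committed to
# showing an emoji it never actually checked for.
```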
Confabulating, even!
Have you tried to not be depressed?
Reminds me of the very early days of the web, when you had people with the title “webmaster”. When you looked deeper into the supposed skillset, it was people who knew a bare minimum of HTML and how to manage a tree of files.
I’ll never forget being at an ATM and overhearing a conversation between two women in their 30s behind me. One tells the other: “I’ve been thinking about what I want to do, and I think I want to be a webmaster.” It just sounded like a very casual choice, one about making money and not much deeper than that.
This was in 1999 or so. I thought - man, this industry is so fucked right now - we have hiring managers, recruiters, etc…that have almost no idea of the difference in skillsets between what I do (programming, architecture, networking, database, and then trying to QA all of that and keep it running in production, etc.) and people calling themselves “webmasters”.
Sure enough, not long after, the dotcom bubble popped. It was painful for everyone without question, whether you had skills or not (even people who had kept their distance from the dotcom thing to an extent). But I don’t think roles like “webmaster” did very well…
Easy solution here: just have AI write your prompts for you!
Ragebait. For my sanity it must be.
Based on my own in-person experience with some LLM fanatics, I think this is quite probable. I’ve heard very sincere feedback from people who think they are amazing because they have “advanced prompt engineering” skills. They think “prompt engineer” will be a very selective job in and of itself, and that they have an edge. They think they will be able to work in any field, because the LLM will take care of the domain-specific stuff and their “rare mastery” of prompts will be the hot skill.
I hate the title creep of adding engineering to fucking every title [*] - and it’s not all that new, but “prompt engineering” is really far up there in the hubris of calling that “engineering”. There might not be anything overseeing the other title inflation I mention below - no real certification process or governance at all, basically - but at least in most cases, these people had to really work at what they do and learn quite a lot. I bet most people can call themselves a “prompt engineer” after sitting through a few videos on Youtube or Udemy, LOL.
[*] No one is a tester any more, oh, no, they work in “quality engineering”. Not even the title QA is grandiose enough. Same for programming - people aren’t just coders or programmers, oh no, they are software ENGINEERS. Same for working in operations or sysadmin, no one has that title, it’s site reliability ENGINEERING.
I assure you that REAL engineers (the ones who actually have the degrees, took exams like the EIT, and then worked for years under a licensed engineer to get their PE) get a bit salty about all this title inflation. They did all that work and are suddenly neck-deep in “engineers” who are anything but. I get why they get annoyed, believe me. Someone teaches themselves the latest Javascript framework and a few weeks later is calling themselves a software engineer, LOL.
this has to be satire xD
…right? right?? 😭
Everything lately seems like satire, but sadly it’s the world we live in.
it’s Twitter, which encourages engagement, so it can be assumed to be rage bait
“Stop using everyone’s words in the order everyone uses them; they are my words, and they are my order”.
It’s worse than your typical creative claim to copyright over something like a poem, because prompts are by definition more functional than creative, and typically contain too few purely expressive elements to meet the threshold of originality. They’ve managed to put prompts in a worse position than boilerplate code in terms of protection, lol.
What’s next? Getting mad at the grocery store because other people are buying the same things you do?