

I think this is more about plausible deniability: if people report getting wrong answers from a chatbot, that is surely only because of their insufficient “prompting skills”.
Oddly enough, the laziest and most gullible chatbot users tend to report the fewest hallucinations. There seems to be a correlation between laziness, gullibility, and “great prompting skills”.
To put it more bluntly: yes, I believe this is mainly an excuse used by AI boosters to distract from the poor quality of their product. At the same time, as you mentioned, there are people who genuinely consider themselves “prompting wizards”, usually because they are too lazy or too gullible to question the chatbot’s output.