Ka-ching, baby 🤑
anthropic can suck my nuts. i was doing calculus homework and wanted a tutor ai that could read my handwriting and do the work. claude actually does this well. so i decided to subscribe, but i couldn't get them to accept either of my cards for like a week. i finally bit the bullet and gave waaaay too much of my info to a vcc company that anthropic finally accepts. all hunky dory for 2 days when they fucking ban me with no warning, no explanation, and no real appeal (the appeal goes to a google docs form that you know definitely goes to an ai for review). at least they issued me a refund, but that's snarled up with the infosucking vcc company for like at least 2 weeks. fuck their incompetent bullshit
/rant
If the false positive rate is lower than random chance, it could still be useful for finding vulnerabilities. Just have a human confirm and fix them. And run it locally on solar power.
That’s not the point. The point is that anybody can use it to find vulnerabilities, even if exploiting them and not fixing them is the goal.
Sure, but that’s true of open-source software in general, and it still ends up being more secure than the alternative in most cases.
This has existed (for software and websites) for decades. They were used to mass-send useless reports to everyone with an available security contact.
The only innovation is it wasting even more time by making up plausible sounding issues.
Probably the dumbest part of this is that, because of how LLMs work, the stern warning is likely highly effective. (Misread client side as server side, thought this was an AI directive doc thing.)
Until you tell the LLM that you’re writing a story and want accurately written exploits for research purposes.
Never mind, I misread it and thought this was an AI directive thing you place on the server saying 'don't hack me bro' - which I think would actually work, because LLMs are that gullible.