• 0 Posts
  • 180 Comments
Joined 3 years ago
Cake day: July 7th, 2023



  • For those who have been extra pedantic and focused more on the semantics of the arguments (i.e., you)

    I’ve had 2 comments not including this one, neither of which discussed semantics. You never responded to my other comment.

    Overall though, I have not called anyone derogatory names (unlike others in this thread)

    While yes, that would indicate bad faith, I never said you did that. I can’t speak for others, but they shouldn’t do that either.

    I have not dismissed someone’s ideas out of hand without providing sources or examples, and I feel I have engaged in a respectful and calm manner.

    It’s less that you are dismissing things or being disrespectful. It’s that there is a very obvious pattern in your engagement where you aren’t engaging at all with certain points. Your positive bias toward LLMs shows. Whether that is genuine bias or just stark contrast within a very polarizing thread is tricky to parse, but it definitely comes off as though you are invested in LLMs and unwilling to acknowledge the downsides in a meaningful way, e.g. your outright dismissal of the ethics concerns because they don’t offend you personally and you find those who complain to be hypocrites.

    Sorry if my approach has not been what you would have preferred, but to be honest, given that you have not actually contributed to the discussion meaningfully, I frankly don’t give a shit.

    Again, do you think you’re responding to someone else? I rattled off a pile of common complaints to which you never responded. At no point did I accuse you of anything or even remark upon your character directly other than observing the stated pattern of avoidance and deeming it disingenuous. One inference could possibly be made with my rebuttal of your ethics argument, but it’s kind of a stretch.




  • Well, you did dismiss the ethical problems outright. Just because you do open source doesn’t mean everyone else does. And open source usually credits its contributors, unlike LLMs. You can’t consent on other people’s behalf.

    It’s hideously destructive. Wastes electricity, wastes water, plays merry hell with anywhere the damned data centers pop up.

    It’s unregulated and has already killed people. Multiple stories have come out where an LLM encouraged suicide. Plus various dangerous outputs, like the bleach-as-a-cake-ingredient thing. Because…

    It isn’t intelligent, it’s just a parrot. I’ll start paying attention when it can reliably count the letters in a word. Would you trust a random parrot telling you about something you know nothing about?
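For context, the letter-counting jab refers to the well-known failure where chatbots miscount letters (the “r”s in “strawberry” being the canonical example), generally attributed to models seeing tokens rather than individual characters. The check itself is trivial for ordinary code; a minimal sketch (function name is illustrative, not from any library):

```python
# Counting a letter in a word is a one-liner for regular code,
# yet a common stumbling block for LLMs, which process tokens
# rather than individual characters.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # → 3
```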

    It doesn’t do a quarter of what’s claimed. Translation should be its bread and butter, and it can’t really manage even that. There’s a reason the tech bros who hyped crypto are hyping this: they don’t actually know what it can or can’t do.

    It’s approaching maximum efficacy for current techniques. More data is better in machine learning, but there is a limit, and it’s far closer than the scammers want to admit.

    It’s destroying jobs before it can actually handle them. I’ve tried to use it, and I spent as much time, if not more, fixing its output as I would have spent doing the work myself. It gets to do my boilerplate sometimes now.

    It’s making workers worse. All that time agonizing over a problem was how you learned to do it at all. Now it shits out garbage that the person neither understands nor can fix. Job security for me, I guess.

    It could be a useful technology, but the delusion that it’s capable of becoming AGI distracts from everything it could actually do if big companies tried to use it well instead of chasing the lazy implementations they currently are.

    Edit: I also forgot that it entrenches racism and other bad behavior. If your corpus is full of racist shit, you get a racist robot. And racist assholes make that harder to fix, because they won’t acknowledge that such things are bad and that this badness can be taught to robots.

    Source: Data engineer




  • Value in the abstract sense of “desirable thing,” not necessarily monetary.

    If I’m having a conversation with someone and ask them about a thing, I’d much rather get an “I don’t know” than whatever the plagiarism engine’s facsimile of an opinion is.

    A lot of people have strong opinions about AI, many of them very bad, because what should be a novelty, or maybe part of a more sophisticated system, is instead the half-assed implementation we currently have. At the low, low price of stealing from artists and fucking the environment.