Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

  • Seminar2250@awful.systems · 12 points · 2 days ago (edited)

    something i was thinking about yesterday: so many people i ~~respect~~ used to respect have admitted to using llms as a search engine. even after i explain the seven problems with using a chatbot this way:

    1. wrong tool for the job
    2. bad tool
    3. are you fucking serious?
    4. environmental impact
    5. ethics of how the data was gathered/curated to generate[1] the model
    6. privacy policy of these companies is a nightmare
    7. seriously what is wrong with you

    they continue to do it. the ease of use, together with the valid syntax output by the llm, seems to short-circuit something in the end-user’s brain.

    anyway, in the same way that some vibe-coded bullshit will end up exploding down the line, i wonder whether the use of llms as a search engine is going to have some similar unintended consequences: “oh, yeah, sorry boss, the ai told me that mr. robot was pretty accurate, idk why all of our secrets got leaked. i watched the entire series.”

    additionally, i wonder about the timing. will we see sporadic incidents of shit exploding, or will there be a cascade of chickens coming home to roost?


    1. they call this “training” but i try to avoid anthropomorphising chatbots ↩︎

    • megaman@discuss.tchncs.de · 4 points · 15 hours ago

      At work, i watched my boss google something, see the “ai overview”, say “who knows if this is right”, read it anyway, and then close the tab.

      It made me think that this is how a rumor or something like that gets started. Even in a good case, they read the text with some scepticism, but two days later they’ve forgotten where they heard it, so they say they think whatever it was is right.

    • Amoeba_Girl@awful.systems · 7 points · 1 day ago (edited)

      Sadly web search, and the web in general, have enshittified so much that asking ChatGPT can be a much more reliable and quicker way to find information. I don’t excuse it for anything that you could easily find on wikipedia, but it’s useful for queries such as “what’s the name of that free indie game from the 00s that was just a boss rush no you fucking idiot not any of this shit it was a game maker thing with retro pixel style or whatever ugh” where web search is utterly useless. It’s a frustrating situation, because of course in an ideal world chatbots don’t exist and information on the web is not drowned in a sea of predatory bullshit, reliable web indexes and directories exist and you can easily ask other people on non-predatory platforms. In the meanwhile I don’t want to blame the average (non-tech-evangelist, non-responsibility-having) user for being funnelled into this crap. At worst they’re victims like all of us.

      Oh yeah and the game’s Banana Nababa by the way.

    • o7___o7@awful.systems · 15 points · 2 days ago (edited)

      Yes, i know the kid in the Omelas hole gets tortured each time i use the woe engine to generate an email. Is that bad?

    • jonhendry@awful.systems · 3 points (1 downvote) · 1 day ago

      “they call this ‘training’ but i try to avoid anthropomorphising chatbots”

      You can train animals, you can train a plant, you can train your hair. So it’s not really anthropomorphising.

    • ________@awful.systems · 12 points · 2 days ago

      Is there any search engine that isn’t pushing an “AI mode” of sorts? Some are sneakier about it or give the option to “opt out”, like DuckDuckGo, but this all feels temporary until it’s the only option.

      I’ve found it strange how many people will say “I asked ChatGPT” with the same normalcy that “googling” once had.