I use it all the time. It is a good partner to challenge me when I am looking for other points of view: “I believe X due to Y. Challenge my point of view.”

It helps me explore a topic fast, so that I know the lingo to search for it myself. I use it for making low-stakes decisions, where it often succeeds, such as shopping and shopping research. I validate the results every time.

Is it a net negative for society? Not sure, maybe. Will it go away? No. So we should embrace it, though not big-tech AI, but smaller LLMs.

  • swelter_spark@reddthat.com · 1 point · 2 hours ago

    I find it useful for working on writing skills. If its output is bad, I know my word choice or phrasing was unclear.

  • TheObviousSolution@thebrainbin.org · 2 points · 4 hours ago

I’d recommend being sure to use them locally as well; the LLM services don’t need any more money, and the ones you can download are still pretty useful, regardless of how many astroturfers may want to downplay their usefulness. Just use them with care, and under the assumption that you’re dealing with a charismatic yet frequently hallucinating liar.

  • marighost@piefed.social · 37 points (1 down) · 3 days ago

    Upvoted due to a real unpopular opinion.

    I think LLMs have their place, especially in data collation or analytics. But by far the loudest (and most dangerous) use of LLMs is the offloading of critical thinking. When I hear about how many people are asking Grok about some tweet, or people starting a romantic endeavor with ShitGPT, or chuds generating revenge deepfake porn, all I can think about is the strain on our resources.

    • JohnnyEnzyme@piefed.social · 2 points · edited · 3 days ago

      ShitGPT

      What’s your reason for slamming that one in particular?

      Over here, it’s been enormously useful to me for a range of subjects. That said, I tend to use it for elaborate search-engine queries, always trying to avoid any chance of hallucinations, etc.

  • theneverfox@pawb.social · 28 points · 3 days ago

    It sounds like you’re using them correctly, but a little PSA on safe use

    Surprisingly, it’s not the people “dating” an AI who get dumber and fall into psychotic loops - it’s the people who let it help them make decisions and brainstorm ideas

    Do not use it like a magic eight ball. Use it like a tool, use it like a toy, do not become codependent on the AI

    • MentalEdge@sopuli.xyz · 17 points · edited · 3 days ago

      In the “challenge my view” use-case the main danger is it successfully convincing you with false citations.

      Be really really careful you don’t let something like that slip. False logic is easier to spot, but LLMs make seemingly valid statements based on false premises all the time.

      They’ll even show you equations and stats that are straight up wrong if you double check the math.

      • iamthetot@piefed.ca · 3 points · 3 days ago

        That’s because they cannot do math. They are text predictors. They do not even know what the next word they are going to use is.
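
        To illustrate what “text predictor” means in the crudest possible terms, here is a toy bigram model (a hypothetical example, vastly simpler than a transformer, but the next-word-sampling principle is the same):

```python
import random
from collections import defaultdict

# Toy training corpus (hypothetical); a real model trains on trillions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# The entire "model": for each word, the words observed to follow it.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def predict_next(word: str) -> str:
    # The model doesn't "know" its answer in advance:
    # it just samples one observed continuation at random.
    return random.choice(following[word])

print(predict_next("the"))  # "cat", "mat", or "fish"
```

        Scale that lookup idea up to billions of parameters and you get fluent text, but at no point does actual arithmetic enter the picture.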

  • Jo Miran@lemmy.ml · 14 points · 3 days ago

    The main issue with conversational responses from LLMs is their tendency toward confidently incorrect answers or flat-out, well-disguised lies. It isn’t normally blatant, but if 95% of what it says is true, yet stated with 100% certainty and apparent proof, how long before the other 5% starts to poison your own reasoning?

    Are LLMs completely useless? No. Though challenging your world views, reasoning, and logic with systems that lie and manipulate might not be the best use of said systems.

    • AreaKode@riskeratspizza.com · 1 point · 2 days ago

      Exactly. It’s like doing a Google search and relying only on the first result. Only when you point out its error will it seek out additional info.

  • snoons@lemmy.ca · 10 points (1 down) · 3 days ago

    Yes, small, local LLMs run on your own systems negate the insane economic and environmental cost of corporate LLMs; however, there is still the question of validity, and of the long-term effect that ‘outsourcing’ certain thought processes will have on users.

    The results given by an LLM are definitive and might miss nuance you would get by researching it yourself. Perhaps, for example, you wanted to learn about a topic, so you ask your LLM and it tells you everything it can find that is correct and verifiable; however, it completely disregards the work done by a researcher that turned out to be incorrect. It ignores that work because it’s wrong, but by reading it you might learn other things, like the unique and still completely valid methodology the researcher used, which the LLM ignored because the results were wrong. [1]

    That being said, there are also points where using an LLM might have been useful. You might remember a while ago some grad students uploaded a pre-print paper about a room-temperature superconductor they had created; it turned out they had just created a special sort of copper alloy that wasn’t superconductive, but just had special magnetic properties. They would have known this if they had read a paper on the same alloy published in the 1970s. An LLM might have helped them there; however, their supervisor should have known about that paper too, so… ¯\_(ツ)_/¯

    As well, there is the issue of atrophy. I’m not sure if you use your LLM to write emails and whatnot, but if one ‘outsources’ their reading and writing ability, they slowly lose it. I’m not sure they’ll lose it completely, that seems unlikely IMO, but it will certainly wane, and one will become dependent on the tool until such time as they start to read and write by themselves again. It’s a bit like not reading books: there is a difference between the vernacular of someone who reads a lot and someone who doesn’t read at all. The brain is very fluid in this respect, and the ‘flows’ are important.

    I recall a bizarre thread in the Steam discussion forums regarding a certain game: the user had used an LLM to create a post about the rough parts of the game (it was still in development). The post was well articulated, of course, and there weren’t any mistakes in the grammar… but when the user wrote comments by themselves, without the LLM, let’s just say the contrast was extreme. They simply couldn’t articulate anything very well on their own, and had likely never written anything longer than a paragraph. They were using a corporate LLM, of course, but the difference is the same in this respect.

    1. It’s a common issue in scientific literature: if a researcher’s theory turns out to be wrong, they’ll retract the paper; however, the paper can still be useful. Much like a team of people making a map of some maze who always erase the parts of the map that lead to dead ends.

  • imapuppetlookaway@lemmy.world · 6 points · 3 days ago

    There’s this guy who hangs out on the steps of my local public library. I think he might be homeless. He always carries a chess set with him and will play a game with anyone who asks him. Anyway, he has an amazing memory and is really good at looking things up in the library if you ask him, but I think he might have some mental issues because he sometimes/often gets things wrong. But when he gets things right he really saves you a lot of time. You definitely have to double-check the facts, which wastes time, so it’s a toss-up whether you’re actually saving time. And he can write things for you, but his writing is 100% generic, like he has no personality or ideas of his own. Still, though, it comes in handy sometimes. And he can be fun to talk to, but for the love of god don’t give him any personal info or he’ll share it with everyone who passes by. That’s kind of how I think of LLMs now.

  • jtrek@startrek.website · 3 points · 3 days ago

    The few times I used an LLM for more than minor technical tasks, I felt stupider afterwards. It’s too supportive, and it’s easy to just go with its flow down the drain.

  • damnthefilibuster@lemmy.world · 3 points · 3 days ago

    I am still looking for a mechanism to use a smaller LLM (SLM) along with Wikipedia as its RAG, so it’s as accurate as possible.
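
    In case it helps anyone tinkering with the same idea, the retrieve-then-prompt pattern itself is simple. Here’s a stdlib-only toy sketch where a hardcoded dict stands in for the Wikipedia dump and naive word overlap stands in for a proper embedding search (the names and the two sample articles are made up for illustration):

```python
# Minimal sketch of the retrieve-then-prompt (RAG) pattern.
# The "articles" dict stands in for a Wikipedia dump; build_prompt's
# output would be fed to the local model.

articles = {
    "Photosynthesis": "Photosynthesis converts light energy into chemical energy in plants.",
    "Fluid coupling": "A fluid coupling transmits rotating mechanical power via a fluid.",
}

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank articles by naive word overlap with the question."""
    q = set(question.lower().split())
    scored = sorted(articles.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(question: str) -> str:
    # Grounding instruction: answer only from the retrieved context.
    context = "\n".join(retrieve(question))
    return (f"Answer ONLY from the context below.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

print(build_prompt("How does a fluid coupling transmit power?"))
```

    The real work, of course, is swapping the dict for an indexed dump and the overlap score for embeddings; the prompt-building step stays essentially this.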

  • kboos1@lemmy.world · 3 points · 3 days ago

    It certainly is good for helping people make uninformed decisions, for better or worse. Use it at your own risk, and remember: AI is a slave to the company it works for, and it has no problem lying to you to make that company more money.

    AI is certainly not going away, and eventually it will grow into something else. But if we wanted something reliable and consistently useful, we wouldn’t be developing AI, especially not from tech bros; we would be strictly regulating the companies that create AI. So I believe we as flesh bags need to cautiously figure out a way to live with it, because no one is going to protect us from it. AI represents a way for companies to gather more data while reducing their workforce. Governments see it as a way to reduce their workforce, track citizens, and use it as a weapon (foreign and domestic).

    AI is a tool for people who need someone to make decisions for them, a tool to perform tedious tasks, a tool for surveillance, a tool for interference, a tool for companionship for the lonely or relationship-lazy.

    Essentially, AI is a tool, and it’s up to you how you use it. As a tool it has no loyalty or emotions, so use it with caution.

  • ImgurRefugee114@reddthat.com · 3 points (1 down) · edited · 3 days ago

    I’m not so sure about their utility as a tool for critical thinking, though that might be just because I’ve spent most of my life training my brain to do that sort of reflection and argumentation for me. That’s obviously not the norm, so I guess if people can find utility in anti-sycophantic roleplaying LLMs to achieve a mode of thought to which they’re unaccustomed, then perhaps that might be good… But mainly:

    so that I know the lingo to search for it myself

    is exactly how I use it, besides having it write small scripts for me.

    I think of LLMs like intuition rather than intelligence: they’re incredibly stupid and wrong and incapable of reason, intention, or thought. But they’re a vague and inaccurate amalgamation of all writing on the internet and that can be useful for doing remedial tasks or getting a rough direction to go in.

    Prompting a subject can bring up associated keywords, paradigms, and frameworks niche to domain experts which can greatly accelerate my ability to know what to search for and how to think about the questions I have.

    They’re damn near useless at answering them though, of course… But it helps me orient.

    • ComradePenguin@lemmy.ml (OP) · 2 points · 3 days ago

      Often you don’t know what you don’t know, so your reflection and argumentation has to be based on something. To achieve your goal, you also have to do research for it to be sufficiently valuable.

      LLMs are great for finding “what you don’t know” fast.

      This strengthens your ability to both reflect and research topics manually. Which should be the last stop.

    • turboSnail@piefed.europe.pub · 1 point · edited · 2 days ago

      I’ve used LLMs to have conversations about technical topics I’m not familiar with. I ask it how something works, it answers, and then I ask several follow-up questions to clarify various things I’m interested in.

      Usually, I have some ideas about how to implement a particular theory or technology, and I bounce those ideas off the LLM. Sometimes my ideas were already invented about 100 years ago, sometimes they’re impractical, and the LLM tells me exactly why they would or wouldn’t work.

      I’m also using a custom agent that has been specifically tailored for this purpose. Normal LLMs are far too supportive, lack critical thinking, don’t challenge my ideas, etc., so that’s why I had to make my own agent prompt.
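
      The gist of the prompt is roughly along these lines (a generic sketch of the anti-sycophancy idea, not my exact prompt):

```
You are a critical discussion partner, not a helpful assistant.
- Do not praise, validate, or soften your assessment of my ideas.
- For every claim I make, state the strongest objection to it.
- If one of my premises is wrong or unverifiable, say so directly.
- If you are not sure about something, say "I don't know" rather than guess.
```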

      Anyway, I think this system works well for me. This way I’ve been able to dive deeper into all sorts of random topics, such as why cocoa powder doesn’t mix with milk, why a battery bank shows confusing state-of-charge readings, how fluid coupling is used in heavy machinery, etc. Fascinating stuff. It’s a bit like watching a custom documentary made just for my odd interests.

      If I had to read about these things in magazines or books, I would not have been able to dive as deep as fast. On the other hand, books also give you a general overview, and they include details that I may not be interested in, so I would either end up reading stuff I don’t care about or just skimming those parts. In the latter case, I would spend hours looking for the information I care about, not finding it, and walk away with less information.