• regrub@lemmy.world

    The companies selling the LLMs will probably just put filters on their models' output to suppress any wrongthink.

    Kinda like what you see when you ask the DeepSeek app whether Taiwan is a country or what happened in Tiananmen Square in 1989.

    The best thing people can do for themselves is not to rely on AI models to do their thinking for them.

    • tfmOPMA

      Using LLMs for political questions is dumb in the first place. That said, you can run distilled models like DeepSeek or Mistral locally. That still doesn’t remove the bias baked into their weights. Best is not to use AI for such questions at all.
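
      For anyone curious, here's a minimal sketch of what "running a distilled model locally" can look like, assuming the Ollama daemon and its Python client are installed and a distilled model tag has already been pulled (the model name below is just an example, not a recommendation):

      ```python
      # Minimal local-inference sketch using the Ollama Python client.
      # Assumes `ollama serve` is running and a distilled model has been pulled,
      # e.g. `ollama pull deepseek-r1:7b` (example tag; swap in whatever you run).
      import ollama

      def ask_local(prompt: str, model: str = "deepseek-r1:7b") -> str:
          # The request goes to the local daemon, so no hosted API or
          # vendor-side output filter sits in the loop. Bias in the
          # weights themselves is of course still there.
          response = ollama.chat(
              model=model,
              messages=[{"role": "user", "content": prompt}],
          )
          return response["message"]["content"]

      if __name__ == "__main__":
          print(ask_local("Summarize the trade-offs of running LLMs locally."))
      ```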