- cross-posted to:
- technology@lemmy.world
The companies selling the LLMs will probably just put filters on the output of their models to suppress any wrongthink.
Kinda like what you see when you ask the DeepSeek app about Taiwan being a country or what happened in Tiananmen Square in 1989.
The best thing people can do for themselves is to not rely on AI models to do their thinking for them.
Using LLMs for political questions is dumb in the first place. That said, you can run distilled models like DeepSeek or Mistral locally. That still doesn’t solve bias in their weights. Best is to not use AI for such questions at all.
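For anyone wondering what “running locally” actually looks like, here’s a minimal sketch using llama-cpp-python with a GGUF quantized model (the model filename is a placeholder; any local distill you’ve downloaded works the same way):

```python
# Rough sketch: run a locally downloaded GGUF model with llama-cpp-python.
# The model_path is a placeholder -- point it at whatever distill you have on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf",  # placeholder filename
    n_ctx=4096,  # context window size
)

# Ask something and print the model's reply (OpenAI-style response format).
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What are the trade-offs of running LLMs locally?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Running locally keeps your prompts off someone else’s servers, but as noted above it doesn’t remove whatever bias was baked into the weights during training.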