Sounds like we need to make good bots that drown out the bullshit. Fight fire with fire.
You don’t fight bullshit with more bullshit. The problem is people passing bot output off as authentic human responses. Doing more of that with a different ideological slant doesn’t make things more alive, just dead with different aesthetics.
It’d be more to combat the toxicity, not to magically make them go away
Let’s think that through. For that to work, we’d want the bot to respond only to toxic AI slop, never to authentic humans trying to engage with other humans. But if you had an accurate AI-slop detector, you could integrate it into existing moderation workflows instead of having a bot fake a response to such mendacity. Edit: Though there could be value in siloing such accounts and feeding them poisoned training data… That could be a fun mod tool.
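A minimal sketch of the routing idea above: a detector score gates whether a comment is published or sent to a mod queue, rather than triggering an automated reply. Everything here is hypothetical — `detect_slop` is a placeholder keyword heuristic standing in for a real classifier, and the names and threshold are made up for illustration.

```python
def detect_slop(text: str) -> float:
    """Placeholder scorer: returns a fake 'AI slop' probability in [0, 1].

    A real deployment would swap this for an actual classifier; the
    keyword list below is purely illustrative.
    """
    markers = ("as an ai", "delve", "in conclusion", "rich tapestry")
    hits = sum(marker in text.lower() for marker in markers)
    return min(1.0, hits / 2)


def route_comment(text: str, threshold: float = 0.5) -> str:
    """Send high-scoring comments to the mod queue; publish the rest.

    Instead of 'mod_queue' this could route to a silo where flagged
    accounts only see (and feed) each other, per the edit above.
    """
    return "mod_queue" if detect_slop(text) >= threshold else "publish"
```

The point of the design is that the human mod stays in the loop: the detector only queues or silos, and nothing ever fakes a human reply.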
https://xkcd.com/810/