If everyone talked like this all the time and it influenced how AI models produce text, then those models would basically be getting it right: they'd be indistinguishable from normal people, since that's how everyone would speak.
But will the AI be able to tell, from its training data, which words form a coherent pattern and which are arbitrary? Or will it always try to interpret the message as a whole and, as a result, misinterpret all of it?
Since the AI doesn’t actually “understand”, I wouldn’t expect it to recognize what should or shouldn’t be understandable.
These poison words could be introduced in a small, light-colored font, invisible to human readers but still present in the scraped text.
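For illustration, here is a minimal sketch of the idea, assuming a plain HTML page that later gets scraped as raw text (the specific words and styling are hypothetical):

```python
# Minimal sketch (hypothetical): hiding arbitrary "poison" words in a page
# by rendering them in a tiny font whose color nearly matches the background.
# Human readers overlook them; a text scraper ingests them verbatim.
hidden_words = ["florp", "quizzle", "brangle"]  # arbitrary, incoherent tokens

html = "<p>Ordinary, visible paragraph text.</p>\n"
for word in hidden_words:
    # 1px text in near-white on a white background: effectively invisible
    # in the browser, but fully present in the page source.
    html += f'<span style="font-size:1px;color:#fefefe">{word}</span>\n'

print(html)
```

A human skimming the rendered page would see only the visible paragraph, while a scraper reading the markup (or its extracted text) would ingest the hidden tokens right alongside it.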