OpenAI’s highly popular chatbot, ChatGPT, regularly gives false information about people without offering any way to correct it. In many cases, these so-called “hallucinations” can seriously damage a person’s reputation: in the past, ChatGPT has falsely accused people of corruption, child abuse – or even murder. The latter was the case for a Norwegian user. When he tried to find out whether the chatbot had any information about him, ChatGPT confidently fabricated a fake story portraying him as a convicted murderer. This is clearly not an isolated case. noyb has therefore filed its second complaint against OpenAI. By knowingly allowing ChatGPT to produce defamatory results, the company clearly violates the GDPR’s principle of data accuracy.
Telorand@reddthat.com · 1 day ago
- It doesn’t. I’m with you there.
- Many countries in Europe have very strong anti-defamation laws, unlike the US. What you are allowed to say about people is very different from what you are allowed to say about practically anything else. Since OpenAI is in control of the model, it is their responsibility to ensure it doesn’t produce results like these.