

Edit: My initial reply was of poor quality. I skipped half of your thoughtful comment, AND I misunderstood your meaning as well. I apologize.
I think you are correct about your interpretation of their current policy. However, their old policy would have allowed for checking on users. The old policy is one reason my old company disallowed the use of OpenAI, as corporate secrets could easily be gathered by OpenAI. (The paranoid among us suspected that was the reason for releasing such a buggy AI.)
I agree. I think training a depression detector to flag problematic conversations for human review is a good idea.

Yeah. Seems someone needs to take criticism of $obvious_bad_thing and translate it into other languages and reading levels.