OpenAI, the maker of the most popular AI chatbot, used to say it aimed to build artificial intelligence that “safely benefits humanity, unconstrained by a need to generate financial return,” according to its 2023 mission statement. But the ChatGPT maker no longer seems to place the same emphasis on doing so “safely.”
Can’t sell your AI product to militaries if your tech uses the word “safely.”
They did sell it to the US government, where it was used to kidnap the Venezuelan president in an act of war a couple of months ago.