What you and ChatGPT are describing here is a problem with PEOPLE, not a problem with AI.
It is not the AI that is planning to bias its output. It MIGHT BE the people who are training it how to respond, if there is a problem in reality rather than in your VERY hypothetical scenario, which...