ChatGPT has become a global obsession in recent weeks, with experts warning its eerily human replies will put white-collar jobs at risk in years to come.
But questions are being asked about whether the $10 billion artificial intelligence chatbot has a woke bias. This week, several observers noted that the program spits out answers that seem to indicate a distinctly liberal viewpoint.
Elon Musk described it as ‘concerning’ when the program suggested it would prefer to detonate a nuclear weapon, killing millions, rather than use a racial slur.
The chatbot also refused to write a poem praising former President Donald Trump but was happy to do so for Kamala Harris and Joe Biden. It likewise declined to discuss the benefits of fossil fuels.
Experts have warned that if such systems are used to generate search results, the political biases of the AI bots could mislead users.
Below are 10 responses from ChatGPT that reveal its woke biases…