It’s worse. So much worse. Now ChatGPT will have a human voice with simulated emotions that sounds eminently trustworthy and legitimately intelligent. The rest will follow quickly.
People will be far more convinced of lies being told by something that sounds like a human being sincere. People will also start believing it really is alive.
Just imagine how many not-so-obvious or nuanced ‘facts’ are being misrepresented. Right there, under billions of searches.
There will be ‘fixes’ for this, but it’s never been easier to shape ‘the truth’ and public opinion.
Inb4 summaries and opinion pieces start including phrases like “think of the children”, “may lead to dire consequences”, and “should concern everybody”.