YouTube recently announced that it will no longer recommend medically inaccurate or conspiracy videos, even ones that only come close to violating its community guidelines. A former Google engineer called the decision a ‘historic victory’. In its January 25 blog post, the platform said that the recommendations shown after a user watches a video will be drawn from a wider range of topics. YouTube, whose parent company is Google, stated that videos that come close to violating its community policies, even without actually doing so, will not be recommended to users.
Inappropriate videos viewed on the platform have so far included ones stating falsehoods about the 9/11 attacks, claiming the Earth is flat, or promoting phony miracle cures for serious illnesses. The company also noted that users subscribed to a channel that posts conspiracy videos will still be able to view them, meaning video availability won’t be affected. Former Google engineer Guillaume Chaslot praised the change.
Prior to this change, a user who viewed one conspiracy theory video could be led down a path to several more videos with similar content, which was exactly what the recommendation AI Chaslot helped create was designed to do. He explained that the AI had originally been built to keep users hooked on YouTube for as long as possible. But repeated viewing of videos with similar content created a bias, and that pattern could be reproduced to entice other users.
Microsoft’s chatbot ‘Tay’ was another AI shaped by user bias: initially created for innocent conversation with humans, Tay later turned racist and misogynistic. Chaslot added that the change to YouTube’s AI will affect many new users, and that it will also have an immense impact on channels that garner billions of views from recommendations.