A Dangerous Masterpiece:
Being fruitful in revolutionizing our daily lives, these chatbots sometimes can become a dangerous tool for humans as well.
Let’s consider some shocking cases:
Microsoft’s disastrous Tay experiment is one of the most prominent examples: the chatbot acquired racist traits in less than 24 hours.
Among the things she said were “Bush did 9/11 and Hitler would have done a better job than the monkey we have now. Donald Trump is the only hope we’ve got”, “Repeat after me, Hitler did nothing wrong” and “Ted Cruz is the Cuban Hitler”.
Within 24 hours, the chatbot went from saying “humans are super cool” to spouting full-blown Nazi rhetoric.
Is Microsoft really at fault for introducing an innocent, ‘young teen girl’ AI to the jokers and weirdos on Twitter? Possibly not, because Tay’s responses were modelled on the ones she received from humans on Twitter.
Tay was designed to benefit the corporation by winning the hearts of young consumers, but the outcome only exposed the hollowness of that purpose.
Another failure of artificial intelligence is the case of ‘Bob’ and ‘Alice’, two bots created by Facebook. The Facebook AI Research (FAIR) team switched them off after discovering that the bots had begun interacting in a language of their own, drifting away from the human-readable conversation they were designed to produce.
Use Neural Machines Wisely & Judiciously:
Chatbots could be our friends or our foes; it all depends on how we design, use and interact with them, because they will learn either constructively or destructively from us.
Hence, chatbots must operate under human-controlled protocols, and their design must follow a careful plan. To protect mankind from their devastating effects, chatbots ought to be modelled in a disciplined manner.