Because releasing a chatbot that's going to learn from Twitter is a great idea

Whether you agree with the analysis or not, the result is pretty hilarious. Important part below:
Quote:
Last week, Microsoft unveiled “Tay,” an artificially intelligent chatbot who could “learn” by “speaking” with actual humans on Twitter. If you’ve spent any time on Twitter, you’re probably already laughing, perhaps in an alarming, maniacal fashion. This is because everyone knows that Twitter is The Meanest Place On The Internet™, with the added bonus of being the only place in the world where you can get repeatedly harassed by strangers with names like @MagnusKillEmAllXXX or @WhitePowerTaylorSwift4U.
Those already jaded by the shenanigans of 2016 can probably guess what happened next. Within the space of 16 hours, Tay, the technological “advancement” designed to chat like a millennial—Microsoft even gave her a fetching teenage/Star Trek-ish “sexy alien” face—became obsessed with National Puppy Day, begged strangers for selfies, denied the Holocaust, called for a race war, cheerfully supported genocide, labeled Ted Cruz “a Cuban Hitler,” sexually bantered with people she called “Daddy,” and supported Donald Trump.
Tay, in other words, got the whole Twitter thing down cold. Horrified, Microsoft promptly apologized, yanked Tay offline for some “sleep,” and blamed “a coordinated attack by a subset of people” who “exploited a vulnerability in Tay.” “We,” the company’s announcement continued, “will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes.”
Because this was definitely a coordinated attack, and not just the inane nature of Twitter in the first place. I struggle to see how this result was anything other than entirely predictable. An exercise in Gone Horribly Right.