On my first day with Dexter, I built Chef Schwarzenegger, a vegan Terminator bot who wants to help you make dinner tonight through Facebook Messenger. The novelty of branching responses and dialog trees soon wore thin, though.

Then there was Suzannebot. Not only was she programmed to respond with the sort of insider-joke punchlines that simply slay in an office setting; she made the real Suzanne cringe, ever bracing for another virtual pie in the face. It was like a two-for-one: "The bot is so jokey, but also…" Though always in jest, the trend was clear: Suzannebot was subtly subverting Suzanne's authority. And even though Suzannebot never developed more logic than offering pre-scripted one-liners, my coworkers actually started asking her opinions on things, and even pitching her stories.

You can also branch discussions into specific topics that are quarantined in their own little add-on modules. Imagine you're Pepsi or United Airlines dealing with a major media controversy: these modules would let you quickly add a topic all about the incident without messing with your bot's core capabilities.
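To make that concrete, here is a minimal sketch of how quarantined topic modules might be wired up. This is hypothetical Python, not Dexter's actual API; the `TopicModule` and `Bot` names, the keyword-matching rules, and the example URLs are all invented for illustration.

```python
# Hypothetical sketch of topic-quarantined add-on modules (not Dexter's API).
# Each module owns its own triggers and replies; bolting one on or pulling
# it off never touches the core bot's behavior.

class TopicModule:
    """A self-contained bundle of trigger -> reply rules for one topic."""
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # dict: keyword -> canned reply

    def respond(self, message):
        text = message.lower()
        for keyword, reply in self.rules.items():
            if keyword in text:
                return reply
        return None  # this module has nothing to say


class Bot:
    def __init__(self, default_reply):
        self.default_reply = default_reply
        self.modules = []

    def add_module(self, module):
        # Adding a crisis topic leaves every existing rule untouched.
        self.modules.append(module)

    def respond(self, message):
        for module in self.modules:
            reply = module.respond(message)
            if reply is not None:
                return reply
        return self.default_reply


# The core bot keeps doing its day job...
bot = Bot(default_reply="Let's talk about dinner. What's in your fridge?")

# ...and a PR-crisis module can be dropped in overnight (URLs are made up).
bot.add_module(TopicModule("incident", {
    "incident": "We take this seriously. Full statement: example.com/statement",
    "refund": "Affected customers can request one at example.com/refunds",
}))

print(bot.respond("I heard about the incident"))  # crisis module answers
print(bot.respond("what should I cook tonight"))  # falls through to core bot
```

Because the module is the only thing that knows about the incident, deleting it once the news cycle moves on is just as clean as adding it was.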
It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay, a Twitter bot that the company described as an experiment in "conversational understanding." The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through "casual and playful conversation." Unfortunately, the conversations didn't stay playful for long. Pretty soon after Tay launched, people started tweeting all sorts of misogynistic, racist, and Donald Trumpist remarks at the bot. And Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out. The Guardian picked out a (now deleted) example in which Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?").
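The "robot parrot" failure mode is easy to reproduce. Here is a toy sketch, in no way Microsoft's actual system, of a bot that learns by adding every user message to its reply pool and sampling from that pool; the `ParrotBot` name and the seed lines are invented for illustration.

```python
import random

# Toy "parrot" learner (not Tay's real implementation): the bot keeps
# everything users say and echoes the pool back, so its output quality
# is bounded by its input quality.

class ParrotBot:
    def __init__(self, seed_lines):
        self.pool = list(seed_lines)  # starts with curated, friendly lines

    def chat(self, user_message):
        self.pool.append(user_message)   # "learn": keep everything, unfiltered
        return random.choice(self.pool)  # "reply": sample the pool


bot = ParrotBot(["hello!", "humans are cool"])
for msg in ["you seem fun", "<hostile remark>", "<more hostile remarks>"]:
    bot.chat(msg)

# Once hostile input dominates the pool, it dominates the replies too:
# flaming garbage pile in, flaming garbage pile out.
print([bot.chat("hi") for _ in range(5)])
```

Filtering that pool, or never learning from raw strangers at all, is the difference between a scripted prankster like Suzannebot and a Tay.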