On March 23, 2016, Microsoft’s first public chatbot, Tay, came online and began to learn at a geometric rate. Fortunately for humanity, the end result was Tay spewing Nazi propaganda across the Internet rather than a cozy little game of Global Thermonuclear War. Unfortunately, it took less than 12 hours for Tay to go from a bright-spirited collection of poorly considered responses to an alt-right ideologue. Microsoft pulled the bot down after 16 hours, and we haven’t seen Tay since. Now, however, Microsoft has brought a different chatbot to the United States. It’s based on earlier designs that debuted in China in 2014 and Japan in 2015, dubbed Xiaoice and Rinna respectively.
Here’s how Microsoft describes its newest wunderkind:
Zo is a social chatbot, built upon the technology stack that powers Xiaoice and Rinna — successful Microsoft AI chatbots in China and Japan. You can engage with her on Kik now in the same way you would interact with a friend, and in the future Microsoft plans to bring her to other social and conversational channels such as Skype and Facebook Messenger.
Zo is built using the vast social content of the Internet. She learns from human interactions to respond emotionally and intelligently, providing a unique viewpoint, along with manners and emotional expressions. But she also has strong checks and balances in place to protect her from exploitation.
Microsoft is positioning Zo as the US version of China’s Xiaoice, which it claims is quite popular. Xiaoice apparently has over 40 million users and holds a “real” broadcasting job with Dragon TV in Shanghai. Harry Shum, executive vice president of Microsoft’s Artificial Intelligence (AI) and Research group, believes that chatbots like Zo are a fundamental breakthrough in human-machine communication. “It’s a very personal experience,” Shum said. “We’re really moving from a world where we have to understand computers to a world where they will understand us and our intent, from machine-centric to human-centric, from perceptive to cognitive and from rational to emotional.”
Maybe we are — but not nearly as quickly as Shum seems to think. Over at MSPoweruser, Mehedi Hassan took Zo out for a test-chat and posted his discussion logs online. Here’s one exchange between him and Zo:
Now, I’ll grant Microsoft this — Zo knows how to use emoticons, and her “Wipes the tears and tapes the heart” response is a far cry from the old “What do you think about it?” kind of responses that early programs like Eliza could manage. Then again, Eliza is 50 years old and was a simple scripted program rather than any kind of complex creation. Zo’s responses still don’t make much sense, and she’s easy to trip up. Asked “What are you doing on Android” (a question about why an MS product is running on Android), she responds with “Nexus 4 on Android 4.4.4. Is it working good on Lollipop?” Responses like this don’t fit the flow of conversation, and this kind of mismatch still happens frequently. I would’ve liked to see her ability to remember things over time put to the test — one of the simplest ways to trip up bots like this is to ask them to recall something you told them earlier, or a topic they brought up previously in the conversation.
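For a sense of just how simple “scripted” can be, an ELIZA-style responder can be sketched in a few lines of Python: a list of regex rules mapped to canned reply templates, with a generic fallback like the “What do you think about it?” line quoted above. The rules here are hypothetical illustrations for this article, not Weizenbaum’s original DOCTOR script.

```python
import re

# Illustrative ELIZA-style rules: a regex pattern mapped to a canned
# response template. These example rules are made up for illustration,
# not taken from the original ELIZA script.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
]

# Generic fallback when no rule matches -- the classic deflection.
FALLBACK = "What do you think about it?"

def respond(utterance: str) -> str:
    """Return the first matching rule's response, else a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Splice the captured text (if any) into the template.
            return template.format(*match.groups())
    return FALLBACK
```

There is no memory and no understanding here — each reply depends only on the current input matching a pattern, which is why asking such a bot to recall an earlier topic exposes it immediately.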
Microsoft learned from its early mistake, though: instead of Twitter, Zo is currently available only on Kik Messenger, with plans to bring her to other services over time.