Microsoft again testing the AI chat bot waters
Microsoft is once again testing the AI chat bot waters with Zo, a seemingly reworked bot that was recently discovered on Kik by Twitter user Tom Hounsell.
As you may remember, Microsoft's first at bat, a chat bot named Tay, didn't even last a full 24 hours before the Redmond-based company took it offline. Turns out, letting an impressionable chat bot designed to mimic a 19-year-old girl loose across multiple platforms on the Internet isn't all that great of an idea. Within hours, Tay was spewing racist comments left and right, offending hordes of people in the process.
According to MSPoweruser, Zo is best described as a censored version of Tay or an English variant of Xiaoice, Microsoft's Chinese chat bot. It's apparently pretty good at normal conversation, although if you try to steer it toward a controversial topic like politics, it'll politely decline to engage.
Limiting Zo to Kik is likely a wise move as well. It gives Microsoft an opportunity to further fine-tune the bot before opening it up to larger, more active platforms like Twitter and GroupMe. How long that'll take, however, is anyone's guess, especially considering Microsoft hasn't yet officially announced Zo (someone just stumbled upon it).