Dive Brief:
- Microsoft’s Chinese-language chatbot, Xiaoice, filters some topics, the tech giant confirmed in a report by Fortune.
- Users said that among the censored topics were references that are politically sensitive for the Chinese government, though Microsoft declined to confirm which topics are filtered.
- Topics the chatbot wouldn’t engage with included references to Tiananmen Square, unflattering nicknames for the Chinese president, questions about overthrowing the Communist Party and even references to U.S. President-elect Donald Trump.
Dive Insight:
The news points to the struggles tech companies face in balancing machine learning's potential to create human-like engagement with different societies' social mores. Microsoft, Facebook and others are finding that as they push out digital content — thereby becoming de facto publishers — they need to put more thought into the implications from a censorship perspective.
A challenge for Microsoft is that if Xiaoice is seen as too restrictive, or if users feel Microsoft is taking the government’s side on what can be discussed via the bot, they could lose interest in the technology. For now, Microsoft told CNNMoney that 40 million people engage with the bot on social media and messaging platforms such as Weibo and WeChat.
Microsoft has faced controversy around chatbots in the past. In March it released an AI bot called “Tay,” whose machine learning algorithm Twitter users almost immediately taught to become a sexist and racist Twitter troll. Microsoft pulled the chatbot and made changes to its machine learning, but still faced some of the same issues.