Editor's Note: The following is a guest post from Benjamin Lord, global director at WPP's Kinetic.
Artificial intelligence (AI) is dramatically changing the way we find and buy products. Brands have traditionally relied on targeted communications to stand out to customers, but they now need to adapt to a world where people fulfill their needs simply by chatting with Alexa or scanning an object with their smartphone's camera, with no branded content involved at all.
People seem to be fine, for now, with this kind of transaction-directed AI. After all, whether it's a voice assistant like Siri or a pattern-recognition technology like Shazam, it does the job for you and saves you time. It's working so well that industry analysts expect visual and voice search to generate 30% of e-commerce revenue by 2021, and half of the world's businesses to spend more on chatbots than on any other kind of mobile app development.
But many believe that as quickly as the application of AI in marketing is exploding, it's also growing out of control.
The rise of 'dark' AI
Earlier this year, developers at Facebook shut down a pair of bots after discovering that the two machines had invented their own language, indecipherable to humans, and were using it to exchange messages with each other. It may not have been an imminent threat to mankind, but it was a spine-chilling reminder of how machines can adapt along with the information they process, and a cautionary tale of what the future might hold.
And let's not forget it was an army of chatbots that came under fire for meddling in the U.S. election, using social media to spread fake news and hate speech. More recently, AI algorithms have been caught suggesting bomb-making components to Amazon shoppers and reinforcing gender bias in job postings.
People are starting to realize that AI is everywhere, and more and more of them see it as deceptive. As concerns over advertising technology go mainstream, the warning signs around the ethics of AI may prove an obstacle to its progress.
AI needs ethical standards
There's a big difference between personalizing content to be useful and manipulating the psychology of people. Attempts to filter out fake news or anti-Semitic ads from our feeds won't cut it.
Mattel is the first real casualty of this pushback. The toymaker was recently forced to cancel plans for an AI-powered device called Aristotle after complaints that "young children should not be guinea pigs for AI experiments" poured in from child advocacy groups, psychologists and politicians alike. So while most of advertising's issues with AI so far have revolved around data privacy, we can expect regulators to turn to these "psychological" concerns next.
Humans, of course, will control the limits of AI; at the end of the day, we can always turn the machine off. But marketers could seize this opportunity to own the ethical narrative on AI by establishing their own standards for its principled use today. In fact, that's been the message all along: everyone from Elon Musk to Bill Gates and John Giannandrea has warned about AI's inevitability while cautiously encouraging the industry to ensure the tech is implemented the right way.
Predictive morals
The problem here likely contains its own solution. Given its ability to gather and apply data that captures human sentiment, AI itself should be able to predict the point at which its application goes too far.
MIT scientists recently created a platform that gathers human perspectives on the moral decisions made by AI, such as self-driving cars. The first trials modeled rather morbid scenarios, such as whether a car should crash into five pedestrians or instead swerve into a cement barricade, killing its sole occupant. But the idea is that as AI accumulates data and comes to understand its deeper meaning over time, it will eventually be able to make and execute moral decisions more efficiently than a human.
This kind of perspective extends beyond AI to any number of social issues, which means advertisers and technology companies can show society that online data mining is being used not only commercially but also purposefully.