During the Paris 2024 Olympics, Google featured an ad for its generative AI tool, Gemini. In it, a father explains how he asked Gemini to help his daughter write a letter to American Olympic hurdler Sydney McLaughlin-Levrone. The ad received swift backlash, with some viewers upset that a father would teach his daughter to use AI to express herself and others voicing general discomfort.
Google pulled the ad within days, saying in a statement to multiple outlets, “We believe that AI can be a great tool for enhancing human creativity, but can never replace it.”
The incident highlights consumers’ wariness over AI, even as companies spend billions on it. Worldwide spending on AI and related business services has reached $235 billion, according to International Data Corporation. Marketers are also spending millions to advertise AI-powered services and products, with about $200 million spent from January to early August, TV measurement firm iSpot told CX Dive.
While acceptance of the technology is slowly growing, consumers regularly indicate that they are skeptical of AI. Research published in the Journal of Hospitality Marketing & Management adds to that skepticism: the study found that AI terminology in product descriptions actually decreases customers' purchase intention.
“Every experiment that we have seen, if you use AI, it decreases the purchasing intention,” said Mesut Cicek, assistant professor of marketing and international business at Washington State University. “We provided them some text about the product, product descriptions, and then the only difference between the descriptions is in one, it includes AI. In the other one, it doesn't include AI.”
Cicek and his colleague conducted a series of experiments. In the first, roughly 300 participants were shown a product description for a TV. The descriptions were nearly identical, but one was described as an “AI-powered TV” while the other was a “new technology TV.”
Participants were then asked questions to determine their willingness to buy the TV. Those who saw AI in the product description were less likely to make the purchase.
Researchers repeated the experiment with another 200 participants, this time with an “AI-powered car,” and the results were more pronounced: purchase intention decreased significantly.
“If it's a perceived risky product, this effect is higher,” Cicek said.
In subsequent experiments on the use of “AI” to describe services, risk played a role in purchase intention as well. AI-powered customer service was perceived as lower risk, while AI-powered illness diagnosis was perceived as high risk. Both saw decreased purchase intention, but the drop was more pronounced for illness diagnosis.
The role of trust
To Cicek, the most notable finding was the impact the AI term had on emotional trust, which can significantly affect consumer attitudes and behaviors.
“The main findings of this study is the use of AI decreases emotional trust,” Cicek said. “The consumers have trust issues with AI, and then also it decreases the purchase intention.”
Consumers have concerns about the privacy, security and safety of companies using AI. That, coupled with the public’s fear of the unknown and questions about the impact of AI on autonomy, can all chip away at trust.
AI is an elusive concept for consumers — and in many ways a threatening one, Audrey Chee-Read, principal analyst at Forrester, said.
“It feels more like an umbrella term that's going to take their job and take away their intellect,” Chee-Read said. “Over half of the consumers believe AI poses a significant threat to society.”
There are two main factors behind this distrust, Chee-Read said:
- The first is a perceived threat to consumers’ ethics and morals, which includes “misinformation, disinformation, copyright infringement — what does this mean for society?”
- The other is output accuracy, which considers, “Is it actually going to do the job it's supposed to do?”
Recent research from KPMG adds to those findings. Consumers' top two concerns with AI services are that they won’t be able to interact with a human and that their personal data won’t be secure, according to Jeff Mango, managing director of advisory customer solutions at KPMG.
“Why are these people seeing the word AI and retracting their sale or being concerned about going forward with their sale?” Mango said. “Because both of those genuinely talk to risks they perceive. They perceive, ‘I'm not really going to get the help I need because I can't talk to a human, and I believe I need to talk to a human,’ or ‘I believe that my personal information is not secure.’”
But the AI label can be a turnoff for consumers for a simpler reason: perceived complexity.
Consumers are also less likely to buy something they view as complicated, Bruce Temkin, chief humanity catalyst at temkinsight, said.
“The general public views AI as being complicated, so attaching a generic AI label without any further explanation would likely lead many people to think that the item for sale is complex and difficult to understand or use,” Temkin said via email. “People will pay a premium for something they perceive as being easier to use, and the opposite is true, they’ll pay less for something they believe is more difficult.”
Products like an AI-powered car might be considered risky not only because of the higher price point, but also because it might appear more difficult to operate, Temkin said.
How should companies build trust?
Experts agree that the term “AI” is overused and, in some cases, has lost all meaning.
“Companies are using AI everywhere,” Cicek said, even when AI technology isn’t present.
For companies that want to build trust with consumers, accuracy and transparency are paramount.
“First and foremost, stop throwing around the term ‘AI’ like it’s a marketing nirvana,” Temkin said. “Not only can it increase risk, but it’s being so overused that it adds little value for explaining the value of your offerings. If you think AI is a differentiator, try and describe that feature more explicitly, like ‘AI-powered safety brakes.’”
At the bare minimum, CX leaders need to make sure they’re following rules and regulations, Chee-Read said. She encourages companies to develop AI governance plans and to train employees on how to responsibly use AI. On a more basic CX level, leaders need to make sure that the experience they are creating with AI is consistent with their brand and that it gives value to consumers.
From there, CX leaders can identify how AI solves a specific customer need.
“People don’t want to buy ‘AI,’ but they are probably willing to pay more if you can create more value for them using AI,” Temkin said. “So the strategy remains the same as always, focus on value first, and then determine the messaging that brings that value to life.”
If there is no value — or no clear value — consumers become distrustful.
“If I go to the regular average grocery store and the aisle is powered by AI, I don't know what that means,” Mango said. “I don't know why that's going to help me. I'm just lost, and so therefore I become very distrusting.”
Brands can also ease concerns through transferable trust, Mango said. If a brand has a good reputation with consumers, that reputation is likely to transfer to its use of AI.
Building this trust ought to be integral to a company’s AI approach; failing to do so can harm not only customer relationships but also a company’s bottom line.
“If trust increases, purchasing intention increases, sales increase,” Cicek said.