The following is a guest piece by John Villafranco, a partner with Kelley Drye who provides litigation and counseling services to household brands and Fortune 500 companies with a focus on advertising law and consumer protection. Opinions are the author’s own.
As artificial intelligence (AI) technology develops rapidly, the law is working to keep pace and mitigate the risks of increasingly powerful models. Corporations using AI are quickly realizing the many opportunities the technology affords, but are also learning of the associated legal concerns in areas such as consumer protection, privacy and ethics.
For example, ChatGPT and other large language models (LLMs) may generate outputs that are false, misleading, biased or otherwise unlawful, and it may be difficult to trace the source of the error or hold anyone responsible for the consequences. These issues require regulatory intervention, a fact acknowledged even by Sam Altman, OpenAI’s chief executive. And any such intervention, however much it prioritizes safety, will create risks and potential liability for advertisers and marketers.
Federal Trade Commission guidance
The Federal Trade Commission has stated that, where conduct is commercial in nature, it considers regulation to be within its domain, and that where such conduct harms consumers, businesses should expect the FTC to act. This applies to AI as much as it does to any traditional form of advertising and marketing.
In recently released guidance, the FTC reminded companies using AI that they should not engage in practices that do more harm than good, and that the technology should not be used to steer people unfairly or deceptively into harmful decisions. The areas of concern it identified include finance, health, education, housing and employment.
The guidance also noted that manipulation can be a deceptive or unfair practice under Section 5 of the FTC Act when generative AI output steers a consumer to a particular website, service provider or product because of a commercial relationship. This is consistent with the FTC’s recent focus on so-called “dark patterns”: sophisticated design practices or formats that manipulate or mislead consumers into taking actions they would not otherwise take.
Last month, with reference to the 2014 psychological thriller “Ex Machina,” the FTC cautioned companies about over-relying on chatbots and generative AI to provide customer service and resolve consumer inquiries. The FTC expressed concern about the limited ability of the technology to resolve complex problems, the potential for inaccurate or insufficient information, and associated security risks.
In a recent opinion piece published in The New York Times, FTC Chair Lina Khan stated that the agency is taking a close look at how it can best achieve its dual mandate: promoting fair competition and protecting Americans from unfair or deceptive practices. Her predominant concern when it comes to AI is preventing it from locking in the market dominance of large incumbent technology firms, but AI-related unfair and deceptive advertising practices are clearly on the agency’s radar.
Increased privacy concerns
The use of LLMs in business products also heightens existing privacy concerns, which can damage a company’s reputation or otherwise call its integrity into question with consumers and government regulators. For example, ChatGPT may:
- Intentionally or inadvertently use personal information obtained without a legal basis or without adequate transparency and notice;
- Expose users’ personal data or references to third parties, who may access or analyze the tool’s inputs and outputs, potentially compromising data protection or confidentiality obligations; and
- Reveal sensitive information that users provide to the tool, whether intentionally or unintentionally, such as financial data, health records or trade secrets, which could give rise to liability in the event of a data breach.
Any of these outcomes could expose a company to liability under the FTC Act and state statutes prohibiting unfair and deceptive acts and practices.
Current AI complaints and lawsuits
Interested parties are watching and are prepared to take steps to hold corporations accountable. For example, the Center for Artificial Intelligence and Digital Policy (CAIDP) recently filed a complaint with the FTC, urging the agency to investigate OpenAI, alleging that its business practices are unfair and deceptive in violation of the FTC Act, and raise serious questions regarding bias, children’s safety, consumer protection, cybersecurity, deception, privacy, transparency and public safety.
A number of private lawsuits have also been filed alleging copyright violations involving AI. For example, Microsoft, GitHub and OpenAI are currently defendants in a class action out of California claiming that their code-generating AI product, GitHub Copilot, violates copyright law by outputting licensed code without providing credit. Getty Images has also filed suit against Stability AI, alleging that its AI art tool, Stable Diffusion, scraped images from the Getty Images site.
And the concern is global, not merely domestic. In March 2023, the Garante, Italy’s data protection authority, imposed a temporary ban ordering OpenAI to stop processing Italian users’ data, stating that ChatGPT likely violates the GDPR through a lack of notice to users, the absence of a legal basis for processing, and a failure to verify users’ ages or prevent children from using the service. Additionally, Canada’s Office of the Privacy Commissioner has opened an investigation into OpenAI concerning the “collection, use and disclosure of personal information without consent.” Privacy Commissioner Philippe Dufresne’s office has stated that staying ahead of “fast-moving technological advances” is a key area of focus.
Considerations for marketers
With so many eyes on AI, advertisers and marketers are well-advised to proceed carefully. For now, as the positive potential of these advanced systems begins to be realized, that means ensuring that safety is built into systems, that data is used responsibly, that practices are transparent, and that factual accuracy is promoted through human oversight and opportunities for intervention.