Dive Brief:
- The Center for AI and Digital Policy filed a complaint on Thursday with the Federal Trade Commission, urging the agency to investigate OpenAI, bar commercial releases of GPT-4 and establish guardrails to protect consumers, businesses and the commercial marketplace.
- “The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair and empirically sound while fostering accountability,’” the complaint said. “OpenAI’s product GPT-4 satisfies none of these requirements. It is time for the FTC to act.”
- The complaint submitted to the FTC follows an open letter from a separate group calling for an industrywide pause on AI training.
Dive Insight:
Legislators previously called for regulatory intervention on generative AI models, joining a chorus of ethicists, technology experts and business leaders.
In late January, Rep. Ted Lieu, D-Calif., used ChatGPT to write legislation calling for Congress to increase its focus on AI. Lawsuits related to intellectual property disputes, including a class-action suit filed in San Francisco in November against GitHub, Microsoft and OpenAI, furthered the push for oversight.
The FTC is already looking into competition in the technology space and last week opened an inquiry into cloud market competition, with an eye toward potential security risks.
AI is top of mind across the FTC, agency Chair Lina Khan said Monday during the 2023 annual Antitrust Enforcers Summit. “Sometimes we see claims that are not fully vetted or not fully reflecting how these technologies work. So there could be high error rates, there could be high rates of discrimination, and so we need to make sure the companies are not overselling or overstating their AI capacities.”
The FTC launched an Office of Technology in February to strengthen and support law enforcement investigations and actions, advise and engage with staff, and highlight market trends and emerging technologies that impact the agency’s work.
“For more than a century, the FTC has worked to keep pace with new markets and ever-changing technologies by building internal expertise,” Khan said in the announcement. “Our office of technology is a natural next step in ensuring we have the in-house skills needed to fully grasp evolving technologies and market trends as we continue to tackle unlawful business practices and protect Americans.”
The office includes data scientists, data engineers, AI specialists and design ethics specialists, Khan said during the Monday summit. Within the first few days of posting job openings, the office received between 300 and 400 applications.
“We're also now increasingly living in a world where AI can be used to create very realistic simulations, and that creates high rates of deception, high risk of fraud,” Khan said. “That's something that we are also looking at closely.”
The FTC confirmed to CIO Dive Thursday that it had received the CAIDP complaint but provided no additional comment. CAIDP did not immediately respond to requests for comment.
Many experts, tech leaders and industry watchers are beginning to voice their concerns as advocacy groups and institutes provide an avenue for public discourse through open letters and complaints, though getting everyone to agree is unlikely.
UNESCO called on countries to immediately and fully implement its recommendation on the ethics of AI. Its framework, if “adopted unanimously by the 193 member states of the organization,” would provide the necessary safeguards, UNESCO said in a release Thursday. The call to action was in response to the open letter published by the Future of Life Institute.
The letter, which called for a six-month AI training moratorium, was met with skepticism. Some tech leaders, experts and industry watchers felt that voluntary guidance wasn’t enough, and others voiced concerns about pushing the pause button on innovation and AI training.
“The basic response is that regulation is far behind the technology, and while this has always been the case, the development in large language models like ChatGPT is a phase change in capability, which we don’t fully understand,” Ramayya Krishnan, dean of Carnegie Mellon University’s Heinz College, said in an email. “Couple this with the fact that the tech is currently available only with a handful of players – OpenAI, Google, etc. – and is not open to inspection and study of emergent properties.”
Krishnan said industry needs an open consortium with the compute resources, data and governance to study the models and their properties.