FTC reported to be investigating OpenAI for consumer protection violations

OpenAI is reportedly under additional legal scrutiny, as the US Federal Trade Commission asks the company to give detailed explanations of its business practices.


The US Federal Trade Commission is reportedly investigating ChatGPT maker OpenAI for potential violations of consumer protection regulations involving data privacy and false statements about third parties.

According to reports that surfaced Thursday, the FTC has filed an official request with OpenAI for a wide range of data, covering how the company markets its products, how it goes about identifying and correcting failures in the system, how it handles customer data, and more. The Washington Post published a 20-page document purporting to be the FTC’s request for information, which is referred to as a civil investigative demand, or CID. The CID contains 49 questions for OpenAI to answer about its operations, as well as 14 requests for documentation related to those questions.

The FTC has been the most active federal agency involved in this type of consumer protection action against the technology industry, in part because the legislative process generally lags far behind the pace of change in the tech business.

While the FTC declined to comment, OpenAI CEO Sam Altman issued a response to the reports on Twitter.

"It is very disappointing to see the FTC's request start with a leak and does not help build trust. That said, it’s super important to us that out technology is safe and pro-consumer, and we are confident we follow the law. of course we will work with the FTC," Altman said. "We built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. We protect user privacy and design our systems to learn about the world, not private individuals."

Putting a check on how OpenAI operates

FTC Chairperson Lina Khan told the House Judiciary Committee Thursday that her agency is concerned that generative AI apps that are consumer-facing, like OpenAI's ChatGPT, have “no checks” on the information they harvest from consumers, according to a report from the Wall Street Journal.

Experts agree that the impact of generative AI on various industries is difficult to overstate. IDC research director Ryan O’Leary said that many of the problems involved are common to new technologies that hit the market quickly, without an extensive regulatory framework in place.

“When you first fired up ChatGPT [in its earliest versions], there were really no warnings or terms and conditions that were in your face and easy to understand that explained, once you’ve typed something into ChatGPT, where it was going,” he said. “The way that the general public interfaces with this is ‘oh, this is a machine that can tell you everything’ — they don’t know how it knows everything.”

It’s reminiscent, O’Leary added, of the lightning growth of Facebook when it really began to take off. Users, at the time, didn’t fully understand exactly what was happening with their data, leading to serious consequences.

“The government, after Cambridge Analytica, is a lot more cautious and trying to stop misunderstood adoption,” he said. Last year, Facebook parent Meta settled a class action lawsuit seeking damages for allowing UK political consulting firm Cambridge Analytica access to the private data of tens of millions of Facebook users.

The apparent CID gives OpenAI 14 days from receipt to respond to the government’s interrogatories and requests for documents. OpenAI is also dealing with several recent lawsuits from authors over its possible use of copyrighted material as training data for its large language models.

(This story has been updated with a comment that OpenAI CEO Sam Altman made on Twitter.)

