The US Government will begin examining systems that work with AI, such as ChatGPT, amid concerns that they could put user information at risk.
The Federal Trade Commission (FTC) of the United States will start investigating chatbots and other systems that work with Artificial Intelligence (AI), following various complaints that they violated consumer protection laws and endangered users' private data.
Systems like ChatGPT gained huge popularity after the chatbot's free public release last November; the technology developed by OpenAI reached a record 100 million downloads in two months.


In this context, the commission sent a civil investigative demand to the startup led by Sam Altman, asking it to outline its practices for training AI models and handling users' personal information.

The FTC asked OpenAI to provide details of all complaints it had received about its products making "false, misleading, derogatory or harmful" statements.
Among the complaints cited is an incident that occurred in 2020, when the company disclosed a bug that allowed users to see information from other users' chats as well as their payment details. In addition, the commission is investigating whether the company engaged in unfair or deceptive practices that resulted in "reputational damage" to consumers.
The agency also required a detailed description of the data OpenAI uses to train its products, and of how it is working to prevent what the tech industry calls "hallucinations": a problem in which the chatbot's responses are well structured but completely wrong.
If the FTC determines that a company has violated consumer protection laws, it can impose fines or place the company under a consent decree, which can dictate how it handles data.
Complaints prior to ChatGPT
The United States is not the first government to focus on OpenAI, as other countries also considered the legality of data use by this type of company. The Spanish Data Protection Agency announced a few weeks ago that it began “preliminary investigative actions against the American company that owns ChatGPT.”
The data-misuse concern stems from the fact that these companies require large amounts of data to teach their models to process human language and articulate the answers that users request. Until now, these entities have captured whatever data was within reach, regardless of ownership or copyright, which has also aroused suspicion among the producers of that information.
Source: Ambito