US government investigates OpenAI's ChatGPT app
The investigation aims to find out if ChatGPT generates 'hallucinations' and mishandles user data.
OpenAI, a company backed by Microsoft, received a 20-page questionnaire notifying it of the investigation. The questionnaire asks OpenAI to explain any incidents in which the chatbot made false or disparaging statements about users and to describe the measures the company is taking to prevent this from happening again.
This news was first reported by The Washington Post.
ChatGPT, which was released by OpenAI last November, amazed the world with its ability to generate human-like text in seconds. It showcased the power of large language models (LLMs) and generative AI.
However, along with the excitement about the technology's capabilities came reports that the models sometimes produced offensive, false, or strange content. These fabrications came to be known as "hallucinations."
Lina Khan, the chair of the FTC, addressed a congressional committee hearing and expressed concerns about ChatGPT producing potentially harmful and false information. She mentioned instances where people's private information appeared in response to someone else's inquiry, as well as cases of libel and defamation.
The FTC's investigation mainly focuses on the potential harm these hallucinations could cause to users. The questionnaire also explores how OpenAI uses private data to develop its most advanced model, GPT-4, which serves as the foundation for ChatGPT and for other programs that companies can access by paying a fee to OpenAI.
It's important to note that an FTC investigation doesn't always lead to further action; the case can be closed if the company under investigation satisfies the regulator with its response. However, if the FTC finds illegal or unsafe practices, it may demand corrective actions or even file a lawsuit.
As of now, neither OpenAI nor the FTC has commented on the investigation.