On March 20, Norwegian citizen Arve Hjalmar Holmen filed a complaint against OpenAI with the Norwegian data protection authority after ChatGPT generated false information about him. "What scares me the most is that someone might read this [story] and believe it to be true," he stated. When he asked the chatbot whether it had any information on him, it produced a "fabricated horror story."
ChatGPT claimed that Holmen was a criminal sentenced to 21 years in prison for murdering two of his children and attempting to murder his third son. The story included accurate details about his life, such as the number and gender of his children and the name of his hometown. According to the privacy advocacy group noyb, this is "undoubtedly" a violation of the European General Data Protection Regulation (GDPR), which requires personal data to be accurate, because the output mixes true and false information.
Since the incident, OpenAI has updated its AI model. ChatGPT, which now also searches the web for information, no longer presents Holmen as a murderer, but noyb points out that the incorrect data may still be part of the language model's dataset. The group emphasizes that it is impossible to know whether the false information about the complainant has been completely removed, and it is calling on the Norwegian data protection authority to impose an administrative fine on OpenAI to prevent similar violations in the future.