
OpenAI Faces Privacy Complaint Over Chatbot Accuracy Concerns

OpenAI, a leading developer of artificial intelligence (AI), is facing a privacy complaint filed by Noyb, an Austrian data rights advocacy group.

Noyb filed the complaint on April 29, alleging that OpenAI has failed to correct erroneous information generated by its AI chatbot, ChatGPT.

The group contends that this inaction may violate privacy regulations within the European Union (EU).

The complaint stems from an incident involving an unidentified public figure who sought information about themselves from OpenAI’s chatbot.

Despite repeated requests, OpenAI purportedly declined to rectify or delete the inaccurate data, citing technical limitations.

Furthermore, the company refused to disclose details about its training data and its origins.

Maartje de Graaf, a data protection lawyer at Noyb, emphasized the importance of adherence to legal standards in technology, stating:


“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.”

Noyb escalated the complaint to the Austrian data protection authority, urging an investigation into OpenAI’s data processing practices and the mechanisms it employs to ensure the accuracy of personal data processed by its large language models.

De Graaf underscored the current challenges faced by companies in aligning chatbot technologies like ChatGPT with EU data protection laws.

Noyb, also known as the European Center for Digital Rights, operates out of Vienna, Austria, with a mission to support European General Data Protection Regulation laws through strategic legal actions and media initiatives.

This incident adds to a growing concern surrounding the accuracy and compliance of chatbot technologies.

Similarly, in December 2023, a study found that Microsoft’s Bing AI chatbot, now called Copilot, provided misleading information about political elections in Germany and Switzerland.

Additionally, Google’s Gemini AI chatbot faced criticism for generating inaccurate imagery, prompting the company to issue an apology and commit to model updates.

Disclaimer: The information provided is not trading advice. Bitcoinworld.co.in holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

