Less than a month after banning ChatGPT over privacy concerns, the Italian government reversed its stance on OpenAI’s chatbot on Friday. The decision comes after the startup responded to concerns raised about the privacy and security of user data on the artificial intelligence platform.
“ChatGPT is available again to our users in Italy,” an OpenAI spokesperson told Decrypt in an emailed statement. “We are excited to welcome them back, and we remain dedicated to protecting their privacy.”
In late March, Italy joined a handful of countries—including Russia, China, North Korea, Cuba, Iran, and Syria—that implemented bans on using ChatGPT within their borders.
Italy initially imposed the ban in response to reports that ChatGPT was collecting and storing users’ data without their consent. Such concerns led other countries—including Canada, Germany, Sweden, and France—to open their own respective investigations into the massively popular tool.
Earlier this month, Italy's data protection watchdog, the Garante, offered OpenAI an olive branch that could reopen the door for the chatbot's return to the country.
The agency demanded that OpenAI implement age restrictions, clarify how data is processed, provide data management options, and allow users to opt out of their data being used.
While it did not expressly cite the situation in Italy, OpenAI rolled out a series of new features on Tuesday, including the ability for users to turn off their chat history and to opt out of having their conversations used to train the company's models.
“We have addressed or clarified the issues raised by the Garante,” the spokesperson told Decrypt, citing the publication of an article about its collection and use of training data, as well as steps to make its opt-out form and privacy policy more visible to users across the platform.
The spokesperson added that the company will continue to address privacy requests through email, introduce a new form for EU users to object to the use of their data in model training, and implement an age verification tool for users in Italy during signup.
The machine learning giant has also pledged to continue improving its security measures to protect user data and to address so-called AI “hallucinations,” instances in which an AI produces false or unsubstantiated information about people, events, or facts.
“We appreciate the Garante for being collaborative, and we look forward to ongoing constructive discussions,” OpenAI said.