Why did Italy block ChatGPT?
Italy has blocked ChatGPT, saying the tool does not comply with data protection law. The regulator also points to the lack of age verification filters.
The exponential growth of artificial intelligence, used in recent months for almost anything, from writing texts in a few seconds to picking the most beautiful towns in a country, has recently prompted thousands of technology experts to call for a pause in its development. Among them are billionaire Elon Musk and Apple co-founder Steve Wozniak.
Why did Italy block ChatGPT?
Now, one of the world’s best-known AI tools, ChatGPT, has been temporarily blocked with immediate effect in Italy. The reason given for the ban on the OpenAI technology is that it does not respect personal data protection law. Italy’s Personal Data Protection Guarantor confirmed in a statement that an investigation has been opened into the matter, and the ban will remain in place until the tool “respects the privacy regulation”.
The Guarantor highlights as key concerns the “lack of information to users and to all interested parties from whom OpenAI collects data” and “the absence of a legal basis that justifies the massive collection and storage of personal data”. It adds that the information generated by ChatGPT “does not always correspond to reality”.
Since the tool’s launch, artificial intelligence experts have raised doubts about ChatGPT’s use of personal data. Italy’s regulator also noted the “absence of any kind of filter” for verifying users’ ages, even though the service is intended for people over 13 years of age.
It says this situation “exposes children to receiving responses that are absolutely inappropriate” for their level of development and awareness.
Italy gives OpenAI 20 days to comply
Italy has asked OpenAI to report, within a maximum of 20 days, on the measures adopted to comply with the Guarantor’s request. Failure to do so could result in a financial penalty of up to €20 million.
After the news broke, Guido Scorza, a member of the data protection authority, also criticized the lack of information about how users’ personal data is processed, and gave an example of the program providing incorrect information.
“If I ask the chatbot when Guido Scorza joined the Privacy Guarantor’s board, it tells me it was in 2016, but I joined in 2020. So in addition to unlawful data processing, the processing is in many cases inaccurate.”