ChatGPT, a chatbot that uses generative artificial intelligence (AI) to perform a wide range of tasks, has raised privacy questions around the world. The system was released by OpenAI, a US company, in 2022, with an upgraded version launched in March 2023.
Unlike many other generative AI systems, the chatbot is famed for its capacity to perform tasks such as writing and answering exam questions at a level above that of the average person.
However, since generative AI systems such as ChatGPT produce responses based on existing data, there have been concerns about how the algorithm sources and manages that data.
Italian authorities recently imposed a temporary ban on ChatGPT over alleged privacy violations, and Reuters has reported that other European countries may take similarly decisive action.
OpenAI has not publicly addressed the ban, but has acknowledged that its language models are trained on a variety of data, including personal details. "While some of our training data includes personal information that is available on the public internet, we want our models to learn about the world, not private individuals," the company said.
Hanani Hlomani, a law graduate and Research Fellow with Research ICT Africa, said OpenAI has failed to prove that there are sufficient safeguards in place for the use of the technology.
"As most of us are aware, people are already making use of these generative AIs in very sensitive situations. That said, there’s the real potential of very personal, sensitive data being collected and used for ulterior reasons," he told Africa Legal.
"Language generative models such as GPT (generative pre-trained transformers) have indeed raised privacy concerns worldwide. Privacy issues are even more critical in countries with weak data protection enforcement mechanisms," he said, adding that the models use personal data which could be used for malicious purposes like social engineering or phishing attacks.
He noted that even though over 60% of African nations have enacted data protection laws, many of those laws are solid only on paper. "The real challenge is enforcement."
Nathan-Ross Adams, a South African lawyer who specialises in AI and robotics law, believes the Italian authorities could have responded better by choosing a route other than banning the technology.
"For instance, one of the risks of ChatGPT is that it provides unrestricted access to children. While this concern is serious and justified, I would respond differently. I would have recommended that OpenAI be obliged to implement a means to verify the age of a person who wants to access the website."
While advising African countries to strengthen their data protection regulations, Adams warned against an “overzealous approach”, which may compromise Africa’s ability to compete in the global digital economy.
"It is possible to have mechanisms like privacy sandboxes to manage the risks of technologies like AI and promote and protect rights," he said.