Most AI applications do not meet GDPR regulations or transparency standards
- A study by Straits Interactive’s research arm shows that most AI applications do not meet GDPR regulations or transparency standards.
- Many AI apps from independent developers lack privacy policies.
- The Digital Services Act could lead to improvements in AI application standards.
When the General Data Protection Regulation (GDPR) was implemented, its main purpose was to protect the privacy and data security of citizens of the European Union (EU). Today, the GDPR has become the benchmark for data privacy regulations around the world, and many data privacy laws mirror the rules it sets out.
Since the regulation took effect, numerous companies have been found in breach of its rules. The largest GDPR fines have involved major tech companies that used customer data in ways the regulation prohibits.
The growth in AI adoption has (so far) managed to stay under the radar of GDPR enforcement. But regulators are aware of the potential problems AI could cause, especially in how it sources and uses data to generate insights for its users.
As such, the EU’s Digital Services Act (DSA), which comes into effect this week, will introduce new rules on content moderation, user privacy, and transparency. According to a report by Reuters, a host of internet giants, including Meta’s Facebook and Instagram platforms, ByteDance’s TikTok, and several Google services, are adapting to these new obligations – including preventing harmful content from spreading, banning or limiting certain user-targeting practices, and sharing some internal data with regulators and associated researchers.
Do GDPR and DSA regulations have an impact on AI?
Research by the Data Protection Excellence (DPEX) Centre, the research arm of Straits Interactive, has revealed significant privacy concerns in generative AI desktop applications, particularly those from startups and individual developers. The study, covering 113 popular apps, underscores the potential risks to which users might unwittingly expose their data.
The study, which was conducted from May to July this year, focused on apps primarily from North America (48%) and the European Union (20%). Selection criteria included recommendations, reviews, and advertisements. The apps were categorized as:
- Core apps: industry leaders in the generative AI sector.
- Clone apps: apps built on top of core apps’ APIs (application programming interfaces), typically by startups, individual developers, or small developer teams. A minimal sketch follows this list.
- Combination apps: existing applications that have incorporated generative AI functionalities.
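To make the “clone app” category concrete, below is a minimal, hypothetical sketch of how such an app is often built: a thin wrapper around a core provider’s API. OpenAI’s chat completions endpoint is used purely as an illustration; the model name, prompt, and function are assumptions rather than details from the study.

```python
# Minimal, hypothetical "clone app": a thin wrapper around a core
# provider's API. Requires the `openai` package and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def rewrite_text(user_text: str) -> str:
    """Send the user's text to the core provider and return the result.

    Note that user_text leaves the app and is processed by a third
    party -- exactly the kind of data flow the study says many privacy
    policies fail to disclose.
    """
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name assumed for illustration
        messages=[
            {"role": "system",
             "content": "Rewrite the user's text in a formal tone."},
            {"role": "user", "content": user_text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(rewrite_text("hey, can u send me the report asap?"))
```

The point of the sketch is the data flow: whatever the user types is forwarded to a third-party API, which is why the privacy-policy gaps the study identifies matter.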
Findings from the study indicate that 12% of the apps lacked a published privacy policy; these were predominantly from startups and individual developers. Of the apps that did publish privacy policies, 69% identified a legal basis (such as consent or contract performance) for processing personally identifiable information (PII).
Only half of the apps meant for children considered age restrictions and aligned with child privacy standards such as the Children’s Online Privacy Protection Act (COPPA) in the United States and/or the GDPR in the EU.
The study also showed that while 63% of the apps cited the GDPR, only 32% appeared to fall within its purview. The majority, though globally accessible, alluded to the GDPR without apparently understanding when it applies outside the EU. Of the apps where the GDPR did seem relevant, only 48% were compliant, and some overlooked its international data transfer requirements.
Meanwhile, 35% of the apps did not specify data retention durations in their privacy policies, even though this is required by the GDPR and other regulations – a notable omission given that users often share proprietary or personal data with these apps.
Transparency in AI
Transparency about the use of AI was limited across these apps. Fewer than 10% clearly disclosed their AI use or model sources, 64% of the 113 apps surveyed remained ambiguous about their AI models, and only one clarified whether AI influences decisions made about user data.
Currently, only OpenAI, Stability AI, Hugging Face, and big tech offerings such as AWS Bedrock and GitHub disclose the existence of their AI models. Unfortunately, the remainder primarily relied on established AI APIs, such as those from OpenAI, or integrated multiple models.
The study also shows a tendency among the apps to collect more user PII than their primary function requires. With 56% using a subscription model and 31% leaning towards advertising revenue, user PII becomes invaluable. The range of data collected – from specific birth dates, interaction-based inferences, and IP addresses to online and social media identifiers – suggests potential ad-targeting objectives.
“This study highlights the pressing need for clarity and regulatory compliance in the generative AI app sphere. As organizations and users increasingly embrace AI, their corporate and personal data could be jeopardized by apps, many originating from startups or developers unfamiliar with privacy mandates,” commented Kevin Shepherdson, CEO of Straits Interactive.
Lyn Boxall, a legal privacy specialist at Lyn Boxall LLC and a member of the research team, pointed out that it’s significant that 63% of the apps reference the GDPR without understanding its extraterritorial implications. Boxall believes that many developers lean on automated privacy notice generators rather than actually understanding their app’s regulatory alignment.
“With the EU AI Act on the horizon, the urgency for developers to prioritize AI transparency and conform to both current and emerging data protection norms cannot be overstated,” added Boxall.
Given the pressing regulatory push to bring emerging technology under control, both companies and individuals need to invest proactively in learning, training, and skills development to keep pace with the swiftly evolving generative AI landscape.
In response to these findings, Straits Interactive’s DPEX Centre has introduced the Certified AI Business Professional program. The initiative aims to train business practitioners in the responsible and ethical use of these tools while enabling them to enhance their organizations’ value proposition.