

Writing About AI
Uvation
Reen Singh is an engineer and a technologist with a diverse background spanning software, hardware, aerospace, defense, and cybersecurity. As CTO at Uvation, he leverages his extensive experience to lead the company’s technological innovation and development.

AI systems are inherently data-intensive, often requiring vast quantities of personal data to learn and operate effectively. The GDPR (General Data Protection Regulation), by contrast, imposes strict rules on how personal data is collected, used, and stored, and gives individuals significant rights over their information. This creates a “tightrope” situation: AI’s need for data pulls against GDPR’s mandates for transparency, data minimisation, purpose limitation, and the protection of individual privacy rights. The challenge lies in harnessing AI’s power without infringing on those fundamental rights.
Key challenges include “explainability and transparency”: many AI systems operate as “black boxes”, making it difficult to explain their complex decision-making, which clashes with GDPR’s requirement for clear explanations of personal data usage and automated decisions. “Data minimisation and purpose limitation” are also strained, because AI often trains on massive datasets that may include data collected for unrelated purposes, which is hard to justify under GDPR’s strict standards. “Accuracy and bias” are further concerns: AI learns from historical data, so biased or flawed input can lead to unfair or inaccurate outputs, violating GDPR’s demand for accurate data.
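To make the explainability challenge concrete, here is a minimal, model-agnostic sketch of permutation importance, one common technique for approximating which inputs drive a black-box model’s predictions. The prediction function, data, and scoring are illustrative assumptions, not a reference to any particular library’s API:

```python
import numpy as np

# Minimal sketch of permutation importance. `predict` stands in for any
# black-box model's prediction function; X (features) and y (labels) are
# hypothetical NumPy arrays.

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Accuracy drop when a feature is shuffled: a larger drop means the
    model leans on that feature more heavily."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break the link between feature j and y
            drops.append(baseline - np.mean(predict(X_perm) == y))
        importances[j] = np.mean(drops)
    return importances
```

An explanation built this way does not open the black box, but it gives a defensible, quantitative answer to “which factors mattered?”, which is often what an access or explanation request needs in practice.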
The GDPR grants individuals several crucial rights over AI-driven processing. The “Right to Access” (Article 15) allows individuals to know what personal data is being processed by AI and to understand the logic behind automated decisions affecting them. The “Right to Rectification” (Article 16) and the “Right to Erasure” (Article 17) enable individuals to demand correction or deletion of their data, though honouring these requests is technically challenging with complex AI models and vast datasets. “Rights Related to Automated Decision-Making” (Article 22) generally prohibit purely AI-driven decisions with significant legal or similar effects unless safeguards such as human review, contestation, or explicit consent are in place. Lastly, the “Right to Object” (Article 21) allows individuals to oppose data processing based on “legitimate interests”, especially AI-driven direct marketing.
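To illustrate why erasure is harder than deleting a row, the following hypothetical sketch of an Article 17 request handler removes the live record but also flags the model for retraining, since a trained model may have effectively memorised the subject’s data. All class and field names here are invented for illustration:

```python
from datetime import datetime, timezone

# Hypothetical Article 17 (right to erasure) workflow. A real system would
# also need to purge backups, downstream copies, and cached features.

class ErasureHandler:
    def __init__(self):
        self.records = {}          # subject_id -> personal data
        self.erasure_log = []      # auditable trail of handled requests
        self.retrain_required = False

    def erase(self, subject_id: str) -> None:
        self.records.pop(subject_id, None)  # remove the live record
        self.retrain_required = True        # the model itself may retain traces
        self.erasure_log.append({
            "subject": subject_id,
            "erased_at": datetime.now(timezone.utc).isoformat(),
        })
```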
Organisations must embed privacy principles from the outset, a concept known as “Privacy by Design and by Default” (Article 25). This means designing systems to minimise data use and prioritise privacy by default, using techniques such as synthetic data, federated learning, and anonymisation. It is also crucial to conduct thorough Data Protection Impact Assessments (DPIAs) (Article 35) for high-risk AI processing, identifying and mitigating risks early. Furthermore, organisations must establish a robust lawful basis (Article 6) for processing personal data with AI, carefully choosing and documenting a valid legal reason, such as legitimate interests, supported by a thorough assessment.
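As a small, concrete instance of privacy by design, the hypothetical intake step below applies data minimisation and pseudonymisation before any AI processing: it keeps only the fields needed for the stated purpose and replaces the direct identifier with a keyed hash. Note that under GDPR a keyed hash is pseudonymisation, not anonymisation, because the key holder can re-identify individuals; the field names and key handling are assumptions:

```python
import hashlib
import hmac

# Purpose-limited allow-list: only these fields are needed for the stated
# processing purpose (hypothetical field names).
NEEDED_FIELDS = {"age_band", "region"}

# In practice this key would live in a secrets vault and be rotated.
SECRET_KEY = b"example-key-do-not-hardcode"

def minimise(record: dict) -> dict:
    """Drop unneeded fields and replace the identifier with a pseudonym."""
    pseudonym = hmac.new(SECRET_KEY, record["user_id"].encode(),
                         hashlib.sha256).hexdigest()
    return {"pseudonym": pseudonym,
            **{k: v for k, v in record.items() if k in NEEDED_FIELDS}}
```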
To ensure meaningful transparency, organisations should clearly inform individuals how their data is used in AI systems through simple privacy notices that explain the AI’s role, logic, and potential consequences, and should develop methods to provide understandable explanations for specific AI decisions upon request. For bias mitigation and fairness, it is essential to use diverse training data, test the AI for fairness both before and after deployment, and continuously monitor outputs for discriminatory patterns. Implementing human oversight (Article 22) is also vital, ensuring qualified staff can review, intervene in, and challenge significant automated decisions.
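One simple way to test an AI system for fairness is a demographic parity check. The sketch below computes selection rates per group and their ratio (the disparate impact ratio), which auditors often compare against the informal “80% rule”; the predictions, group labels, and threshold here are illustrative, and real audits combine several complementary metrics:

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Rate of favourable outcomes (1s) within each group."""
    return {g: float(np.mean(y_pred[groups == g])) for g in np.unique(groups)}

def disparate_impact_ratio(y_pred, groups):
    """Ratio of the lowest to the highest group selection rate; 1.0 = parity."""
    rates = selection_rates(np.asarray(y_pred), np.asarray(groups))
    return min(rates.values()) / max(rates.values())

y_pred = np.array([1, 1, 1, 0, 1, 1, 0, 0])   # hypothetical model decisions
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(disparate_impact_ratio(y_pred, groups))  # ~0.67, below the "80% rule"
```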
The EU AI Act, now being phased in, adopts a risk-based approach, categorising AI systems by risk level, imposing strict rules on “high-risk” systems, and banning “unacceptable risk” AI. Crucially, the AI Act complements GDPR but does not replace it. GDPR remains the foundational law for personal data protection, governing how data is used, while the AI Act adds specific requirements for the safety, transparency, and governance of AI systems themselves. In this division of labour, GDPR covers data use and the AI Act covers system behaviour, creating a comprehensive regulatory framework.
Privacy-Enhancing Technologies (PETs) offer promising solutions. “Federated learning” trains AI models on decentralised data across devices without centralising raw information, enhancing privacy. “Homomorphic encryption” enables computations on encrypted data, keeping it secure throughout processing. “Differential privacy” adds statistical noise to datasets or query results, protecting individual identities while still allowing useful analysis. These technologies are key to bridging the gap between AI’s data requirements and GDPR’s strict privacy protections.
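Of these, differential privacy is the easiest to demonstrate in a few lines. The sketch below implements the classic Laplace mechanism for a count query, adding noise scaled to sensitivity/epsilon so that any single individual’s presence barely changes the answer’s distribution; the epsilon value is an illustrative assumption, not a recommendation:

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0, seed=None):
    """Differentially private count via the Laplace mechanism."""
    rng = np.random.default_rng(seed)
    true_count = sum(1 for v in values if predicate(v))
    sensitivity = 1.0  # adding/removing one person changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy count near 3
```

A smaller epsilon means more noise and stronger privacy; choosing it is as much a policy decision as a technical one.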
GDPR compliance in AI is a legal requirement, but achieving it brings benefits that go well beyond avoiding penalties. By embedding privacy by design, conducting rigorous DPIAs, prioritising transparency, and actively mitigating bias, organisations can “innovate responsibly”. Building trustworthy AI is not only about respecting fundamental rights but also about “earning public trust”. When companies prioritise ethical and lawful data use, they create sustainable innovation that benefits everyone, leading to more reliable and widely accepted AI technologies in society.
