University of Chester
School of Computer and Engineering Science
Chester, United Kingdom
Assessment Number: J

Abstract—In the age of artificial intelligence (AI), the integration of AI technologies into everyday life has raised significant concerns regarding data privacy. While tools like ChatGPT have transformed both personal and professional routines, their underlying mechanisms often involve extensive data collection, raising ethical and practical issues. This paper critically examines how evolving AI technologies have disrupted the relationship between individuals and their data privacy, analyses the structural limitations of existing laws and regulations in addressing AI-related privacy concerns, and explores new, more responsive directions for addressing these challenges in the context of AI's rise.

Keywords—Artificial Intelligence (AI), data privacy, personal data, data collection, user consent, deepfake, data protection, General Data Protection Regulation (GDPR), California Consumer Privacy Act (CCPA), controllers, processors, privacy harms, Data Protection Officer (DPO), International Association of Privacy Professionals (IAPP)

I. INTRODUCTION

Artificial intelligence has reshaped societies by offering unprecedented capabilities in problem solving, decision making, creativity and autonomy. However, these breakthroughs come with significant trade-offs, particularly in the realm of data privacy. From ChatGPT assisting users with personal queries to facial recognition technologies deployed in public spaces, AI tools often rely on vast quantities of consumers' personal information. This reliance raises concerns about the ethical management of personal information, the potential for misuse, and the adequacy of existing regulatory frameworks.

To start, the General Data Protection Regulation (GDPR) defines personal data as "any information which are related to an identified or identifiable natural person" (European Union, 2016, Art. 4, para. 1). Data privacy, in turn, can be defined as the principle governing who may collect and process personal information, and the degree of control individuals have over that access, including the ability to opt out of data collection (King & Meinhardt, 2024). This concept is broader than personal data, as it covers any information that, if shared without permission, could violate our right to privacy and personal freedom (King & Meinhardt, 2024).

This review focuses on AI use by companies and non-governmental actors targeting consumers; government and political surveillance lie beyond its scope. As AI continues to erode the principle of data privacy, laws and regulations struggle to keep pace with its rapid advancements. This review therefore calls for a new approach, emphasising adaptive solutions to tackle the evolving challenges of data privacy.

II. AI ERODES THE PRINCIPLE OF DATA PRIVACY

A. AI Erodes the Principle of Data Privacy Through Service Providers

AI tools undermine the core principle of data privacy. King and Meinhardt (2024) state that the use of AI tools "normalized the idea that individuals should have to opt out, rather than choose to opt in" to the collection of their data (p. 24). This normalisation of data collection often occurs without explicit user consent or clear transparency. Popov (2023) reported that Zoom faced backlash over a March 2023 update to its terms that allowed AI training on customer data without explicit consent, prompting the company to clarify its policy and disable AI features by default.
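The opt-in/opt-out distinction is, at bottom, a question of defaults. The following minimal Python sketch (with hypothetical setting names, not drawn from any provider cited here) illustrates how a provider's chosen default decides whether data collection proceeds for a user who never reviews their settings:

def may_collect_data(user_settings: dict, default_opt_in: bool) -> bool:
    """Return True if the provider may collect this user's data.

    If the user never set a preference, the provider falls back to its
    default: under an opt-out regime the default is True, so silence
    counts as consent; under an opt-in regime, silence blocks collection.
    """
    return user_settings.get("allow_data_collection", default_opt_in)

# Opt-out regime (the norm King and Meinhardt criticise): a user who
# never opened the settings page is collected from by default.
print(may_collect_data({}, default_opt_in=True))   # True

# Opt-in regime: the same silent user is left alone until they agree.
print(may_collect_data({}, default_opt_in=False))  # False

Under the first default, inaction is treated as consent; under the second, consent must be actively given, which is precisely the norm the authors argue has been reversed.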
It could be argued that consumers have normalised the erosion of their data privacy by unquestioningly accepting, without review, the privacy policies and terms of use presented by AI service providers. This demonstrates how easily their consent can be extracted. AI systems collect vast amounts of personal data and may lack transparency about how it is used. An example of this is the Nest thermostat, a home appliance that gathers extensive personal data from households to optimise energy use, among other functions. Since Nest was acquired by Google (a subsidiary of Alphabet), its data can be uploaded to Google's servers for predictive analytics or sold to third parties. This raises concerns because, although the thermostat ships with its own privacy policy and terms of service, users are not fully informed of how their data is managed behind the scenes (Ijuo, 2024). Thus, users are not truly consenting to how companies process and manage their data. Overall, the AI-driven ecosystem undermines the principle of data privacy by normalising data collection and bypassing core values such as transparency and explicit consent.

B. AI Renders the Principle of Data Privacy Void Through Third Parties

AI technologies have not only eroded the principle of data privacy but have also rendered it obsolete. This is particularly evident in the use of deepfakes, which are "images, audio recordings or videos that have been manipulated to yield fabricated images and sounds that appear to be real" (Sierra, 2020, para. 1). As an example, the application DeepNude was able to generate fake images of naked women by superimposing the face of an existing woman onto another image (Leskin, 2019).
These technologies produce deceptive content that can severely harm the reputation of individuals. The original principle of data privacy, based on transparency, consent and control, is undermined in this new environment, as individuals no longer control how their data is used: fake content can be created about them without their explicit permission. According to researchers at Harmonic Security, who analysed thousands of prompts submitted by users to generative AI platforms, "customer data holds the biggest share of sensitive data prompts, at 45.77%" (Beek, 2025, para. 5). This arguably shows that data privacy itself is being rendered void in the context of AI technologies, since users have no control over the use of their personal data by third parties.
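To illustrate the kind of audit Harmonic Security describes, the sketch below tallies sensitive-data categories across a batch of prompts. The categories, patterns and sample prompts are illustrative assumptions, not Harmonic Security's actual methodology, which would rely on far more robust classification:

import re
from collections import Counter

# Hypothetical detection patterns; real audits would use trained
# classifiers rather than these illustrative keyword regexes.
PATTERNS = {
    "customer data": re.compile(r"\b(customer|client|account)\b", re.I),
    "employee data": re.compile(r"\b(employee|payroll|HR)\b", re.I),
    "credentials": re.compile(r"\b(password|api[_ ]?key|token)\b", re.I),
}

def categorise_prompts(prompts):
    """Count how many prompts contain each sensitive-data category."""
    counts = Counter()
    for prompt in prompts:
        for category, pattern in PATTERNS.items():
            if pattern.search(prompt):
                counts[category] += 1
    return counts

prompts = [
    "Summarise this customer complaint about their account",
    "Draft an HR email announcing payroll changes",
    "Debug this script; the api_key is abc123",
]
counts = categorise_prompts(prompts)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.2f}% of sensitive hits")

Even this toy version makes the privacy problem visible: the sensitive material is already inside the prompts before any policy can intervene, which is why users retain so little control once their data reaches a third party.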
III. STRUCTURAL LIMITATIONS AND LEGAL CHALLENGES IN ADDRESSING DATA PRIVACY ISSUES CAUSED BY AI

A. Structural Limitations in Data Privacy Regulations Amidst AI Advancement

The rapid advancement of AI has exposed significant structural weaknesses in global data privacy regulations and laws. King and Meinhardt (2024) highlight that data protection and privacy regulations vary across regions. For instance, the European Union's GDPR is often regarded as a comprehensive and robust regulatory framework, but its jurisdiction is confined to EU countries. In contrast, the United States operates with a patchwork of state laws, such as the California Consumer Privacy Act (CCPA). This geographical disparity highlights a critical challenge: the lack of global harmonisation in data privacy protections, even though data flows across borders. As a direct consequence, these regional differences create structural limitations in safeguarding privacy rights.

Furthermore, assigning responsibility in cases of privacy violations becomes more complex with AI systems. For example, the GDPR distinguishes between "controllers", the entities that determine the purposes and means of processing personal data, and "processors", the entities that process data on behalf of the controllers (European Union, 2016). This distinction is designed to clarify accountability. However, with the increasing use of AI in data processing and analysis, these roles can become blurred. In the event of a data security breach, controllers, such as organisations that collect data from consumers, may attempt to deflect blame onto processors, such as cloud service providers or AI companies. Conversely, processors could argue that unclear or insufficient instructions from controllers on securing the data were at fault. These ambiguities raise the question of whether current regulations are adapted to the new realities introduced by AI technologies. In conclusion, the structural limitations of existing laws and regulatory frameworks make them insufficient to address the dynamic and multifaceted data privacy issues associated with AI technologies.

B. Challenges in Legal Systems to Address AI-Induced Privacy Harms

Lawmakers and legal systems are not yet mature enough to address the potential data privacy harms caused by AI tools. While some laws have significant potential, their application often falls short owing to a lack of expertise and creativity among legal actors. Solove's (2023) research on court cases involving data privacy issues led him to observe that "lawyers don't see the potential in the law" (54:54) to identify privacy harms and enforce consumers' data privacy rights. Moreover, Solove (2023) adds that "there is also a failure of imagination of judges and legislatures", who do not dedicate enough time to "learning all the different aspects of the law" (55:24). It can be inferred that courts struggle to recognise harms caused by AI tools. Citron and Solove (2021) state that "courts struggle with privacy harms because they often involve future uses of personal data that vary widely" (p. 793). This can lead to cases in which plaintiffs are unable to enforce their data privacy rights even when the law supports them. Beyond the unique challenges posed by AI itself, lawyers' and judges' lack of technical knowledge and creativity leads them to overlook potential privacy harms caused by AI (Jarovsky & Solove, 2023). Overall, legal actors fail to keep pace with the complexities of AI: they do not leverage the potential of existing laws, and they fail to address data privacy issues and recognise potential harms.

IV. ADAPTIVE SOLUTIONS TO TACKLE THE EVOLVING CHALLENGES OF DATA PRIVACY

A. Proactive Measures for AI Companies to Address Privacy Challenges

The evolution of privacy infrastructure over the years, alongside advancements in AI, offers a solid foundation for implementing proactive measures to tackle the evolving challenges of data privacy. Unlike today, in the late 1990s and early 2000s there were limited resources, few scholars, and hardly any events or guidance on privacy (Jarovsky & Solove, 2023). Given how much this has changed, AI companies must adopt proactive measures to make privacy a central part of their frameworks. The International Association of Privacy Professionals (IAPP) highlights the need for AI companies to work with data privacy professionals, such as data protection officers (DPOs), to ensure their privacy policies address data privacy challenges and anticipate future privacy harms (IAPP & FTI Consulting, 2024). It follows that AI companies should establish dedicated teams of professionals focused on the challenges created by AI's dynamics. With such teams in place, AI companies would be better equipped to offer legal guidance and to anticipate the privacy issues their products may cause.

B. AI Regulations on Data Privacy Policies Need Decentralisation and Creativity

Regulating AI and safeguarding data privacy rights require a shift from traditional, centralised, top-down regulatory frameworks to decentralised, flexible structures. As AI evolves rapidly, King and Meinhardt (2024) point out that current regulatory frameworks do not "sufficiently focus on broader data governance measures needed to regulate the data used for AI development" (p. 27). In response, Solove (2024) advocates a bottom-up approach that would implement multiple independent legal frameworks and privacy protection policies based on real-world concerns, rather than consolidating power in a single agency, which could narrow its focus. Solove's idea is supported by Kirchschläger (2024), who emphasises the need for a broad range of stakeholders to participate in data privacy policymaking, including independent experts, academics and civil society organisations. Kirchschläger argues that a centralised approach would allow Big Tech companies to dominate the policymaking process, contrary to the interests and well-being of society, as these companies are driven by profit motives. Finally, it can be concluded that, to address the risks and potential harms induced by fast-evolving AI technologies, regulation must embrace decentralised, flexible and creative approaches that involve a broad range of stakeholders.