
Is AI ignoring consent and harming users?

By Aayush Agarwal, Darshil Shah and Pavan Mamidi

Artificial intelligence (AI) is reshaping our digital world, unlocking opportunities while sparking ethical debates. As firms collect data under the guise of “informed” consent, they make lucrative inferences about our attitudes and behaviors, often without our knowledge. This information asymmetry between users and companies raises questions about the ethics of data hoarding. Moreover, deliberate design elements push us to share more than we realize, leaving us vulnerable to exploitation. The complexity of these issues demands a closer look at the balance between innovation and ethical responsibility.

Is consent truly “informed” if users do not know the scope of inferences firms make about them? In AI and digital markets, informed consent is a prerequisite of the exchange between users and firms: firms receive user data in return for a digital service that is often free, resulting in a non-price transaction with data as the currency. Two aspects of legal consent are specific to digital markets: (a) whether users are adequately informed about how much data firms will collect (if awareness is low, the contract is flawed, and the party that extracts more value from it exploits the other); and (b) whether users understand the breadth and depth of the inferences digital firms draw from their data. On (b), if users are unaware that their data will be used for purposes beyond the firm’s immediate service, perhaps to gain a competitive advantage in adjacent markets, serious consumer welfare and competition concerns arise. For example, Netflix leveraged users’ viewing data to guide the development of Netflix-produced content. This practice challenges the fundamental ethics of data use, as users inadvertently contribute to other markets.

Why do people who are concerned about the privacy of their data do so little to protect it? Surveys show that people do not fully understand the basic concepts explained in privacy policies. Moreover, they report feeling pressured by online platforms to accept these policies (Bashir et al., 2015). Users are not sufficiently informed about what they are consenting to when using digital services; this runs counter to key international regulations that call on digital companies to make it easier for users to make informed decisions. As of 2021, 137 out of 194 countries had implemented data protection and privacy legislation (UNCTAD, 2021). By creating information asymmetry and undermining informed consent, digital firms may be failing to comply with many of these countries’ regulations.

The General Data Protection Regulation (GDPR) defines consent as “freely given”, distinguishing it from any form of coerced consent. Yet digital platforms like Instagram engineer user choices and subtly steer interactions toward profit-oriented goals. This persistent prodding compromises user autonomy.

Digital platforms use complex design elements to create the illusion that users have control over their data while pushing them into choices that may not align with their best interests. Facebook and Instagram, for example, offer users a variety of privacy settings and controls, giving them the perception of control over their data. However, these settings can be complex and difficult to navigate, so users default to the platform’s pre-selected options, which favor data collection and sharing. Moreover, platforms use behavioral science techniques to steer users toward actions that benefit the platform’s profits. One such technique is confirmshaming, in which users are subtly coerced into accepting data collection policies by framing privacy-conscious choices as foolish or socially undesirable.

Overcoming these ethical complexities requires a pivot not only toward prioritizing consumer well-being but also toward redefining it. Protecting user welfare should mean securing consent and ensuring that user data is not used for undisclosed, complex inferences. It also means protecting the integrity of user preferences and preventing persistent nudges toward profit-driven goals that distort genuine user experiences. Policy initiatives such as the European Union’s Artificial Intelligence Act and the GDPR have set laudable benchmarks for protecting privacy and user control over data. The GDPR gives individuals greater control over their personal data, requires explicit consent before that data is collected and used, and imposes strict data security measures. However, the ethical challenges posed by inferred preferences and choice architectures require a broader discussion.

India has taken steps to protect digital consumer welfare through the Digital Personal Data Protection (DPDP) Act, 2023 and the proposed Digital Competition Bill (DCB). The DPDP Act imposes stricter transparency requirements, mandating that consent be free, informed, specific, and unambiguous. Like the GDPR, it seeks to reduce the information asymmetry between users and companies over data usage. The DCB would promote fair competition while addressing monopolistic behavior such as rent-seeking and data hoarding. Both pieces of legislation emphasize the accountability and transparency of digital firms regarding their data practices and competitive conduct. If the DCB becomes law alongside the DPDP Act, it could foster a more ethical AI ecosystem in India by giving users better control over their personal information.

Regulatory bodies such as the Competition Commission of India are crucial in enforcing these laws and holding firms accountable for violations. Protecting digital consumers requires a multifaceted approach covering legal, regulatory and enforcement measures.

Firms whose AI systems use data beyond the scope of explicit user consent must disclose these expanded uses and request fresh permission. At the same time, it is important to create digital frameworks that prioritize user autonomy over commercial gains. Ethical AI development requires user-centered designs that respect users’ original goals, fostering genuine interactions free of compulsive nudges toward profit-driven ends.

The authors are senior associate and laboratory manager, research fellow, and director, respectively, at the Centre for Social and Behaviour Change at Ashoka University.

Disclaimer: The views expressed are personal and do not reflect the official position or policy of FinancialExpress.com. Unauthorized reproduction of this content is prohibited.