Transparency, automated decision-making processes and personal profiling

Manuela Battaglini and Steen Rasmussen

Journal of Data Protection & Privacy, 2019, vol. 2, issue 4, 331-349

Abstract: Automated decision-making and profiling techniques provide tremendous opportunities to companies and organizations; however, they can also be harmful to individuals, because current laws and their interpretations give data subjects sufficient control neither over assessments made by automated decision-making processes nor over how the resulting profiles are used. Initially, we briefly discuss how recent technological innovations led to big data analytics, which, through machine learning algorithms, can extract the behaviours, preferences and feelings of individuals. This automatically generated knowledge can both form the basis for effective business decisions and result in discriminatory and biased perceptions of individuals’ lives. We next observe how this situation leads to a lack of transparency in automated decision-making and profiling, and discuss the applicable legal framework. The concept of personal data is crucial here, as the Article 29 Working Party and the European Court of Justice disagree on whether artificial intelligence (AI)-generated profiles and assessments qualify as personal data. Whether individuals have the right to be notified of inferred data (Articles 13–14 GDPR) or the right of access to it (Article 15 GDPR) depends on that classification. The reality is that data protection law does not protect data subjects from the assessments that companies make through big data and machine learning algorithms: users lose control over their personal data and have no mechanism to protect themselves from this profiling, owing to trade secrets and intellectual property rights. Finally, we discuss four possible solutions to the lack of transparency in automated inferences. We explore the impact of a variety of approaches, ranging from the use of open source algorithms to collecting only anonymous data, and we show how these approaches, to varying degrees, protect individuals as well as let them control their personal data. On this basis, we conclude by outlining the requirements for a desirable governance model for our critical digital infrastructures.

Keywords: machine learning; transparency; GDPR; data ethics; open source; digital infrastructure
JEL-codes: K2
Date: 2019

Downloads: (external link)
https://hstalks.com/article/1079/download/ (application/pdf)
https://hstalks.com/article/1079/ (text/html)
Requires a paid subscription for full access.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:aza:jdpp00:y:2019:v:2:i:4:p:331-349

More articles in Journal of Data Protection & Privacy from Henry Stewart Publications
Bibliographic data for series maintained by Henry Stewart Talks.

Handle: RePEc:aza:jdpp00:y:2019:v:2:i:4:p:331-349