The problem with too much data protection [and the fallacy of individual consent]


This is a republication of the article The problem with too much data protection, with the title above.


Time
Orly Lobel
October 27, 2022





The Federal Trade Commission and other key regulatory agencies aim to strengthen protections against the collection of personal information. 

Data minimization is the default set in Europe by the GDPR, and a new bill before the US Congress, the American Data Privacy and Protection Act, similarly seeks to promote the primacy of privacy.


Privacy is important when it protects people from harmful surveillance and public disclosure of personal information. 


But privacy is just one of the many values of our democratic society, and prohibiting safe and fair data collection can conflict with other equally valuable social goals. 

While we have always faced difficult choices between competing values (security, health, access, freedom of expression, and equality), advances in technology are increasingly enabling data to be anonymized and secured so that individual interests can be balanced with the common good. 

Privileging privacy obscures the many ways in which data is a public good, rather than overtly acknowledging the need to reconcile privacy with a more comprehensive and representative collection of data. 

Too much privacy, just like too little privacy, can undermine our ability to use information to drive progressive change.



We rightly fear surveillance when it aims to use our personal data in harmful ways. 


But a default assumption that data collection is harmful is simply misguided. 

We should focus on regulating abuse rather than banning collection. 




Take, for example, perhaps the most controversial technology that privacy advocates are eager to ban: facial recognition. 


Twenty US cities and counties have enacted bans on government use of facial recognition. 

In 2019, California enacted a three-year moratorium on the use of facial recognition technology in police body cameras. 

The two key concerns about facial recognition technology are its shortcomings in recognizing the faces of members of minority groups, which can lead to false alarms and wrongful arrests, and its role in expanding mass surveillance. 

But current proposals for bluntly banning the technology will stall improvements in its accuracy and hamper its safe integration to the detriment of vulnerable populations.







These outright bans ignore that surveillance cameras can help protect victims of domestic violence from trespassers, help women build safety networks when traveling alone, and reduce instances of abuse of power by law enforcement. 


Facial recognition is increasingly aiding in the fight against human trafficking and locating missing persons — and missing children in particular — when the technology is coupled with AI, which creates maturation images to bridge the missing years. 




There are also many useful applications of facial recognition for people with disabilities, such as supporting people with impaired vision and helping to diagnose rare genetic disorders. 

As class actions, ACLU lawsuits, and proposed reforms mount, we need balanced policies that allow facial recognition under safe conditions and restrictions.



We also need to recognize that privacy can be at odds with better, more accurate, and less biased automation. 


In today’s techlash, where algorithms are condemned as posing a high risk of bias and exclusion, the tension between the protection of personal data and the robustness of datasets must be acknowledged. 

For an algorithm to be more accurate and less biased, it needs training data that reflects the full range of demographics. 


Take health and medicine for example. 


Historically, clinical trials and health data collection have favored male and white patients. 

The irony of data protection regulation as a solution to exclusion and exploitation is that it fails to address the source of many biases: partial and biased data collection. 

Advances in synthetic data technology, which allow systems to artificially generate the data that the algorithm must train on, can help ease some of these tensions between data collection and privacy.

Think again about facial recognition: we need more representative training data to ensure the technology becomes equally accurate for all identities. 

And yet we must be conscious and realistic about the need for real data for public and private innovation.



An overemphasis on privacy can hamper advances in scientific research, medicine, and public health. 


Big data collected and analyzed by artificial intelligence enables earlier and more accurate diagnosis, advanced imaging, better access to and reduced costs of quality care, and the discovery of new connections between data and diseases that can lead to novel treatments and cures. 

Simply put, if we want to support medical advances, we need more data samples from different populations. 

AI advances in radiology have not only led to better imaging but also to reduced radiation doses and faster, safer, and cheaper care. The patients who benefit the most are those with the least access to human medical experts.




In its natural state, to paraphrase the tech activists’ slogan “information wants to be free” (and to channel the title of my own book, Talent Wants to Be Free), data wants to be free. 


Unlike finite, tangible resources like water, fuel, land, or fish, data doesn’t run out because it’s used. 

At the same time, the benefit of data comes from its size. 

We can find new proteins for drug development, teach speech-to-text bots to understand myriad accents and dialects, and teach algorithms to validate mammograms or lung X-rays if we can harness the robustness of big data: millions, sometimes billions, of data points. 

During the COVID-19 pandemic, governments are tracking patterns of disease spread and fighting those who provide false information and sell products under fraudulent claims of cures and protections. 

The Human Genome Project is a dazzling, paradigmatic leap in our collective knowledge and health capabilities, made possible by massive data collection. 

But there’s still a lot more health information to collect, and favoring privacy can be bad for your health.







In healthcare, this need for data may be intuitive, but the same is true if we want to understand and address the root causes of other societal ills: pay gaps, discriminatory hiring and promotions, and unfair lending and deposit decisions. 


In my research on gender and race pay gaps, I have shown that broader information on pay is crucial. 


Similarly, freely sharing information online about our work experiences can improve workplaces, while some privacy initiatives can unintentionally backfire and lead to statistical discrimination against more vulnerable populations. 

Empirical studies suggest, for example, that privacy policies banning criminal background checks in hiring may have increased racial discrimination in some cities.




Privacy — and its ubiquitous offshoot, the NDA — also evolved to protect the powerful and wealthy from the public’s right to know. 



A more positive discourse on equality, health, physical integrity, economic rights and self-determination would move us beyond the thorny question of what is and is not privacy. 

As I described in a recent talk about Dobbs v. Jackson Women’s Health Organization, abortion rights are far more than privacy rights; they are health rights, economic rights, equality rights, dignity rights, and human rights.

In most cases, data collection should not be prevented; rather, data should be protected, shared, and used for the benefit of all.



Privacy scholars agree that consent forms — those ubiquitous clickwrap policies — are rarely read or negotiated. 

Research also shows that most consumers are fairly agnostic about privacy settings. 


The behavioral literature calls this the privacy paradox and shows that in practice people are regularly willing to engage in a privacy calculus and sacrifice privacy for perceived benefits. 


Privacy privileging, then, is both over- and under-protective: it neglects a whole range of values and goals that we need to balance, and it fails to offer meaningful reassurance to individuals and communities that have faced an undeniable history of oppression by the state and the privileged elite. 

The dominance of privacy policy can distort nuanced debates about distributive justice and human rights as we continue to build our digital knowledge commons. 

Gathering vital data to address our toughest social problems is an important mandate of democracy.


While staunch privacy advocates emphasize tools like informed consent and opt-out methods, these policies rest on a fallacy of individual consent.




Originally published at https://canadatoday.news on October 27, 2022.


About the author & affiliations


Orly Lobel is an award-winning author and the Warren Distinguished Professor of Law at the University of San Diego. She is the Director of the Program of Employment and Labor Law and a founding faculty member of the Center for Intellectual Property and Markets.
