For immediate release: November 15, 2022

978-852-6457

This is the comment that Fight for the Future submitted to the Federal Trade Commission on November 15, 2022 in response to its Advance Notice of Proposed Rulemaking on Commercial Surveillance and Data Security (Docket FTC-2022-0053-0001).

Lina Khan, Chair, Federal Trade Commission
Rebecca Kelly Slaughter, Commissioner
Christine S. Wilson, Commissioner
Alvaro Bedoya, Commissioner

Federal Trade Commission
600 Pennsylvania Ave., N.W.
Washington, D.C. 20580

Dear Chair Khan and Commissioners, 

We are writing to submit a comment in response to this Advance Notice of Proposed Rulemaking regarding Commercial Surveillance and Data Security, and to urge you to address the unregulated collection, sharing, and retention of user data that has led to discrimination, privacy violations, and data breaches.

Harms and Abuses

Companies gather data by routinely asking users to share personal information, details about their use of a given technology, and other analytics. Users may be required to agree to share information before being granted access to an application, often via agreements written in legalese that people cannot reasonably be expected to fully understand. Other times, companies frame these requests as ‘necessary’ to improve the product or for diagnostic purposes. Research has shown that this seemingly simple request can open the door for corporations to access sensitive information, such as IP addresses and the user’s physical location. Even more concerning, some companies, including Microsoft, Apple, and Google, have built their products to continue accessing data even after the user has opted out of sharing it.1 And where a company benefits from strong network effects or a consumer has no reasonable alternative, users have little to no choice but to accept provisions that put their privacy and personal lives at risk.

Biometric surveillance is another particularly dangerous method of both data collection and automated decision-making. Biometric data comprises the biological and behavioral characteristics, such as facial images and iris scans, that are unique to each human being. Biometric surveillance technologies, such as facial or gait recognition, collect biometric data and automate the identification of individuals based on those unique characteristics. The nature of this technology and the data it amasses enables the surveillance of individuals using billions of biometric data points held in widely available corporate databases. The individuals and companies accessing that data are able to use it however they see fit, including but not limited to landlords tracking their tenants, employers policing their employees, stores monitoring shoppers, vigilantes targeting abortion patients, abusers stalking their victims, and wealthy individuals surveilling their critics. Although the automated decisions made with biometric data are far from accurate today, the more effective biometric surveillance becomes at collecting data, identifying individuals, and tracking their movements, the greater the risk of abuse and privacy violations.2

Once corporations have collected user data, there are no substantial federal regulations governing how that data can and cannot be used. As a result, many companies sell data about their users to other entities. Third-party data brokers amass such data points from many sources and build consumer profiles, which are packaged and sold to still more companies or individuals.3 According to the Electronic Frontier Foundation, “Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as ‘anonymous’ location data. Data points like one’s home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations.”4 The sale of personal, sensitive information that can easily be de-anonymized creates significant privacy and security risks for users and their families.

These issues are exacerbated by the lax data security policies and practices most companies have in place. Companies have shown an inability to safeguard user information from unfair disclosure, unauthorized access, accidental loss, modification, manipulation, and corruption. The shortcomings of a given company’s security measures often become apparent only after a mass security breach or violation, giving users inadequate time and generally no resources to take meaningful precautions. In 2019, Amazon Ring cameras leaked customers’ device locations. Months later, that leak was leveraged to hack a number of devices, giving hackers not only the ability to view and control the cameras in real time but also knowledge of their precise locations (including one particularly terrifying incident in which a man, calling himself Santa, hacked a camera to speak to a little girl in her bedroom).5 In 2016, hackers stole the data of over 57 million driver and rider accounts from Uber, a breach the company concealed for a year.6 It happened again in 2022, revealing Uber’s continued failure to address its weak security measures.7 It is clear that oversight is required to secure user data.

The mass collection, exploitation, and mismanagement of individuals’ sensitive personal data by companies all qualify as unfair and deceptive practices. Unregulated corporate surveillance and data abuses adversely impact people of color, women, members of the LGBTQ+ community, religious minorities, people with disabilities, immigrants, economically disadvantaged people, and other marginalized groups. In many instances, the danger these companies impose on marginalized communities replicates and amplifies existing inequities in society, reflecting historical biases that stem from unrepresentative or incomplete data, as well as flawed information.8,9

The impact of these discriminatory surveillance technologies and data practices is alarming. Google’s ad algorithms have shown higher-paying job listings to male job seekers more often than to their female counterparts.10 Grindr shared users’ HIV status with third parties—including test dates, GPS data, and email addresses—essentially outing HIV-positive users.11 Black and brown people have been falsely arrested and imprisoned based on misidentifications by facial recognition software.12 Studies make clear that these technical inaccuracies are systemic, showing higher misclassification rates based on skin color and gender, in addition to other biases built into training datasets.13 A study by the National Institute of Standards and Technology (NIST) of 189 commercial facial recognition programs found that these algorithms were significantly more likely to return false positives or false negatives for Black, Asian, and Native American individuals than for white individuals.14 Despite clear and overwhelming evidence of racial bias, companies continue to use, share, and sell racist facial recognition software in settings ranging from law enforcement to test proctoring to face-based payment systems.

The Supreme Court’s decision in Dobbs v. Jackson Women’s Health Organization has led to increased fears about how data can be weaponized against pregnant people and abortion providers. In May 2022, a location data firm was exposed for selling information on patient visits to abortion clinics.15 In June, it was revealed that Facebook collected data on people visiting crisis pregnancy centers.16 Just months after the Dobbs decision, authorities in Nebraska used Facebook Messenger conversations to investigate a possible abortion.17

It is clear that the current tools and systems of surveillance and data collection are engineered only to increase profits for the corporations using them; the best interests and rights of users are not given meaningful consideration. FTC rulemaking must address the complete lifecycle of data—including collection, sharing, and retention—and the subsequent harm these practices pose to the most impacted populations as well as the wider public.

Solutions and Recommendations

The FTC should take the following actions to address the issues raised above:

  1. Mandate data minimization and transparency around data sharing: Companies’ current “accept all” and “agree and continue” consent models do not give users a meaningful way to provide informed consent. Most of the public is unable to read and understand the privacy language used in such agreements. Meanwhile, the data collected does not always stay with the same entity, as companies profit from sharing individuals’ data with third-party data brokers. Most users do not realize how common this practice is, nor the potential impacts of their data changing hands. As data accrues, companies leverage it to predict individual behaviors and infer identities with considerable accuracy, despite frequent claims of anonymity. Such data aggregation gives private entities unwarranted license to craft discriminatory predictions about individuals and groups, which result in biased practices and policies that negatively impact marginalized and vulnerable communities.

    To address issues with data collection, the FTC should impose strict data minimization obligations on companies and ensure that individuals retain maximal control over their own personal data. Companies must also be required to proactively disclose to both users and the general public how data is being used and transferred.
  2. Strengthen data privacy and security requirements: After a private entity has collected consumer data, the onus must be on that company to ensure the security of that data. Frequent cyberattacks that expose people’s personal, biometric, financial, and other data are a regular reminder that the information companies collect is a target for hackers, data thieves, and other bad actors. To fully protect consumer data, the FTC must require all internet-enabled consumer products to employ end-to-end encryption. The FTC should also require companies to regularly perform data privacy and civil rights impact audits. Such audits must be rigorous enough to ensure accountability for data security and transparency, both around breaches and around the company’s own practices.
  3. Limit the use of facial recognition and other automated decision-making systems: Facial recognition, biometric technologies, and other automated decision-making systems have proven to be unreliable, unjust, and a threat to basic rights and safety. While they are often touted as a means to revolutionize identification or facilitate crime detection, they have been linked with misidentification rates as high as 98%, leading to grave consequences including harassment, wrongful imprisonment, and deportation.

    The FTC’s rulemaking can address this by banning the use of facial recognition technology by companies and by creating standards and rules for the use of other automated decision-making systems.

We recognize the FTC’s full authority and capacity to ensure much-needed change on these issues. We urge the FTC to address the clear dangers and atrocious harms of surveillance data abuse immediately. We also believe that the swift and ever-changing nature of these technologies and practices will require the FTC to take great care in issuing a principled and far-reaching rulemaking that will shield users not only from the injuries they are experiencing today, but also from those that are likely in the future.

Signed,

Fight for the Future
www.fightforthefuture.org

Notes:

  1. Should You Share Your Data With Tech Companies? https://www.consumerreports.org/privacy/should-you-share-usage-analytics-data-with-tech-companies-a3858502782/
  2. Electronic Frontier Foundation | Biometrics https://www.eff.org/issues/biometrics
  3. What Does Big Tech Actually Do With Your Data? https://www.forbes.com/sites/forbestechcouncil/2022/02/16/what-does-big-tech-actually-do-with-your-data/?sh=1f6fa323515f
  4. Gennie Gebhart and Bennett Cyphers, Data Brokers Are the Problem, Electronic Frontier Foundation, July 2021, https://www.eff.org/deeplinks/2021/07/data-brokers-are-problem
  5. Man hacks Ring camera in 8-year-old girl’s bedroom, taunts her: ‘I’m Santa Claus’ https://www.nbcnews.com/news/us-news/man-hacks-ring-camera-8-year-old-girl-s-bedroom-n1100586
  6. Uber Hid 2016 Breach, Paying Hackers to Delete Stolen Data https://www.nytimes.com/2017/11/21/technology/uber-hack.html
  7. Uber Investigating Breach of Its Computer Systems https://www.nytimes.com/2022/09/15/technology/uber-hacking-breach.html
  8. Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/#footnote-6 
  9. Examining the intersection of data privacy and civil rights https://www.brookings.edu/blog/techtank/2022/07/18/examining-the-intersection-of-data-privacy-and-civil-rights/
  10. Google algorithms show higher paying jobs to more men than women https://www.digitaljournal.com/social-media/google-algorithms-show-higher-paying-jobs-to-more-men-than-women/article/437802#ixzz7hWOFpJMw
  11. Grindr Admits It Shared HIV Status Of Users https://www.npr.org/sections/thetwo-way/2018/04/03/599069424/grindr-admits-it-shared-hiv-status-of-users
  12. Another Arrest, and Jail Time, Due to a Bad Facial Recognition Match https://www.nytimes.com/2020/12/29/technology/facial-recognition-misidentify-jail.html
  13. Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Conference on Fairness, Accountability and Transparency, PMLR, 2018, https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
  14. NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software; Many Facial-Recognition Systems Are Biased, Says U.S. Study https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html; Federal Study Confirms Racial Bias of Many Facial-Recognition Systems, Casts Doubt on Their Expanding Use https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/
  15. Data Broker Is Selling Location Data of People Who Visit Abortion Clinics https://www.vice.com/en/article/m7vzjb/location-data-abortion-clinics-safegraph-planned-parenthood
  16. Facebook Reportedly Collects Data About Abortion Seekers https://www.cnet.com/news/social-media/facebook-reportedly-collects-data-about-abortion-seekers/
  17. Nebraska cops used Facebook messages to investigate an alleged illegal abortion https://www.npr.org/2022/08/12/1117092169/nebraska-cops-used-facebook-messages-to-investigate-an-alleged-illegal-abortion