For immediate release: July 13, 2023


This is the comment that Fight for the Future and the Surveillance Technology Oversight Project (S.T.O.P.) submitted to the Consumer Financial Protection Bureau (CFPB) on July 13, 2023 in response to its Request for Information on Data Brokers and Other Business Practices Involving the Collection and Sale of Consumer Information (CFPB-2023-0020-0001).


The Surveillance Technology Oversight Project (S.T.O.P.) is a community-based civil rights group that advocates and litigates against discriminatory surveillance. Our work highlights the impact of surveillance on Muslims, immigrants, the LGBTQ+ community, Indigenous peoples, and communities of color. S.T.O.P. has significant experience advocating for New Yorkers’ civil rights in opposition to dangerous and ineffective digital technologies. 

Fight for the Future is a queer, women-led digital rights organization dedicated to defending our most basic rights in the digital age and to protecting and expanding the Internet’s transformative power in people’s lives. Fight for the Future battles unprecedented threats to Internet freedom, online privacy, human liberty, and free expression by resisting censorship, advocating for free speech and expression, demanding Big Tech accountability, and promoting antitrust legislation.

In its Request for Information, the Consumer Financial Protection Bureau (the Bureau) asked for accounts of data broker practices and their impacts on the public. We respond to this request using our organizational expertise, rather than presenting a legal case for or against the Bureau’s regulatory authority. We write jointly to relay our experiences with the data broker industry, to urge the Bureau to consider the full range of the industry’s impacts, and to support clarifying regulations that address harms such as discrimination, privacy violations, and data breaches caused by the industry’s unregulated collection, sharing, and retention of user data.

The Data Trade and its Ensuing Harms

(a) The data broker industry and its practices

Data brokers are businesses that specialize in collecting and trading users’ personal data. The Bureau should think of data brokers as entities that employ one or more of the following practices: 

  • Buying or selling data points in exchange for currency;
  • Buying or selling access to data in exchange for currency.

Nearly every entity on the internet engages in these practices, some to a greater degree than others. Crucially, the CFPB should consider distinctions in the nature of the entity and the scope of the data it trades when it develops regulations. For example, the CFPB should consult state privacy laws, including the California Consumer Privacy Act (CCPA) and New York’s proposed Digital Fairness Act. These laws, and others, recognize that states should regulate for-profit entities differently than universities, non-profits, and research institutions because they utilize data in different ways and pose different risks. 

Data brokers collect and trade a range of identifiable user information. This includes but is not limited to how consumers use applications; detailed location histories; demographic information, including membership in legally protected groups, interests, affinities, and associations; and information about finances, property, healthcare, and wealth. 

Brokers source information in at least three ways, often in combination: (1) They collect data from a consumer’s interactions online. This could include, for example, an online or credit card purchase on a social media marketplace, interacting with a social media site, or submitting information in an online form. (2) Brokers may also collect information by licensing it from another broker. For instance, CoreLogic licenses data from Infogroup (now Data Axle). (3) Some brokers scrape public sources for information and aggregate it into their proprietary products. Two examples of entities collecting information in this way are CoreLogic, which amasses real property information from property records, and Clearview AI, which scrapes social media sites and other sources to compile its database of facial images.

Brokers offer their products to a variety of customers, who use that information for a variety of purposes. Equifax, for example, a credit reporting agency, maintains and trades detailed credit histories, employment records, and salary data, which it has collected from a variety of sources. It offers this information to lenders, credit card and insurance companies, and other businesses for marketing purposes. Thomson Reuters, which claims it is not a credit reporting agency, offers comprehensive “cradle-to-grave” dossiers on individuals through its online platform CLEAR, including names, photographs, criminal history, relatives, associates, financial information, and employment information. The company offers its products to both private and public clients.

Once companies have collected user data, there are few substantial regulations governing how that data can and cannot be used. As a result, data brokers have free rein with user data, which creates significant privacy and security concerns for users, their families, and their communities.

(b) The industry’s impact on consumers

The industry’s practices have a significant impact on consumers and the general public whose data is being scraped and traded. The process by which brokers amass data points from many sources and build consumer profiles, which are then packaged and sold to still more companies or individuals, gives consumers little to no choice in how their data is collected, stored, and shared. Consumers cannot reasonably prevent brokers from collecting their data for many reasons. First, individuals are usually unaware that their data has been collected, because data is gathered by capturing information from online transactions, licensing between companies, and scraping public-facing websites. Consumers have no clear way to know when their data is collected, packaged, or traded.

The industry does not offer consumers meaningful notice or choice about whether to allow collection and sharing of their information. While some consumers may voluntarily choose to share their information for convenience, they are usually not offered the option to affirmatively opt in or opt out. When notice is provided, consent is often collected through manipulative methods, like cookie banners that tell users to accept tracking or risk losing access to a website. The mechanisms sites provide for consumers who wish to opt out are often hard to find, unwieldy, and even misleading. At the bottom of a Thomson Reuters web page about CLEAR (only visible after scrolling past two or more pages of text), there is a link in small font that says: “For CA: Do not sell my information.” Beyond its presence in tiny font at the very bottom of this webpage, Thomson Reuters provides no notice to consumers of their right to opt out. Nor does the company enable California consumers to make use of the link easily. This robs consumers of their ability to make informed choices about their finances and purchases.

Although some companies voluntarily limit their collection and use of data to build public goodwill or to simplify their cybersecurity, ESG, or international compliance programs, companies’ ability to collect and use data is generally unregulated in most U.S. industries. Where the U.S. does regulate data, those regulations typically concern the use of data rather than its collection. This means brokers can still collect any information they wish, even if there are some restrictions on using that data. Some U.S. laws give consumers the right to opt out of data use in some circumstances, but this right is difficult to exercise. The sheer number of companies that each consumer interacts with daily makes sending opt-out requests and following up on those requests impossible for most people. Moreover, due to the lack of clear notice, consumers are generally not aware of which companies hold their data. So even consumers who have enough free time to exercise opt-out rights cannot identify and contact every company that could have their information.

This mass collection, exploitation, and mismanagement of individuals’ sensitive personal data adversely impacts people of color, women, members of the LGBTQ+ community, religious minorities, people with disabilities, immigrants, economically disadvantaged people, and other marginalized groups. In many instances, the danger these companies pose to marginalized communities replicates and amplifies existing inequities in society, reflecting historical biases that stem from unrepresentative, incomplete, or otherwise flawed data. Demographic factors further inhibit consumers’ ability to exercise control over their data, making privacy available only to the privileged. These factors include:

  • Being Black, Indigenous, or a person of color: Police are more likely to surveil non-white people and include their information in biased data-driven policing systems. Police frequently contract with data brokers for services that collect additional information about members of these already overpoliced groups. This heightens risks to these communities by increasing demand for their information, making it harder to avoid data collection and use, and exposing them to more drastic harms, including a greater risk of incarceration.
  • Being a lower-income earner: People with lower incomes cannot afford privacy-protecting services like automated or human agents that opt out of data use on consumers’ behalf, virtual private network subscriptions, or more expensive hardware that is not subsidized with ongoing ad revenue. Likewise, those who work more than one job or take care of children have less time to read confusing data collection notices and exercise opt-out rights.

The impacts of these data practices are alarming. These practices are engineered to maximize company profits, and the best interests and rights of users are given no meaningful consideration.

(c) Risks for Consumers

Many data brokers are willing to sell information to anyone who will pay. This is not true of all brokers: recognized credit reporting agencies must comply with the Fair Credit Reporting Act, and other sector-specific privacy laws may restrain the collection, use, or sale of certain types of information. Other brokers choose to sell services only to law enforcement, private investigators, or fraud detection departments. Even so, harms arise regardless of the purchasing entity’s identity. In addition to exposing consumers to price discrimination and other economic harms, the trade in data threatens data subjects’ rights of speech and association, facilitates predictive policing, and, on its own, constitutes a substantial violation of privacy.

i. Financial harms

Retail companies use data to favor some consumers over others, targeting them for price variations. Data may show that people in a given geographic area are generally willing to pay higher prices than those in another. An e-commerce company might therefore increase the price of a given product when IP addresses associated with that region view the product online. Data may also show that a high-net-worth individual is a more valuable customer and select them for significant introductory offers that are not available to lower-income buyers, who are less likely to be repeat purchasers. Income is often correlated with race, and income-based price discrimination can therefore be racially discriminatory. A 2020 study found that ride-hail companies’ data-driven pricing strategies led to higher prices in Black neighborhoods.

Similarly, financial institutions use data to vary interest rates, giving some consumers less favorable loan and credit terms than others. Several years ago, Meta patented a tool that analyzed a user’s network of connections to determine creditworthiness. The tool allowed a lender to examine the credit scores of a loan applicant’s “friends” and factor that information into a credit determination. It is unclear if this product was ever offered to or used by lenders, but its discriminatory potential is obvious. Under this regime, an individual’s creditworthiness would be evaluated at least partly on the basis of that person’s social associations. Due to historical discrimination, de facto segregation, and the biased data driving these processes, Black borrowers, for example, are more likely than white borrowers to be associated with networks of lower net worth and less history with the financial industry. Holding this against Black borrowers would therefore translate directly into discriminatory lending practices.

Data-driven banking can also make it harder for consumers to open new accounts. This is particularly true for people without a consistent address and/or identification documents, who are often among the most marginalized Americans. It is common for the financial services industry to use information purchased from data brokers for identity verification purposes, but doing so is unreliable. If a consumer changes their address or job frequently, there will be mismatches between the information they provide to the financial institution and the information the financial institution buys from a broker. When these mismatches occur, the financial institution often claims it cannot verify the individual’s identity and blocks them from opening an account. Compounding the problem, as the Bureau knows, consumers who experience housing or job instability are more likely to be underbanked and non-white.

The data trade can also affect job prospects. Consumer reporting for employment is regulated by the Fair Credit Reporting Act, but formal reports are not the only searches that affect employment. For example, some data brokers buy ads on websites to advertise their criminal history search services. An ad may say, “Three court records found for Jane Doe. Run a criminal record search.” Even if Doe has no criminal background, this advertisement suggests that she does, which could affect her job prospects. Further, if the name searched were “Tamica Smith” rather than “Jane Doe,” these ads would be significantly more likely to appear. A 2013 research study found that Black-sounding names were twenty-five percent more likely to appear in advertisements for criminal records search services.

ii. Speech and association

Data brokers chill speech and association. Many people are more reluctant to participate in protests, attend gatherings, or visit sensitive locations because of the risk of being tracked and identified. Studies show that internet users are less willing to engage in political speech after being told that an internet service provider will monitor their activity. This chilling effect damages core democratic principles and inhibits political participation and the free exchange of ideas. It is not only private companies contributing to this chilling of speech, but also the U.S. government, which raises constitutional concerns. The U.S. buys millions of dollars’ worth of data, including location information related to hundreds of millions of mobile devices and over 90% of the world’s internet traffic. In one egregious incident, the Oregon Attorney General’s office investigated its own department head after using internet monitoring tools to discover that he had used the hashtag “#BlackLivesMatter” on Twitter and shared a logo for the rap group Public Enemy.

iii. Policing

While police regulation is not within the Bureau’s mandate, the Bureau’s efforts to regulate data brokers will have downstream effects that improve local and national policing and safeguard against abuse. Data brokers fuel police practices that circumvent the Fourth Amendment. Policing agencies buy individual data and access to databases to view detailed personal information about individuals before establishing any suspicion of criminal activity. They often feed this data into unwieldy predictive policing programs that try, and fail, to predict who will commit crimes and where. Immigration enforcement agencies also use this data to track all Americans, regardless of immigration status, and to deport those who are undocumented.

The risks of data-driven policing grow as more and more states criminalize essential and lifesaving healthcare, including abortion, gender-affirming care, and hormone therapy. States like Idaho that prohibit traveling out of state for abortions can use location data to track where their residents travel and identify who traveled with them, potentially with an eye toward charging pregnant people’s loved ones as criminal accomplices for helping them during a difficult time. States may also use consumer profiles to identify and predict who is likely to seek gender-affirming care, as they have done in other areas of law enforcement.


Recommendations

(a) Clarify the Fair Credit Reporting Act

It is within the CFPB’s statutory authority to clarify FCRA regulations, and the CFPB should exercise this authority to correct brokers’ erroneous readings of the Fair Credit Reporting Act. Many data brokers argue that they are not “consumer reporting agencies” and do not sell “consumer reports,” and are therefore not subject to FCRA regulations. Rather, brokers claim to trade “credit header” information, meaning basic information like names, addresses, phone numbers, and Social Security numbers. However, most uses of so-called header data bear “on a consumer’s creditworthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living;” the exact information the FCRA protects. The Bureau should begin correcting the industry’s misunderstanding by explaining that “credit header” data is within the existing scope of FCRA regulations. This will by no means address the full range of data broker harms, and the CFPB should consider using additional powers to promulgate even stronger regulations after clarifying credit header questions.

(b) Mandate data minimization and transparency around data sharing

The industry does not provide a meaningful way for users to give informed consent. Most users do not know how their information is collected, when it is shared, or how to request its deletion. As data accrues, companies leverage it for profit, and it is often used to craft discriminatory predictions about individuals and groups, which result in biased practices and policies that negatively impact marginalized and vulnerable communities. To address this, the CFPB should impose strict data minimization obligations on data brokers and ensure that individuals retain maximal control over their own personal data. Companies must also be required to proactively disclose to both users and the general public how data is being used and transferred.

(c) Ensure the data privacy and security of users’ personal information

After a private entity has collected consumer data, the onus must be on that company to ensure the security of that data. Frequent cyberattacks that expose people’s personal, biometric, financial, and other data are a regular reminder that the information companies collect is a target for hackers, data thieves, and other bad actors. To fully protect consumer data, the CFPB should require companies to regularly perform data privacy and civil rights impact audits. Such audits must be rigorous enough to ensure accountability for data security and transparency around both breaches and the company’s own practices.


Data brokers are buying, selling, and trading personal information without restriction, ignoring the harms they cause to job prospects, access to banking, fair policing, and democratic freedoms. These business practices will not end so long as there is money to be made, and the market increasingly rewards data-first companies. The Bureau must move to regulate the worst of these harms, as businesses have minimal to no incentive to change meaningfully on their own. Please reach out to Evan Enzer, Legal Fellow and Program Associate at S.T.O.P., and Eseohe Ojo, Policy and Campaign Manager at Fight for the Future, for any follow-up.