Correcting misinformation: Massachusetts Social Media Bill Would Require Invasive Age Verification for Most of the Internet
After the Massachusetts House passed legislation that attempts to “ban” minors from creating social media accounts, the backlash has been swift and substantive.
Numerous experts have explained why the legislation would do more harm than good by requiring every person in the state, including adults, to upload a government ID or submit to a face scan to create a social media account. Other provisions that require “parental consent” would force families to surrender even more sensitive documents, like birth certificates, to untrustworthy social media companies.
Unfortunately, it does not appear that House leaders are listening to experts. Education Committee Chair Ken Gordon recently sent this email to his House colleagues. Chair Gordon needs to read his own bill. We don’t think he is deliberately trying to mislead his colleagues, but it’s clear he does not understand what the bill actually does.
Rep Gordon correctly states that H. 5366 leaves it up to the Attorney General how to implement the “age assurance or verification system,” and that “to the extent practicable the age assurance or verification system shall consist of the best technology available to reasonably and accurately identify a current or prospective user’s age.”
However, there are numerous provisions throughout this bill (and the Governor’s separate-but-similar proposal) that absolutely, unequivocally, would require companies to collect government IDs or use some other invasive method to verify the exact age (and in many cases exact identity) of every single user. It would not be “practicable” for a company to do anything else.
Example 1: the very first line of Section 2(a)(1)
Section 2. (a)(1) To protect the health and wellness of a minor under 14 years of age, including, but not limited to, mental and behavioral health, a social media platform shall prohibit a minor under the age of 14 from being a user of a social media platform.
This language is unambiguous. Any covered platform (which as written includes Wikipedia, Bluesky, the Trevor Project’s “Trevor Space” forum, and more) must “prohibit” a minor under the age of 14 from creating an account.
A facial age estimation system cannot reliably tell the difference between a 13-year-old and a 17-year-old. Simply asking users for their age upon sign-up (which every major social media platform already does) would not be enough to comply with this section, because users lie about their ages.
Under the bill, the platform must “prohibit” a 13-year-old from signing up, even if they say they’re 16. Even if the Attorney General decides that “self attestation” (i.e., asking users how old they are upon sign-up) is a sufficient “age assurance system,” it would not be sufficient for a platform to comply with the rest of the bill. In fact, in the recent social media harm trials in California and New Mexico, a central facet of the case was the idea that simply asking a user to attest to their age is not sufficient. This bill requires ID checks. Period.
Example 2: the very next provision, Section 2(a)(2)
(2) A social media platform shall: (A) terminate a user under the age of 14; (B) allow a user under the age of 14 to request to terminate the account; (C) allow the confirmed parent of a user under the age of 14 to request the termination of the user’s account; and (D) permanently delete all personal information held by the social media platform related to the terminated user unless there is a legal requirement to maintain the information.
The only way a social media company could comply with this requirement is to know the exact identity, government name, and age of every single user, and to collect even more sensitive information from parents (or people claiming to be parents). While social media companies collect enormous amounts of data they can use to infer whether a user is likely a minor, requiring parental consent means they would need to know exactly who the minor is and exactly how old they are (a 17-year-old has a different set of rights under this bill than a 14-year-old).
There is no safe or meaningful way for a social media company (or Wikipedia, or the Trevor Project) to “confirm” whether someone is or is not another person’s parent without:
- Collecting a government ID or another identity-verifying document, like a Social Security card, to confirm that the minor user whose account is being shut down is actually a minor and not an adult, and to attempt to verify that the person contacting the company has a custodial relationship to the minor (or alleged minor)
- Collecting a birth certificate or some other sensitive legal document to prove that the person contacting the company is the legal parent or guardian of the minor who the company has verified owns the account. It’s unclear how this would work for kids in foster care, in custody disputes, etc.
- Requiring the user claiming to be a parent to upload a selfie or submit to a face scan that matches their identity with their government ID
The Federal Trade Commission has been trying to figure out a meaningful way to require verifiable parental consent since the Children’s Online Privacy Protection Act passed in 1998. They still haven’t.
Parental consent requirements in state age verification laws have faced significant legal challenges, and courts have blocked or struck down such laws in several states, often because they restrict minors’ and adults’ access to fully protected online speech. While several states have enacted laws requiring parental consent for minors to access social media or other online platforms—including Arkansas, Florida, Georgia, Tennessee, and Utah—all of them are facing First Amendment lawsuits, and most have been enjoined while litigation continues. The Supreme Court has made clear that laws restricting access to fully protected speech—even when framed as parental assistance measures—must survive strict scrutiny, a standard they rarely satisfy. In Brown v. Entertainment Merchants Association (2011), for example, the Court struck down a California law restricting minors’ access to violent video games, rejecting the state’s argument that the restriction was justified as a means of helping parents.
Example 3: the very next line after that …
(b)(1) To protect the health and wellness of a minor who is 14 or 15 years of age, including, but not limited to, mental and behavioral health, a social media platform shall prohibit a minor who is 14 or 15 years of age from being a user of a social media platform unless the social media platform receives verifiable consent from the parent for the minor to become a user.
Again, there is no way that a social media platform can “prohibit” a user who is 14 or 15 years of age from creating an account without verifying (not just asking about) the age of every user.
If a company just asks a user for their age when they create an account (which every major social media company already does), many users simply lie about their ages. Under this bill, though, the company would be liable for massive fines if it does not prohibit a minor from creating an account without “verifiable consent” from a parent.
The only way to obtain “verifiable consent” from a parent is to know exactly who the kid is, exactly who their parent is, and have some kind of documentation proving both. That’s a government ID and at least one other sensitive document. No matter how you look at it.
Many kids don’t have the same last name as their parent or legal guardian. Many kids live with their grandparent, an older sibling, or another caregiver who may or may not have the legal authority or documentation to provide consent to create an account.
Shutting down a social media account can be serious. If I were to contact Facebook claiming to be Rep Ken Gordon’s mom and saying he is a 13 year old who created an account without my permission, how would the company figure out that Rep Gordon is in fact in his 60s and I am not his parent? They would need some documentation from both of us.
The bottom line: H. 5366 as passed by the House requires identity checks. Full stop. If the goal is simply to require social media companies to ask users for their age range upon sign-up (which every major social media platform already does), the bill needs to be completely rewritten.
There are numerous other provisions in the bill that would be impossible for a social media company to comply with without verifying the age of every user, which means collecting a government ID or face scan. And many of these provisions wouldn’t even be feasible with an age assurance tool like a face scan, because they require an actual identity check for parental consent verification purposes.
Supporters of the bill seem to think that by passing the ball to the Attorney General, they have avoided all the thorny and controversial issues that arise with age verification. They are wrong.
The way the bill is currently constructed, the “age assurance” regulations promulgated by the Attorney General would in many ways be irrelevant, because social media platforms would have to use the most invasive identity verification methods in order to comply with the parental consent provisions in the bill.
The Attorney General’s office has noted that age assurance is a “rapidly developing field” and does not necessarily require an ID check or face scan. While they are right that age assurance and age verification technology is evolving, the consensus among security experts is that there is currently no safe way to verify the age of every user. Perhaps new technologies will emerge that make it easier for platforms to conduct age verification and parental consent verification in a practical, privacy-preserving way. But that technology does not currently exist.
Massachusetts House leaders have written a bill that requires online ID checks, but they’re saying it doesn’t and that “the Attorney General and the tech companies will figure it out.” This is magical thinking. It also ignores the reality that Big Tech companies do not care about their users’ privacy or protecting vulnerable communities. They will comply with the law in the cheapest, easiest way for them that protects their profits, even if it undermines human rights or harms marginalized people.
Writing a bill that says “they’ll figure it out” guarantees even more harm to the communities that have already been hurt the most by these giant corporations.
Massachusetts lawmakers have an opportunity to lead the nation by advancing thoughtful, progressive legislation addressing the very real harm of social media giants without destroying online privacy or undermining human rights. If they want to do that, we’d be happy to help.