A line-by-line analysis of Governor Maura Healey’s proposed social media legislation
Here is a line-by-line analysis by Evan Greer (she/her) of Fight for the Future, covering Massachusetts Governor Maura Healey’s proposed social media legislation, included in her supplemental budget.
The text begins on page 10 here: https://www.mass.gov/doc/supp-release/download.
Summary of Concerns:
- Requires age verification (left to AG to enforce but this means an ID check or face scan) at the app / website level. This is the most invasive and unsafe form of age assurance.
- Defines social media so broadly it would include Wikipedia, the Trevor Project, etc.
- Has a bunch of requirements that companies would have to comply with within an hour or a day that would be completely impossible for, say, a person running a Mastodon instance or even a company the size of Bluesky.
- Has various provisions that would require companies to verify that someone is the parent or legal guardian of a minor user, which is impractical and ripe for abuse.
- Allows a parent (how do you verify this?) of a 15-year-old to request that their account be shut down. The company has to restrict it from view within 1 hour (???) of receiving a request from a parent (or person claiming to be a parent?) and then has to shut down the account and delete all data within 3 days of receiving the request from the parent (or person claiming to be a parent, like your ex-boyfriend who is mad at you and knows how to use Photoshop).
- Raises constitutional issues around compelled speech.
- Requires companies to report on the number of minor users they have in a way that will force them to know the exact age of every user (ID checks) and store that data indefinitely.

I get what they are trying to do here, but this definition would include “good” algorithms, like the ones that filter out spam, gore, and porn from your feed based on user flags.

No real concerns here.

This carve-out is quite narrow and would not cover, for example, Wikipedia, the Trevor Project, or other online educational / mental health / LGBTQ support resources and communities.

No major concerns here.

This definition is wildly broad and would include Wikipedia, Discord, and group chat apps often used by sports teams and clubs. There is no carve-out for smaller companies, non-profits, or open source projects, meaning the requirements in this bill apply equally to Big Tech giants like Meta and to, say, Bluesky, a person running a Mastodon instance for journalists, or a small startup hoping to give families a better alternative to Instagram.

How would a social media company know whether a user is a “resident” of Massachusetts without either requiring a government ID or address verification, or collecting sensitive geolocation data?

No major concerns.

This is a requirement for mass surveillance and online ID checks. The bill hides the ball by leaving it up to the Attorney General, but the only way for a social media company to comply with the various age verification requirements in this bill (see more below) will be to collect a sensitive document like a government ID from every single MA user that creates an account.

An appeals process is always good. However, this again would require users to submit even more sensitive data to social media companies in order to speak online. And the requirement that platforms review and act within 3 days is doable for giants like Instagram and YouTube, but totally impossible for small, open source, volunteer-run platforms, or even medium-sized companies like Bluesky.

Deleting age assurance data immediately after verification is a best practice in terms of reducing (not removing) the harm of these systems. But this contradicts other requirements in the bill that would force social media companies to store this data indefinitely in case a parent requests it.

Limiting direct messaging to contacts makes sense, but this provision would prevent, say, a 17 year old from making a public post about a protest they are organizing or a political issue they want to speak out about, limiting their reach only to their friends. This provision would have stopped an activist like Greta Thunberg from going viral and having an enormous impact.

No major concerns here other than that most of these default settings should apply to ALL users, not just minors, and should not require an ID check to turn on or off. It’s also unclear whether social media platforms are supposed to keep track of school vacation schedules, etc.

I personally don’t have a problem with this and it sounds like a good idea. My guess, though, is that constitutional experts (and lawyers for the companies) will say this is “compelled speech” and violates the First Amendment, making the law vulnerable to legal challenges.

This requires companies to collect (and store, so they can prove compliance) even more user data than they already do. It’s unclear whether the proposal is intended to track use across multiple apps (which would require companies to share data with third parties) or whether the 2-hour limit applies to each app separately.
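To make the ambiguity concrete, here is a minimal Python sketch (all names hypothetical, assuming a per-app reading of the limit) of how a single platform would enforce a daily usage cap. Note that it can only count sessions inside its own app; enforcing one shared 2-hour limit across all apps would require platforms to pool this usage data with each other or with a third party.

```python
from collections import defaultdict
from datetime import date, timedelta

DAILY_LIMIT = timedelta(hours=2)  # the bill's 2-hour cap (per app? across apps?)

class UsageTracker:
    """Hypothetical sketch: tracks ONE platform's own usage per user per day.

    A platform can only observe sessions inside its own app, so this can
    only enforce a per-app limit. A single 2-hour limit across ALL apps
    would require sharing this data with other companies or a third party.
    """

    def __init__(self) -> None:
        self._usage: dict = defaultdict(timedelta)  # (user_id, day) -> time used

    def record_session(self, user_id: str, day: date, duration: timedelta) -> None:
        self._usage[(user_id, day)] += duration

    def over_limit(self, user_id: str, day: date) -> bool:
        return self._usage[(user_id, day)] >= DAILY_LIMIT


tracker = UsageTracker()
tracker.record_session("minor_123", date(2025, 1, 15), timedelta(minutes=90))
print(tracker.over_limit("minor_123", date(2025, 1, 15)))  # False: 90 min in this app
tracker.record_session("minor_123", date(2025, 1, 15), timedelta(minutes=45))
print(tracker.over_limit("minor_123", date(2025, 1, 15)))  # True: 135 min total
```

The catch is in the last line: the user may have hit the cap here while also spending hours in another app, and neither platform can know that without exchanging usage records, which is exactly the privacy problem the ambiguity creates.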

There is no safe or practical way for social media companies to determine who is a parent that can give consent to change default settings. This would involve uploading a birth certificate, or another extremely sensitive document, to untrustworthy companies. And it creates an easy path for abuse. If your creepy ex-boyfriend wants to change your settings so he can track your location, all he’d have to do is Photoshop a fake birth certificate and pretend to be your dad.
Allowing 16-year-olds to change their own settings makes sense, but this provision makes it so the bill requires not just age estimation but an ID check to verify the exact age of every single user, which is the most harmful and privacy-destroying version of age verification.

Nice carve-out for anxious parents who want to track their teens’ real-time location all the time with Life360 or similar. And/or creepy stalkers who claim to be parents and want to do the same thing. It will be very difficult, or impossible, for social media companies to tell the difference between the two.

This section is insane, ripe for abuse, and would be a gift to Big Tech giants while effectively putting smaller alternatives like Bluesky out of business.
The requirement to respond to ANYTHING within 1 hour is completely unworkable for small companies, open source projects, volunteer-led support hotlines, nonprofits like Wikipedia, etc., whereas giants like Meta and Google have the resources to do this.
And this requirement itself is wild: a parent—or someone claiming to be a parent, and again there is no good way for a company to know the difference—can simply request to get an account shut down and then the company must “hide” it within an hour and delete it completely, along with all data, in 3 days.
This will absolutely be abused. Bigots, abusers, and stalkers routinely abuse existing flagging and reporting tools to get their victims’ accounts suspended. This would allow anyone to falsely claim to be the parent of a minor user in order to get their account shut down. MAGA trolls will use this to shut down accounts critical of the administration just as they’re going viral or when they announce a protest.
It’s unclear how the minor user (or user that the person claiming to be a parent CLAIMS is a minor user!) can appeal this.
Even if used only as intended by actual parents, this provision would allow, say, an unsupportive parent of a trans 15-year-old to delete their account and all their data, messages, etc., without the teen’s knowledge or consent, and with no recourse for the teen even to recover their data.

This is fine / good (and is already standard on most major social media platforms). But the bill’s definition of algorithmic feed would actually prevent platforms from doing anything with the information when users flag unwanted content.

Probably okay, but again could be used by a parent who is like “I don’t like that you’re looking at all this gay stuff,” etc.

I am not an expert on the lines around “compelled speech” as it relates to product warnings, but I am guessing that the ACLU attorneys are gonna have a heart attack with this one. I think warnings are a good idea, but it would be extremely difficult to legislate what exactly they should say. The data on harm from social media use is significantly more nuanced and complicated than breathless headlines suggest. While a pack of cigarettes can say “The Surgeon General warns TKTK,” it’s really not clear what, say, your Bluesky account should tell you about the potential harm of arguing with people on the internet for too long.

This helps a bit with the concern above, but this should also be done in consultation with racial justice, human rights, civil liberties, and free expression experts to ensure the warnings are not discriminatory and do not have a chilling effect on speech.

I stand corrected. This addresses the concern I raised above about the definition of algorithmic feed preventing spam / moderation tools.

Transparency reporting is good, but it is entirely unclear how this data would even be useful to researchers / child protection experts / social media regulators. The requirement to publish exact numbers of users of specific ages or age ranges means that companies would be forced to use the most invasive forms of age / identity verification. If they only use age assurance or estimation, the data they publish might not be accurate, opening them up to fines. This requirement also heavily incentivizes companies to store age verification data indefinitely in case there is ever a lawsuit or questions related to their reporting.
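As a rough illustration of that trade-off (hypothetical data, not from the bill): publishing exact per-age counts is only possible if the platform keeps a verified age on file for every user, i.e. the output of an ID check, while age estimation yields only ranges that cannot produce a defensible exact number.

```python
from collections import Counter

# Exact reporting: requires a verified age (ID-check data) on file for
# every single user, which the company is then incentivized to retain.
verified_ages = {"user1": 14, "user2": 17, "user3": 34}  # hypothetical users
minors_by_age = Counter(age for age in verified_ages.values() if age < 18)
# minors_by_age now holds an exact count for each minor age present (14, 17)

# Age *estimation* only yields ranges with uncertainty, so any exact
# figure published from it could be wrong, exposing the company to fines.
estimated_ranges = {"user1": (13, 17), "user2": (16, 20), "user3": (30, 40)}
definitely_minor = sum(1 for lo, hi in estimated_ranges.values() if hi < 18)
possibly_minor = sum(1 for lo, hi in estimated_ranges.values() if lo < 18)
print(f"{definitely_minor} to {possibly_minor} minors")  # a range, not an exact count
```

The first approach satisfies the reporting requirement but forces ID collection and retention; the second protects privacy but can only honestly report “somewhere between 1 and 2 minors,” which the bill, as written, would treat as inaccurate.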

This sounds like a fine idea, and something social media companies can and should do, but again I am pretty sure this violates the First Amendment protection against compelled speech. I am unsure what a constitutional attorney would say about whether this is not only compelled speech on the part of the company, but also on the part of the minor user who is being asked to fill out the (government mandated) survey.

Again, all of these provisions are predicated on the (wrong) idea that it is easy for a social media platform to verify that someone is the parent or legal guardian of a minor child. There is no safe or practical way to do this. Period.
This measure attempts to guard against a parent, say, accessing their kid’s messages, which is good. But the bill still allows a parent (or person claiming to be a parent) to shut down the account and delete those messages, turn on location tracking without the kid’s knowledge or consent, etc.

These fines would be a trivial slap on the wrist for Big Tech giants like Meta and TikTok. But they would put smaller platforms out of business, especially given that there are so many provisions in the bill that would be impossible for smaller platforms to comply with 100% of the time. There is nothing in this bill that limits the requirements or enforcement to larger companies or companies with a certain number of users. There is no carve-out for nonprofits like Wikipedia or the Trevor Project.

This is part of the “smoke and mirrors” strategy of this bill. It creates a bunch of vague and unworkable legal requirements and then says “The AG will figure out how to enforce this and the tech companies will figure out how to comply,” while ignoring the practicalities and realities of how that will work.
We can’t trust tech companies. Leaving it up to them means they will implement changes in the cheapest, most profit-preserving way, even if that harms users’ privacy, safety, and rights.