Chart: Americans Agree
Social Media Bans for Kids
Will courts treat social media like tobacco?
Key Points
- Across party lines, majorities of Americans support banning social media for kids under 16, but the legal path is far from settled.
- The debate has shifted from harmful content to allegedly “addictive” design features like algorithmic recommendations, infinite scroll, and autoplaying videos.
- The targeting of addictive features is central to Florida’s first-in-the-nation ban on social media for kids.
- A court challenge to the Florida law, along with separate lawsuits against major platforms, will test whether addiction-based claims can succeed where content-based regulation failed.
- If courts embrace this angle, platforms will be forced to fundamentally redesign how they operate, at least for younger users.

Image by Max Fischer / Pexels
Across party lines, Americans favor the idea of banning social media for kids:
Chart: Americans Agree
Details
| Question | Australia has a ban on social media for kids under 16 years of age. Would you like to see a similar ban in the United States, or not? |
| --- | --- |
| Response | Yes/Would |
| Poll Main Page | Voters Give Democrats In Congress A Record Low Job Approval But Still Might Vote For Them In 2026 |
| Interview Period | Dec. 11, 2025 to Dec. 15, 2025 |
| Sample Size | 1,035 |
| Policy Context | When this poll was conducted in December 2025, Australia had just started enforcing a ban on social media for kids under 16. The closest initiative in the U.S. Congress was a Senate bill, the Kids Off Social Media Act, with bipartisan co-sponsors. It proposed to ban account access for anyone under 13 and restrict algorithmic recommendations for under-17s. The bill advanced out of committee in June 2025, but as of the end of 2025, it had not advanced further and there was no companion House bill. |
| Share Link | Social Media Ban for Kids Under 16: Quinnipiac University, Dec. 17, 2025 |
Other polls have shown parents’ concerns about the effect of social media on their children. For example, in February 2025 the University of Michigan’s C.S. Mott Children’s Hospital surveyed parents of children 1 to 18 years of age. Asked to select which issues were “big problems” for children’s health, the top two selections were “Social media” and “Too much screen time / use of devices,” each chosen by 75% of parents.
One U.S. state, Florida, has already enacted a social media ban. Rather than targeting the content on social media, it targets what critics call “addictive features”: design elements meant to maximize engagement. In addition to Florida’s law, many lawsuits have taken the same approach directly against social media companies.
Although relatively new for social media, similar addiction theories were used against tobacco companies decades ago. The question now is whether courts will follow that lead.
The Evolving Debate about Harms
When social media became popular in the 2010s, the initial concerns centered on the content kids were seeing. Highly curated photos were blamed for rising body-image pressures. Cyberbullying became a hot topic. The dominant frame treated social media as a channel that delivered problematic material, and the question was whether that content was psychologically damaging.
Researchers found correlations between heavy social media use and higher rates of depression, anxiety, and loneliness among adolescents. But correlation did not establish causation: Did social media use worsen mental health, or were teens who were already struggling more likely to use social media? Or were the correlations caused by other factors affecting young people during the same period? Or were mental health conditions being diagnosed differently over time?
Further complicating the situation, other observers emphasized that social media can serve as a vital support system for marginalized youth communities, including LGBTQ+ teens, young people in rural areas, and ethnic minorities. For some adolescents, online platforms provided connection, identity exploration, and access to peers that did not exist in their immediate geographic communities.
Ultimately, there was no scientific consensus that the content on social media was harmful to children. The evidence was mixed, effects appeared to vary by individual and context, and the debate remained unsettled.
In the 2020s, however, the framing of social media harms began to shift. Increasingly, concern was not just about what children saw but about how platforms were designed. Critics argued that features such as algorithmic recommendation systems, infinite scroll, autoplaying videos, streaks, and push notifications were built to maximize engagement and could encourage addictive patterns of use, especially among kids.
With this newer frame, the question was whether the underlying mechanics and business incentives of social media platforms were themselves at odds with healthy child development. Given the success of a similar approach against tobacco companies in the past, analogies that compared social media companies to cigarette manufacturers became more common.
A Parallel Legal Shift
The change in how harms were framed had legal consequences. As long as the concern was defined as problematic content, the First Amendment and Section 230 of the Communications Decency Act protected social media companies in their roles as platforms for users’ free speech.
But the newer addiction-based framing changed the game. By arguing that platforms have engineered products in ways that foreseeably encourage addictive use among minors, legal challenges could avoid free-speech questions and instead pursue consumer protection or failure-to-warn claims—areas where courts have traditionally allowed more regulation.
The Florida Ban
Which brings us to Florida, the first U.S. state to enforce a law banning children from popular social media platforms. The law bans minors under 14 from having accounts, and it requires parental consent for 14- or 15-year-olds to have accounts.
The Florida ban represents the new legal approach against social media. Instead of targeting speech, it defines covered platforms in part by whether they contain what the law calls “addictive features,” including:
- Algorithmic recommendation systems that analyze user data to recommend what the user should watch
- Infinite scroll and autoplay, which remove natural stopping points
- Push notifications and other alerts designed to draw users back into the app
- Personal interactive metrics such as visible counts of reactions, shares, or reposts
- Live-streaming, which allows real-time broadcasting and interaction
These features are central to how modern social media operates. As a result, the Florida law would apply to the platforms currently most popular with youth, including Instagram, TikTok, Snapchat, and YouTube.
Lawsuits
The Florida ban is being challenged in court by trade associations representing technology and social media companies. The case is currently before a U.S. appeals court and could ultimately reach the Supreme Court. Since November 2025, Florida has been allowed to enforce the law while legal proceedings continue.
Meanwhile, a slew of other lawsuits are targeting the social media companies directly. They allege the companies knowingly designed systems that foster excessive use among minors and failed to adequately disclose internal research about potential harms.
For example:
- In K.G.M. v. Meta et al., a California bellwether case drawn from hundreds of consolidated claims, a plaintiff alleges that companies including Meta (Facebook and Instagram) and YouTube purposely engineered addictive features that contributed to anxiety, depression, and body-image issues. TikTok and Snap settled related claims in late 2025.
- Another case in California consolidates thousands of other claims against Meta, TikTok, Snapchat, YouTube, and others. The plaintiffs include individuals, parents, school districts, and some state attorneys general. They allege that the platforms were deliberately designed to be addictive and failed to warn users or their families about foreseeable harms.
- The state of New Mexico is suing Meta for creating unsafe conditions for minors by facilitating connections with predators and harmful content.
Per the new approach to challenging social media, these cases focus on how the platforms work, not particular instances of objectionable content.
Practical Problems
If social media bans prevail legally, or lawsuits result in social media companies “voluntarily” agreeing to block underage use, there are still practical problems. The two biggest are related: (1) Kids will try to evade blocking mechanisms. (2) The stronger the blocking mechanism, the more it will impinge on privacy and civil liberties, not just of the kids but of all users.
For example, most major social media platforms already set minimum age requirements, typically 13 or 14 years old. But these usually rely on self-attestation without age verification. As a result, younger children are able to join simply by entering an older birthdate or checking a box saying they are of age. But if platforms require stronger proof—such as uploading a government ID, running facial-age estimation software, or verifying through a third-party database—that raises privacy and data-security concerns for everyone: Systems designed to keep out 13-year-olds may end up requiring 33-year-olds to submit sensitive personal information.
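The weakness of self-attestation is easy to see in code. The following is a minimal, hypothetical sketch of a signup age gate that trusts a user-entered birthdate (the function names and the age threshold of 13 are illustrative assumptions, not any platform's actual implementation):

```python
from datetime import date

MIN_AGE = 13  # typical self-declared minimum on major platforms


def age_in_years(birthdate: date, today: date) -> int:
    """Compute age in whole years from a user-supplied birthdate."""
    years = today.year - birthdate.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years


def can_register(claimed_birthdate: date, today: date) -> bool:
    """Self-attestation gate: trusts whatever birthdate the user enters."""
    return age_in_years(claimed_birthdate, today) >= MIN_AGE


today = date(2025, 12, 17)
# A truthful 11-year-old is blocked...
print(can_register(date(2014, 6, 1), today))   # False
# ...but the same child typing an older birthdate passes the gate.
print(can_register(date(2000, 6, 1), today))   # True
```

Because the gate has no way to check the claimed birthdate against anything, the only way to harden it is to bring in outside evidence (an ID, a face scan, a third-party database), which is exactly where the privacy trade-off described above begins.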
These practical problems are known in theory but have not been widely felt in practice. In Florida, the law requires platforms to use a “commercially reasonable method of age verification,” but given the newness of enforcement, there does not yet appear to be clear precedent for what qualifies and what does not.
Prospects
Going forward, there are two tracks to monitor.
First, the legal challenges to the Florida law should answer two questions:
- Can states legally ban minors’ access to social media?
- Will there be a way of blocking minors’ access that is effective and also acceptable from a privacy and civil liberties perspective?
It could take years, and a trip to the Supreme Court, to resolve the legal question. By that time, the practical question about blocking should be better understood, both from the experience in Florida and in places outside the U.S. that are attempting bans.
The second track is the lawsuits against the social media companies, which could cut in either direction.
If courts consistently reject addiction-based theories, then both lawsuit-driven redesign and legislative bans may hit a wall. In that case, the status quo could largely persist.
But if the social media companies start losing lawsuits, the companies may make a choice similar to the one made by cigarette manufacturers in the 1990s: stop defending the existing business model and instead reshape it—“voluntarily”—under mounting legal and financial pressure.
In that scenario, a new model of “safe social” could emerge. Features associated with addictive use would be removed, limited, or redesigned for underage users.
If that occurs, the rationale for outright bans may weaken. Florida’s law rests on the premise that mainstream platforms are built around addictive features incompatible with child well-being. If those features are materially altered for minors, courts and lawmakers may conclude that less restrictive alternatives are available.
In short, the lawsuits may do more than assign blame. They could either preserve the status quo or redefine what safe social media looks like, recasting the debate over whether bans are necessary at all.
Finally, there is a wildcard in Congress. The Kids Off Social Media Act (KOSMA) has bipartisan sponsorship in the Senate and House, plus the endorsement of 40 states’ attorneys general. It would prohibit social media accounts for children under 13 and restrict personalized recommendation systems for users under 17. It would not overrule stricter state laws like Florida’s, but if enacted, it could lessen the incentive for other states to pursue their own, more sweeping bans.
Versions of similar youth online safety proposals have stalled in prior Congresses, so KOSMA’s fate is uncertain. Regardless of KOSMA’s outcome, Florida remains the most consequential U.S. test of how far a state can go in restricting minors’ access to social media.