Last October, Instagram announced plans to restrict content for teen accounts based on 13+ movie ratings in countries including Australia, Canada, the United Kingdom, and the United States. The social media giant said Thursday that it is now applying these guidelines to teen accounts internationally. The development comes after courts in New Mexico and Los Angeles held Meta accountable last month for harming teens.
The idea behind this enforcement is that Instagram will show teens less content with themes like extreme violence, sexual nudity, and graphic drug use. The company will also hide or decline to recommend posts featuring strong language, certain risky stunts, or marijuana paraphernalia.
The company also has a new setting called “Limited Content,” which applies stricter content filters and prevents teens from seeing, leaving, or receiving comments on posts.
“Just like you might see some suggestive content or hear some strong language in a movie rated for ages 13+, teens may occasionally see something like that on Instagram, but we’re going to keep doing all we can to keep those instances as rare as possible. We recognise no system is perfect, and we’re committed to improving over time,” the company said in a blog post.
Last year, when Meta rolled out these restrictions, it marketed them as PG-13-inspired limits. However, the Motion Picture Association (MPA) sent a cease-and-desist letter demanding that Meta stop using the term, arguing that a movie rating system can’t be compared with social media content.
Meta seems to have moved away from that branding since then. In the latest blog post, the company acknowledged that “there are differences between movies and social media” and said the ratings reflect settings that feel closer to the “Instagram equivalent” of a movie rated appropriate for teens.
Meta has consistently faced scrutiny for prioritizing product growth while ignoring teen mental health. The company has been on the defensive, launching new controls and limits meant to reduce harm to teen users. In the past few months, it has rolled out a way to notify parents when teens search for self-harm content, introduced new parental controls for its AI experiences, and paused teen access to AI characters while it works on a new version.
Meanwhile, court filings revealed that Meta waited years to roll out features like automatically blurring explicit images in direct messages, despite being aware of the issue. The company’s latest move to expand content restrictions for teens internationally could be a preemptive step, as the social network may face additional scrutiny over its child-safety practices in other regions following the legal cases in New Mexico and Los Angeles.

