Newly unsealed court documents are shedding light on what Instagram knew about harmful content on its platform and its potential impact on teenagers. The information emerges as part of ongoing litigation in the “Social Media Adolescent Addiction/Personal Injury Products Liability Litigation,” a sprawling federal case involving multiple technology firms. The filings reveal internal discussions at Meta, Instagram’s parent company, and raise significant concerns about the platform’s safety measures for young users.
Internal documents indicate that Instagram was aware of teenagers’ extensive exposure to harmful content, particularly content related to suicide and eating disorders. A 56-page opposition brief filed in the case includes data suggesting there were hundreds of thousands of mentions of suicide on Instagram. The documents highlight that certain types of harmful content reached a disproportionately teenage audience. One internal PowerPoint presentation noted, “Teens’ behavior on IG suggests a need for more support. We know that SSI (suicidal ideation) and ED (eating disorders) have a significantly disproportionate large teen audience.”
The findings reveal a stark contrast between Instagram’s public statements and its internal knowledge. In 2019, Adam Mosseri, the head of Instagram, committed to blocking graphic self-harm content from appearing in searches, hashtags, and recommendations. Despite these promises, parents reportedly continued to press for stronger tools to protect teenagers from harmful content. The documents also indicate that competitors such as TikTok were perceived internally as providing more effective safety measures.
Concerns About Public Relations and Content Regulation
The documents also contain internal communications discussing the potential public relations fallout from media scrutiny. After a reporter from *The Telegraph* contacted Instagram in September 2020 about harmful content surfacing in searches, employees debated the implications of acknowledging the problem publicly. One internal message put the exposure bluntly: “On search we’re exposed with nowhere to hide.” Employees also weighed whether restricting certain content in search results would conflict with other priorities, such as shopping features.
The release of these documents coincides with a landmark civil trial in Los Angeles, where social media companies face allegations that they intentionally designed their platforms to be addictive to children and teenagers. The lawsuit was brought by a 20-year-old woman named Kaley and her mother, who argue that the design of several social media platforms contributed to serious mental health issues, including an eating disorder, anxiety, and depression.
The defendants in this broader litigation include Meta (Instagram and Facebook), YouTube, TikTok, and Snap. While Snap and TikTok have settled some claims out of court, Meta and YouTube continue to contest the allegations.
Testimonies from Tech Executives
Executives from major tech companies have appeared in court to defend their practices. Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri both testified, maintaining that the company has made significant efforts to improve teen safety. According to reports from CNN, Mosseri acknowledged that excessive use of social media could pose problems for teenagers: scrolling for as much as 16 hours a day could be “problematic,” he said, but should not be classified as “clinically addictive.”
Despite this, Meta and other technology firms argue that scientific research has not definitively established a link between social media use and addiction or mental health disorders. Critics, however, contend that the design of these platforms can exacerbate harmful behaviors among users.
The outcome of this trial could have far-reaching implications for how social media platforms are regulated, particularly concerning their younger users. Some companies have already begun implementing new safety measures, including age-based content filtering systems reminiscent of movie ratings, aimed at limiting the types of posts recommended to minors.
As the trial progresses, a central question looms: did social media companies fail to recognize the harms of their algorithms, or did they know and choose not to act?
