Ann Skeet is senior director of leadership ethics at the Markkula Center for Applied Ethics, 58ºÚÁÏÍø. Views are her own.
Two verdicts rendered by juries in two days hold Meta accountable for personal injury in California and violation of consumer protection laws in New Mexico. Google was a co-defendant in the California case, in which a 20-year-old woman claimed that the addictive design of social media apps contributed to her mental health problems. These verdicts signal a new reckoning for social media companies and, importantly, represent a failure of leadership.
In the social media corporations themselves, executives had access to meaningful information about the addictive nature of their products and their ability to engage users for extended periods of time. This research was withheld from the public, keeping consumers unaware of its findings for years. In the case of Meta, the information only came to light after whistleblower Frances Haugen disclosed internal Facebook documents in 2021 that covered teenage mental health in addition to information the company had collected about hate speech, political misinformation, the promotion of ethnic violence, and more.
It was two more years before dozens of U.S. states sued Meta for harming young people by knowingly and deliberately designing product features that are addictive, and it was 2024 before the U.S. surgeon general, Vivek Murthy, called for a warning label on social media platforms. Now, five years after internal documents revealed the extent to which Meta knew its products caused harm, the company is finally being held accountable.
Some will argue that the penalty in the California case, $3 million, is too low to be meaningful to Meta. But a penalty phase of the case is still to come, and a backlog of pending cases makes similar claims. The penalty in the New Mexico case, which found the company deceived the public about child safety on its platforms, endangering children by exposing them to sexual exploitation, harmful content, and content from predators, was much larger: $375 million. In response to the New Mexico verdict, Meta said it would appeal, and, following the California verdict, it said it would consider its legal options.
Notably, what Meta did not say after the verdict was that it was sorry for the harms children experienced. During the California trial, Meta argued that the plaintiff’s mental health problems started before her use of social media and stemmed from her troubled childhood, not its products. This is a shallow and specious stance. In staking this claim as its primary defense, the company neglected to take responsibility for the impact of its products on vulnerable populations. All children are vulnerable by virtue of their age, but children who have experienced trauma or other mental health challenges are especially so. Fortunately, two juries decided that all children require special protection.
These back-to-back jury verdicts add up to another significant reputational hit for Meta, one that I believe is more meaningful than the backlash from the Cambridge Analytica data scandal. There the issue was misuse of acquired data; here the issue is harm to children, including those with special needs. The findings give parents new language for talking with their children about these platforms: the products are addictive by design, courts have found that the company broke the law, and even young children understand the significance of breaking the law. Though the job market is tight, these cases might sway job seekers away from considering employment at companies that have now been found liable. Meta has just introduced a new stock option plan for employees meant to incentivize faster growth, a move that is imprudent given that its “move fast and break things” culture produced the current products’ faulty designs.
Certainly, the move sends a message to customers that the company has learned little from being sued for personal injury and for violating consumer protection laws. Meta executives are not the only ones to fail the leadership test. U.S. lawmakers have neglected to pass meaningful regulation to hold social media companies accountable for their products’ safety and to protect children. And, unfortunately, action by state attorneys general and the surgeon general lagged the known threat.
Both polling and public commentary have captured the sentiment that we should learn from the mistakes of social media and apply those lessons to artificial intelligence. Early research and reporting indicate that AI use is associated with an increased incidence of depression, psychosis, and even suicide among children and adults. Executives at AI companies and legislators who want to see themselves as leaders will recognize that AI can harm children and other vulnerable populations at scale. Good regulations serve the public interest and help companies innovating in the early days of breakthrough technologies by clearly demarcating their responsibilities. AI company executives and lawmakers should act now to prevent history from repeating itself.