Social media is a defective product

Meta CEO Mark Zuckerberg leaves Superior Court in Los Angeles, California

Kyle Grillot/Bloomberg via Getty Images

I just sat down to write, but before I had typed a word into my document, I pulled out my phone to check my calendar. Then I got a chat notification from a friend who had sent me a link to some meme on Instagram. I might as well check that too. Below the post, a series of short videos is queued up, algorithmically selected to charm me: one about ravens at the Tower of London, another about Indonesian street food. I tap on the raven one. Then another. I can cycle through these reels endlessly, and I do. The videos grow increasingly disturbing and political. You know what's coming next. When I look at my computer again, almost 45 minutes have passed.

My day is not ruined, but I feel depressed and tired. Where did all that time go? How did Instagram suck me into watching hundreds of videos (not to mention dozens of ads) when all I wanted to do was check my calendar? And why do I feel so miserable?

The answers to these questions are now being argued in two California court cases brought by thousands of individuals and groups against the social media giants Meta (owner of Facebook and Instagram), Google (owner of YouTube), Snap (owner of Snapchat), ByteDance (owner of TikTok) and Discord. The plaintiffs in these cases, from school districts to concerned parents, allege that social media platforms endanger children, cause serious psychological harm and even lead to death. Children are exposed to videos of violence, impossible beauty standards and "challenges" that encourage dangerous stunts, and are led down dark rabbit holes from which they may never return. At stake in both cases is one fundamental question: can these companies be held responsible for making people feel terrible?

For more than a decade, many American politicians have suggested that the answer is no. Instead of trying to regulate the companies, several US states have passed laws that focus on how children use social apps. Some attempt to restrict access by requiring parental consent for minors to create accounts, for example. Others have tried to curb teenage bullying by banning "like" counts on posts. Many of these laws have focused on the dangers of social media content, which, here in the US, essentially lets the companies off the hook. That is because of an infamous section of our communications decency law, known as Section 230, which prevents companies from being held responsible for content posted by their users.

You can understand why Section 230 seemed like a good idea when it was written in the 1990s. Back then, no one was worried about doomscrolling, algorithmic manipulation or toxic "looksmaxxing" influencers who encourage their followers to hit their faces with hammers to create a more prominent jaw. Section 230 also seemed practical: YouTube reports that 20 million videos are uploaded to its service every day. The company, and others like it, couldn't function if they were liable for every illegal thing uploaded to them.

Looming behind all this law-making is the fact that the US is a free speech absolutist nation. That means it's very easy for companies like Meta or Google to challenge laws that could limit people's access to speech online, even if that speech is a video about how to starve yourself to lose weight. Many of the laws restricting minors' access to social media have been struck down by judges who find them in violation of free speech protections. As a result, social media companies in the US have been able to wield free speech law as a shield against almost any kind of regulation.

Until now. What is fascinating about the two current cases in California is that they deftly sidestep questions of content and free speech. Instead, they argue that the very design of social media platforms is "defective" and therefore harmful: endless scrolling, constant notifications, auto-playing videos and algorithmic bait that feeds our fixations are features intentionally created by the companies themselves. And, as the lawsuits allege, these "defects" turn social media apps into "addictive" products, akin to "slot machines" that "exploit young people" by providing them with "an endless AI-driven resource for users to scroll through". Ultimately, these lawsuits aim to force social media companies to take responsibility for the negative effects their products have on the most vulnerable consumers.

In many ways, this argument is similar to those brought by the US government against tobacco companies in the 1990s. The government successfully argued that the companies knew their products were harmful but covered it up. As a result, the companies paid large settlements to victims, put warning labels on tobacco products and changed their marketing to stop appealing to children.

There are already leaked documents from Meta indicating that the company knew its product was addictive. A federal judge has unsealed court documents in a case in which a teenage girl killed herself after becoming addicted to social media. The documents include internal Instagram communications in which a user experience specialist allegedly wrote: "Oh my gosh yall [Instagram] is a drug… We're basically pushers." This is one of many documents from Instagram and YouTube that lawyers say show companies knowingly and negligently making defective products.

The two trials currently under way have the potential to dramatically transform social media. Perhaps American law will finally recognize what many of us have known for years: the problem is not the content, but the behavior of the companies that deliver it to us.

Need a listening ear? UK Samaritans: 116 123 (samaritans.org); US Suicide & Crisis Lifeline: 988 (988lifeline.org). Visit bit.ly/SuicideHelplines for services in other countries.
