Over the years, I have joined and quit-for-good Facebook, Instagram, and Twitter. I still have a LinkedIn profile, but I rarely ever look at it unless some contact messages me (and, frankly, most of them know better ways to get in touch). As a general rule, I now only read content where the author has made a bona fide investment of time and thought: books, investigative journalism, blogs, and so on. I won’t allow our daughter to use YouTube unless she’s supervised because there are still so many child predators on there. I can’t imagine allowing her to have social media accounts of her own and watching them irretrievably destroy her peaceful and loving personality. I dread those future conversations.
You don’t understand how miserable social media makes you until you finally cut it out of your life. And I don’t mean the week or two after you quit, when you see something you wish you could tell someone about and can’t, which is annoying. I am talking about the weeks and months and years afterward, when you look around at your spouse, your kid, your house, your neighbors, your career, your body, your garden, the vacations you actually experienced instead of photographed, the stack of books on your nightstand, and on and on, and you realize the enormous opportunity cost of being aggressively online. You also start to understand why the otherwise good people who remain aggressively online become a bit unhinged after a while.
I’ve written before about how social media platforms seem to have a clear life cycle. They are pleasant for early adopters. You find yourself talking to interesting, intelligent, and entertaining people that you would otherwise never meet. But over time, every ecosystem you inhabit becomes polluted with trolls and people (or companies) that want to discipline, direct, and take credit for what’s happening.
Everything becomes a managed narrative or potential source of conflict. No matter how many people you unfollow, mute, block, whatever, you cannot escape the bad actors. Your choices for participation are reduced to lurking or only posting the most inane, harmless content (though some people find a way to attack that too). Initially, your contempt for these problems is directed at a specific platform. But after you ditch your third or fourth profile, fatalism takes over, either about social media or society as a whole. You wonder why it is that these companies cannot balance freedom of speech with eliminating characters who don’t have anything in particular to say but just want to provoke or harass or cause pain.
The answer is (1) they can’t, and (2) even if they could, they wouldn’t want to.
Social media platforms are engineered with one goal in mind: to make people spend ever more time online. They want a captive, addicted audience who will generate revenue for them all day, every day. Who will look at promoted content. Who will give them an endless stream of personal data to hawk. They don’t want to educate you. They don’t want to challenge you or make you think. They don’t give a rat’s ass if the content you see makes you happy or smart or miserable, suicidal, anxious, full of hate, whatever, except to the extent that you can’t turn away from content that makes you feel something and that’s profitable. They are not, and never will be, moral actors.
This article from a former Google engineer, “The Toxic Potential of YouTube’s Feedback Loop,” is an excellent summary of why social media platforms tend to cause otherwise normal people to morph into trolls and their online ecosystems to morph into troll colonies. The author starts with how YouTube’s massive pedophile problem was built:
In February, a YouTube user named Matt Watson found that the site’s recommendation algorithm was making it easier for pedophiles to connect and share child porn in the comments sections of certain videos. The discovery was horrifying for numerous reasons. Not only was YouTube monetizing these videos, its recommendation algorithm was actively pushing thousands of users toward suggestive videos of children ….
Unfortunately, this wasn’t the first scandal to strike YouTube in recent years. The platform has promoted terrorist content, foreign state-sponsored propaganda, extreme hatred, softcore zoophilia, inappropriate kids content, and innumerable conspiracy theories.
Having worked on recommendation engines, I could have predicted that the AI would deliberately promote the harmful videos behind each of these scandals. How? By looking at the engagement metrics.
Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user—and users like them—to find and recommend other videos that they will engage with.
In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes—that is, the more data it has—the more efficient it will become at recommending specific user-targeted content.
Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.
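To make that objective concrete, here is a minimal sketch of the kind of engagement-maximizing ranker the article describes. Everything in it is an assumption for illustration — the names, the inputs, the scoring formula — and it is not YouTube’s actual system. What it shows is that a scorer built only to predict watch time has no notion of whether the content it pushes is true, healthy, or appropriate:

```python
# Illustrative sketch only: a toy engagement-maximizing ranker, not any
# platform's real system. All names and the scoring formula are assumptions.

from dataclasses import dataclass


@dataclass
class Candidate:
    video_id: str
    # Similarity between this video and what the user (and users like them)
    # has already watched, in [0, 1].
    similarity_to_history: float
    # Average fraction of the video that similar users watched, in [0, 1].
    avg_completion_rate: float


def predicted_engagement(c: Candidate) -> float:
    """Score a candidate purely on how long the user is likely to keep watching.

    Nothing here asks whether the content is harmful; the only objective is
    expected watch time.
    """
    return c.similarity_to_history * c.avg_completion_rate


def recommend(candidates: list[Candidate], k: int = 10) -> list[Candidate]:
    """Return the k candidates with the highest predicted engagement."""
    return sorted(candidates, key=predicted_engagement, reverse=True)[:k]


# Example: a borderline video that similar users watched to completion outranks
# a healthier video those users clicked away from.
videos = [
    Candidate("calm-explainer", similarity_to_history=0.6, avg_completion_rate=0.4),
    Candidate("outrage-bait", similarity_to_history=0.7, avg_completion_rate=0.9),
]
print([c.video_id for c in recommend(videos, k=1)])  # ['outrage-bait']
```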
Failing to detect destructive content is far from the only problem that arises when substantially all traffic derives from recommendation algorithms, however. Their tendency to create feedback loops is what’s mechanically responsible for our society’s apparent collective mental illness (emphasis mine):
Earlier this year, researchers at Google’s DeepMind examined the impact of recommender systems, such as those used by YouTube and other platforms. They concluded that “feedback loops in recommendation systems can give rise to ‘echo chambers’ and ‘filter bubbles,’ which can narrow a user’s content exposure and ultimately shift their worldview.”
The model didn’t take into account how the recommendation system influences the kind of content that’s created. In the real world, AI, content creators, and users heavily influence one another. Because AI aims to maximize engagement, hyper-engaged users are seen as “models to be reproduced.” AI algorithms will then favor the content of such users.
The feedback loop works like this: (1) People who spend more time on the platforms have a greater impact on recommendation systems. (2) The content they engage with will get more views/likes. (3) Content creators will notice and create more of it. (4) People will spend even more time on that content. That’s why it’s important to know who a platform’s hyper-engaged users are: They’re the ones we can examine in order to predict which direction the AI is tilting the world.
More generally, it’s important to examine the incentive structure underpinning the recommendation engine. The companies employing recommendation algorithms want users to engage with their platforms as much and as often as possible because it is in their business interests. It is sometimes in the interest of the user to stay on a platform as long as possible—when listening to music, for instance—but not always.
We know that misinformation, rumors, and salacious or divisive content drives significant engagement. Even if a user notices the deceptive nature of the content and flags it, that often happens only after they’ve engaged with it. By then, it’s too late; they have given a positive signal to the algorithm. Now that this content has been favored in some way, it gets boosted, which causes creators to upload more of it. Driven by AI algorithms incentivized to reinforce traits that are positive for engagement, more of that content filters into the recommendation systems. Moreover, as soon as the AI learns how it engaged one person, it can reproduce the same mechanism on thousands of users.
As toxic people and toxic content are promoted on these platforms, more toxic content is produced. AI routes people to that content. People who do not like this content walk away or stop interacting. The percentage of content that is toxic increases even more as normal conversations are suffocated. Pretty soon, toxic content is the only content being produced. Literally the only way to generate and preserve a following is to participate in the toxicity. And the cumulative effect on the country’s mental health is devastating.
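Here is a toy simulation, with entirely made-up numbers, of the four-step loop quoted above combined with the exodus just described: hyper-engaged users reward toxic content, creators chase whatever gets recommended, and normal users drift away, so the toxic share of the platform ratchets up. Every parameter is an assumption chosen only to make the dynamic visible, not a measurement of any real platform:

```python
# Toy simulation with made-up parameters; it illustrates the dynamic described
# above, not any platform's real numbers.

def simulate(steps: int = 10) -> None:
    toxic_share = 0.10      # fraction of content that is toxic
    normal_users = 1000.0   # users who prefer non-toxic content
    toxic_users = 50.0      # hyper-engaged users who reward toxic content

    for step in range(steps):
        # (1)-(2) Hyper-engaged users generate disproportionate engagement,
        # so toxic content earns a larger share of recommendations.
        toxic_engagement = toxic_users * 5.0   # assumed 5x engagement per user
        normal_engagement = normal_users * 1.0
        recommended_toxic = toxic_engagement / (toxic_engagement + normal_engagement)

        # (3) Creators chase what gets recommended.
        toxic_share += 0.5 * (recommended_toxic - toxic_share)

        # (4) Hyper-engaged users spend even more time; normal users drift away
        # as more of what they see is toxic.
        toxic_users *= 1.10
        normal_users *= (1.0 - 0.3 * toxic_share)

        print(f"step {step}: toxic share of content = {toxic_share:.0%}, "
              f"normal users remaining = {normal_users:.0f}")


simulate()
```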
This also goes to show you how naive statements like “get outside of your bubble” are. A liberal having a token conservative friend is not going to reverse the avalanche of toxic content. Media companies fully understand this too. They have started producing grotesquely slanted content, shunning nuance, and ignoring unpredictable takes because they don’t need to reach as many people as possible with what they write. They just need to feed the toxic wasteland they’ve produced on a regular basis and let the algos do the rest.
In many ways, these big tech companies resemble Wall Street banks before the financial crisis. (It’s funny how many of the costly doom loops that emerge in history have similar structures, no matter what the medium may be. But I digress.) They look at the world with very short time horizons. They want as much traffic and data as they can possibly get right now, and the fact that their billions of users are increasingly becoming insane, violent, and ill-informed is not their problem. They don’t care that the generations growing up aggressively online are now basically unemployable. They don’t care about the volume of criminal activity they are harboring or that it impacts some of the most vulnerable populations in the country.
They will pretend that deleting troll accounts is a solution. They’ll issue a press release saying they’ve deleted two million predatory accounts in the past month knowing full well that two billion more will follow. The problem isn’t individual accounts and they know that. It’s the engineering. It’s their basic business model. It’s their economic incentives.