Fear Speech: The Problem No One Talks About

Nothing will change until social media companies stop monetizing attention.

Men with thumbs for heads walking with phones. Image: Shutterstock

Last month, a picture that appeared to show a large cloud of black smoke near the Pentagon — the headquarters of the US Department of Defense — went viral on Twitter.

And although experts quickly identified it as an AI-generated image, it continued to be reposted — including by several fake accounts posing as news outlets, complete with paid-for blue ticks.

It even briefly caused the stock market to tumble.

But this is hardly the first time something like this has happened.

And it’s not just Twitter that feels increasingly chaotic and rife with fake news, misinformation rabbit holes and doctored images and videos, which add more fuel to the dumpster fire of today’s polarised society. It’s social media platforms in general.

You’d think that in light of all of this — and especially seeing how the use of generative AI to spread mistruths is only becoming more common — tech companies would do more about today’s misinformation ecosystem, one they’re unquestionably a massive part of.

Or that social media users would realise that treating it as their primary — or only — source of information isn’t exactly a great idea.

Well, neither of these things is happening.

And by the looks of it, social media misinformation will only keep getting more and more out of control.


Social media is the new Wild West, and yet this is where many people get their news from

In recent days, several social media companies rolled back restrictions on misinformation and hate speech — including YouTube and Meta, which owns Facebook, Instagram and WhatsApp.

And according to a recent report by the European Commission, over the last few years, they’ve also become increasingly slow to take action against hateful and misleading content, even when it comes to posts that outright call for murder or violence against specific groups of people.

Some are now understandably calling this the ‘Twitter effect.’

After all, following Elon Musk’s takeover of the platform — and contrary to what he continues to claim — the use of hateful language has substantially increased while the misinformation guidelines have gradually loosened. He also reinstated the accounts of numerous extremists, including several with ties to organised hate groups and violence.

But tech giants’ reluctance to contain harmful speech isn’t exactly new.

When the ex-Facebook data scientist turned whistleblower Frances Haugen testified in 2021, she revealed that Meta repeatedly declined to take action against inflammatory misinformation because doing so decreased engagement and, thus, its advertising revenue.

And in all likelihood, the same happened — and continues to happen — across all the other platforms as well.

It’s really no wonder, then, that social media became what it is today. And that so much of what we see on there is just piles of mistruths with a side of conspiracy theories, heavily sprinkled with hate speech, trolling, deepfakes and god knows what else.

Even people considered ‘stunning intellectuals’ and ‘thought leaders’ by some frequently repost content without first checking whether it’s true. And yes, my favourite example is definitely that time Jordan Peterson retweeted a ‘male milking’ fetish video falsely claiming it came from the Chinese Communist Party’s sperm bank.

But this is even more concerning if you consider that many people don’t read the news anymore.

According to a recent online behaviour survey conducted across three countries — the United States, the Netherlands and Poland — consumption of hard news and political topics accounted for less than 1% of all URLs participants visited.

And a similar study found that even people who claim they read the news often only actually scrolled past it on their Facebook timelines.

And although older age groups still also consume news via traditional channels like TV, radio or print newspapers, their younger counterparts increasingly turn to social media. Among today’s teens, for instance, Instagram, TikTok and YouTube are now the top three most-used news sources.

But while this scaling back of moderation coupled with people’s increasing reliance on social media platforms as news sources is already frightening on its own, there’s also the issue of algorithmic gods.


The algorithms only further amplify the influence of misinformation

Some people like to think about social media algorithms as gods who decide everything from what we buy to what we eat and how we live. And it’s not a bad analogy per se.

But I prefer to think of them as drug dealers.

They usually start you off with something light — perhaps it’s just a video of a pretty young woman washing her hair in a river.

There’s nothing wrong or suspicious about it, so you go ahead and like, comment or engage with it in some other way. And even if that engagement is negative, you’ll likely keep getting similar content on your feed.

By design, social media algorithms strive to keep users engaged for as long as possible — extending the amount of advertising we view. So they keep showing us the posts we’re most likely to interact with, not necessarily the ones we’re most likely to… actually like. And that ends up amplifying the most outrageous and misleading content.
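To make that incentive concrete, here’s a minimal sketch in Python. The posts, scores and field names are all made up for illustration, and no platform’s actual ranking system is remotely this simple; the point is just that when the objective is engagement alone, accuracy never enters the equation:

# Toy illustration only: not any platform's actual ranking system.
# Scoring purely on predicted engagement surfaces the outrage-bait
# post first, even though it's the least accurate of the three.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # interactions the model expects (made up)
    accurate: bool               # hypothetical label; the ranker ignores it

feed = [
    Post('Local hiking trail reopens', 0.02, True),
    Post('Ten tips for washing your hair in a river', 0.10, True),
    Post('THEY are coming for your children', 0.45, False),
]

# The objective is engagement alone; truthfulness never enters the score.
for post in sorted(feed, key=lambda p: p.predicted_engagement, reverse=True):
    print(f'{post.predicted_engagement:.2f}  accurate={post.accurate}  {post.title}')

Being engaging and being true are simply different axes, and the ranker only ever sees one of them.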

But the next time you log on, the algorithms will try to test you a bit. You liked the video of a woman washing her hair in nature, so perhaps you’ll also like this post about the Big Shampoo conspiracy. Or this picture of a granola-munching white family who just moved to the countryside because they thought city pollution would turn their kids gay.

And before you know it, you’re getting white supremacist rhetoric spewed like it’s a fact all over your timeline.

Yup, this actually happened to me some time ago.

But if you see things repeated over and over again — which is exactly how the algorithmic gods work — eventually, that repetition might make them look like facts, and, boom, you’re now hooked on an extremist narrative.

As the saying often attributed to Nazi chief propagandist Joseph Goebbels goes: ‘repeat a lie often enough, and it becomes the truth.’

This is also the finding of a recent paper by neuroscientist Tali Sharot: the more often information is repeated, the more accurate we believe it to be (the so-called illusory truth effect), which is why people are more likely to share statements on social media that they had previously been exposed to.

It also doesn’t help that the social signals baked into these platforms, such as likes and share counts, further confuse our efforts to evaluate a claim’s truthfulness.

Because if you see that many others have ‘liked’ a post, you might be tempted to believe it regardless of whether it fits with your existing worldview or not. But if it already does, you’re even more likely to fall for it.

That’s why so many young men nowadays fall down the insecurity-to-misogyny-and-fascism pipeline, for instance. They already feel like the odds are stacked against them, and if they see a claim that explains why that is — like the ‘90% of women only date the top 10% of men’ myth — they’ll likely believe it right away.

And unfortunately, misinformation and hateful ideologies encountered on social media platforms can sometimes even lead people to take real-world actions with disastrous, even deadly, consequences.

Think of all the incel attacks, the people overdosing on ivermectin or the man who murdered his children believing they had ‘serpent DNA.’


Unless tech companies stop choosing profit over safety, nothing will change

Even if social media platforms were willing to work on the issue of rising misinformation enabled by their own algorithms — which they don’t seem to be — it still wouldn’t be easy to distinguish between what’s purposefully inflammatory and misleading and what isn’t.

As investigative journalist Julia Angwin pointed out in a recent op-ed for The New York Times, while hate speech is relatively easy to detect, as it often contains derogatory words that can be filtered through automated moderation systems, fear speech isn’t.

Unsurprisingly, it also prompts more engagement on social media platforms than hate speech. And that’s precisely the kind of rhetoric we’ve been seeing for a while now, particularly from one side of the political equation:

They’re coming for your children. They’re not going to stop there. They want to imprison your mind. They want to take everything that’s yours. They’re poisoning your body.

You don’t even need to say anything explicitly hateful or false to attract thousands or millions of eyeballs and clicks. All you have to do is plant the seed of fear.

Now, couple that with the increasingly accessible, cheap and realistic AI deepfake tools. And the capabilities of large language models like ChatGPT, which can generate huge swaths of content in minutes. And the fact that most people don’t even read hard news anymore and instead increasingly rely on social media platforms. And the way these platforms’ algorithms keep feeding us content regardless of whether it’s true or false, as long as it is engaging or… enraging.

The biggest problem with social media misinformation is not that it exists — after all, you can find examples of ‘fake news’ throughout history — but that, by design, it gets shoved down users’ throats to the point it can effectively alter their perception of reality.

On the bright side, there’s probably no better time than today to start a new religious cult or an MLM if you’re up for scamming some vulnerable people out of their money to fund your retirement.

Besides, there are some solutions that could help.

Researchers from University College London recently tested the idea of ‘trust’ and ‘distrust’ buttons alongside the standard ‘like’ and ‘dislike’ buttons. And they found not only that people used the trust/distrust buttons more than the like/dislike ones but also that incentivising accuracy this way cut the reach of posts containing misinformation in half.
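For intuition, here’s how that might look in the toy ranker sketched earlier. The ‘distrust’ rates and the penalty weight are invented for illustration and have nothing to do with the UCL study’s actual methodology; it’s just a sketch of what pricing distrust into the score does to misleading posts:

# Toy illustration only: not the UCL study's methodology.
# Folding a hypothetical 'distrust' signal into the score drags
# misleading-but-engaging posts back down the feed.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float
    distrust_rate: float  # hypothetical: share of viewers hitting 'distrust'

feed = [
    Post('Local hiking trail reopens', 0.02, 0.01),
    Post('Ten tips for washing your hair in a river', 0.10, 0.05),
    Post('THEY are coming for your children', 0.45, 0.70),
]

DISTRUST_WEIGHT = 0.7  # made-up penalty; tuning it well is the hard part

def score(p: Post) -> float:
    # Engagement still counts, but distrust now pulls the score down.
    return p.predicted_engagement - DISTRUST_WEIGHT * p.distrust_rate

for post in sorted(feed, key=score, reverse=True):
    print(f'{score(post):+.2f}  {post.title}')
# The fear-speech post sinks to the bottom once distrust is priced in.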

Even a simple solution, like investing in more human fact-checkers and ways to add context, could do the trick.

But the bad news is that all of that would decrease engagement, which in turn decreases profit. And, well, can we really expect tech giants to willingly implement safeguards that reduce the magnitude of a problem they’re directly benefitting from?

I don’t think so.


Social media holds out the promise of connection and a sense of togetherness with people from all over the world.

But, paradoxically, we were probably never this far apart.

And unless we force social media companies to stop placing profit over safety and take action against this increasing polarisation, we can’t expect things to get better anytime soon.

Because there will likely only be more echo chambers, more radical rabbit holes and more AI-generated images and videos circulating online. And at some point, it will become near-impossible to distinguish between fact and fiction.

Read more of Katie Jgln’s work on The Noösphere. You can buy her a coffee.
