Fighting Disinformation: We’re Solving The Wrong Problems

Tackling disinformation and misinformation is an important, timely, hard problem… and in no way a new one. Throughout history, different forms of propaganda, manipulation, and biased reporting have been deployed — consciously or not; maliciously or not — to steer political discourse and stoke public outrage. The issue has admittedly become more urgent lately, and we do need to do something about it. I believe, however, that so far we’ve been focusing on the wrong parts of it.

Consider the term “fake news” itself. It feels like a new invention, even though its literal use was first recorded in 1890. On its face it means “news that is untrue”, but of course it has been twisted and abused to claim that certain factual reporting is false or manufactured — to the point where its very use might suggest that the person using it is not being entirely forthright.

That’s the crux of it; in a way, “fake” is in the eye of the beholder.

Matter of trust

While it is possible to define misinformation and disinformation, any such definition necessarily relies on things that are not easy (or possible) to quickly verify: a news item’s relation to truth, and its authors’ or distributors’ intent.

This is especially true in any domain that deals with complex, highly nuanced knowledge, all the more so when stakes are high and emotions run hot. Public debate around COVID-19 is a chilling example. Regardless of how much “own research” anyone has done, for those without an advanced medical and scientific background it eventually boiled down to the question of “who do you trust”. Some trusted medical professionals, some didn’t (and still don’t).

As the world continues to assess the harrowing consequences of the pandemic, it is clear that the misinformation around it, and the disinformation campaigns about it, had a real cost, expressed in needless human suffering and lives lost.

It is tempting, therefore, to call for censorship or other sanctions against misinformation and disinformation peddlers. And indeed, many countries already have legislation in place that punishes them with fines or jail time. Turkey and Russia are among them, and it will surprise no one that media organizations are sounding alarms about these laws.

The Russian case is especially relevant here. On the one hand, the Russian state insists on calling its war of aggression against Ukraine a “special military operation” and blatantly lies about losses sustained by the Russian armed forces, and about war crimes committed by them. On the other hand, the Kremlin appoints itself the arbiter of truth and demands that any news organization in Russia propagate these lies on its behalf — using “anti-fake news” laws as leverage.

Disinformation peddlers are not just trying to push specific narratives. The broader aim is to discredit the very idea that any reliable, trustworthy source of information can exist at all. After all, if nothing is trustworthy, the disinformation peddlers themselves are as trustworthy as it gets. The target is trust itself.

And so we apparently find ourselves in an impossible position:

On one hand, the global pandemic, the war in Eastern Europe, and the climate crisis are all complex, emotionally charged, high-stakes issues that peddlers of misinformation and disinformation can easily exploit, which makes these campaigns existential threats that urgently need to be dealt with.

On the other hand, in many ways, the cure might be worse than the disease. “Anti-fake news” laws can, just like libel laws, enable malicious actors to stifle truthful but inconvenient reporting, to the detriment of the public debate, and the debating public. Employing censorship to fight disinformation and misinformation is fraught with peril.

I believe that we are looking for solutions to the wrong aspects of the problem. Instead of trying to legislate misinformation and disinformation away, we should be looking closely at how it is possible that they spread so fast (and who benefits from this). We should be finding ways to fix the media funding crisis; and we should be making sure that future generations receive the mental tools that would allow them to cut through the biases, hoaxes, rhetorical tricks, and logical fallacies weaponized to wage information wars.

Compounding the problem

The reason misinformation and disinformation spread so fast is that our most commonly used communication tools have been built in a way that promotes that kind of content over fact-checked, long-form, nuanced reporting.

According to The Washington Post, “Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content — including content likely to make them angry.”

When this is combined with the fact that “[Facebook’s] data scientists confirmed in 2019 that posts that sparked [the] angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news”, you get a tool fine-tuned to spread misinformation and disinformation. What’s worse, the more people get angry at a particular post, the more it spreads. The more angry commenters point out how false it is, the more the algorithm promotes it to others.
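To make the dynamic concrete, here is a minimal, purely illustrative sketch in Python of an engagement-weighted feed ranker. This is not Facebook’s actual code: the five-to-one weighting of emoji reactions versus likes reflects what The Washington Post reportedly found about one configuration of the system, while all names, the comment weight, and the structure are assumptions made for illustration.

```python
# Illustrative sketch of an engagement-weighted feed ranker, NOT Facebook's
# actual code. The 5x weight on emoji reactions is based on what The
# Washington Post reported; everything else here is a made-up assumption.

from dataclasses import dataclass

@dataclass
class Post:
    likes: int
    angry_reactions: int  # emoji reactions such as "angry"
    comments: int

# Hypothetical signal weights: reactions count several times more than likes.
WEIGHTS = {
    "likes": 1.0,
    "angry_reactions": 5.0,  # reportedly weighted 5x a like at one point
    "comments": 2.0,         # assumed value, for illustration only
}

def engagement_score(post: Post) -> float:
    """Rank posts purely by weighted engagement, blind to accuracy."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["angry_reactions"] * post.angry_reactions
            + WEIGHTS["comments"] * post.comments)

# A false but infuriating post easily outranks careful reporting:
hoax = Post(likes=10, angry_reactions=300, comments=150)      # score: 1810
report = Post(likes=200, angry_reactions=5, comments=20)      # score: 265
assert engagement_score(hoax) > engagement_score(report)
```

Note what the sketch makes visible: the ranker has no input for truthfulness at all, and every angry comment debunking a hoax only pushes its score higher.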

One could call this the “outrage dividend”, and disinformation benefits especially handsomely from it. It is related to “yellow journalism”, the type of journalism in which newspapers present little or no legitimate, well-researched news, instead using eye-catching headlines to drive sales. The difference is that the tabloids of the early 20th century didn’t get the additional boost from a global communication system effectively designed to promote this kind of content.

I am not saying Facebook intentionally designed its platform to become the best tool a malicious disinformation actor could dream of. This might have been (and probably was) an innocent mistake, an unintended consequence of the way the post-promoting algorithm was supposed to work.

But in large systems, even tiny mistakes compound to become huge problems, especially over time. And Facebook happens to be a gigantic system that has been with us for almost two decades. In the immortal words of fictional Senator Soaper: “To err is human, but to really foul things up you need a computer.”

Of course, the solution is not as simple as just telling Facebook and other social media platforms not to do this. What we need (among other things) is algorithmic transparency, so that we can reason about how and why exactly a particular piece of content gets promoted.

More importantly, we also need to decentralize our online areas of public debate. The current situation, in which we consume (and publish) most of our news through two or three global companies that effectively have full control over our feeds and over our ability to reach our audiences, is untenable. Monopolized, centralized social media is a monoculture where mind viruses can spread unchecked.

It’s worth noting that these monopolistic monocultures (in both the policy and the software sense) are a very enticing target for anyone inclined to maliciously exploit the algorithm’s weaknesses. The post-promoting algorithm is, after all, just software, and all software has bugs. If you find a way to game the system, you get to reach an enormous audience. It should then come as no surprise that most vaccine hoaxes on social media can be traced back to only 12 people.

Centralization obviously also relates to the ability of billionaires to just buy a social network wholesale or the inability (or unwillingness) of mainstream social media platforms to deal with abuse and extremism. They all stem from the fact that a handful of for-profit companies control the daily communication of several billion people. This is too few companies to wield that kind of power, especially when they demonstrably wield it so badly.

Alternatives already exist. The Fediverse, a decentralized social network, has no single company controlling it (and no opaque algorithm deciding who gets to see which posts), and does not have to come up with a single set of rules for everyone on it (an impossible task, as former Twitter CEO Jack Dorsey admits). Its decentralized nature (there are thousands of servers run by different people and groups, with different rules) means that it is easier to deal with abuse. And since it is not controlled by a single for-profit company, there is no incentive to keep bad actors around so as not to risk an outflow of users (and thus a drop in stock price).

So we can start by at least setting up a presence in the Fediverse right now (following the thousands of users who migrated there after Elon Musk’s Twitter bid). And we can push for centralized social media walled gardens to be forced to open their protocols, so that their owners can no longer keep us hostage. Just as the ability to move a number between mobile providers makes it easier to switch while keeping in touch with our contacts, the ability to communicate across different social networks would make it easier to transition out of the walled gardens without losing our audience.

Media funding

As far as funding is concerned, entities spreading disinformation have at least three advantages over reliable media and fact-checking organizations.

First, they can be bankrolled by actors who do not care whether they turn a profit. Second, they don’t have to spend any money on actual reporting, research, fact-checking, and everything else that is both required and costly in an honest news outlet. Third, as opposed to a lot of nuanced long-form journalism, disinformation benefits greatly from the aforementioned “outrage dividend” — it is easier for disinformation to get the clicks, and thus generate ad revenue.

Meanwhile, honest media organizations are squeezed from every possible side — not least by the very platforms that gate-keep their reach or provide (and pay for) the ads on their websites.

Many organizations, including small public grant-funded outlets, find themselves in a position where they feel they have to pay Facebook for “reach”, that is, to promote their posts on its platform. They don’t benefit from the outrage dividend, after all.

In other words, money that would otherwise go into paying journalists working for a small, often embattled media organization gets funneled to one of the biggest tech companies in the world — a company that consciously built its system as a “roach motel” (easy to get in, very hard to get out once you start using it) and now exploits that to extract payments for “reach”. An economist might call it “monopolistic rent-seeking”.

Meanwhile, the biggest ad network operator, Google, uses its similar near-monopoly position to extract an ever larger share of ad revenues, leaving less and less on the table for the media organizations that rely on it for their ads.

All this means that as time goes by it gets progressively harder to publish quality fact-checked news. This is again tied to centralization giving a few Big Tech companies the ability to control global information flow and extract rents from that.

A move to non-targeted, contextual ads might be worth a shot — some studies show that targeted advertising offers quite limited gains for publishers compared to other forms of advertising, while cutting out the rent-seeking middleman leaves a larger slice of the pie on the table for them. More public funding (perhaps financed by a tax levied on the mega-platforms) is also an idea worth considering.
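As a back-of-the-envelope illustration of that “larger slice of the pie” argument, with entirely hypothetical numbers (real CPMs and intermediary cuts vary widely), consider a publisher choosing between targeted ads routed through an ad-tech chain and contextual ads sold directly:

```python
# Back-of-the-envelope comparison with ENTIRELY HYPOTHETICAL numbers;
# the point is the structure of the trade-off, not the specific values.

impressions = 1_000_000

# Targeted ads: assume a modest uplift in gross revenue per thousand
# impressions (CPM), but a large cut taken by the ad-tech intermediaries.
targeted_cpm = 2.20    # assumed gross CPM, USD
middleman_cut = 0.50   # assumed 50% taken by the ad-tech chain

# Contextual ads sold directly: assume a lower CPM but no intermediary.
contextual_cpm = 2.00  # assumed gross CPM, USD
direct_cut = 0.10      # assumed 10% cost of selling directly

targeted_take = impressions / 1000 * targeted_cpm * (1 - middleman_cut)
contextual_take = impressions / 1000 * contextual_cpm * (1 - direct_cut)

print(f"Publisher take, targeted:   ${targeted_take:,.0f}")    # $1,100
print(f"Publisher take, contextual: ${contextual_take:,.0f}")  # $1,800
```

Under these assumed numbers, the publisher keeps more with the lower-paying contextual ads simply because far less of the gross revenue leaks to intermediaries; whether that holds in practice depends on the actual CPMs and cuts involved.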

Media education

Finally, we need to make sure our audiences can understand what they are reading, and can recognize that somebody might have a vested interest in writing a post or an article in a particular way. We cannot have that without robust media literacy education in schools.

Logic and rhetoric have long been banished from most public schools as, apparently, not useful for finding a job. Logical fallacies are barely (if at all) covered. At the same time, both misinformation and disinformation rely heavily on logical fallacies. I will not be at all original when I say that school curricula need to emphasize critical thinking, but it still needs to be said.

We also need to update the way we teach to fit the current world. Education is still largely built around the idea that information is scarce and the main difficulty lies in acquiring it (hence the focus on memorizing facts and figures). Meanwhile, for at least a decade now, information has been plentiful, and the difficulty lies in filtering it and figuring out which information sources to trust.

Solving the right problem, together

“Every complex problem has a solution which is simple, direct, plausible — and wrong”, observed H. L. Mencken. That describes well the push for seemingly simple solutions to the misinformation and disinformation crisis, such as legislation making disinformation (however defined) “illegal”.

News and fact-checking communities have limited resources. We cannot afford to spend them on ineffective solutions — much less on infighting over proposals that are both highly controversial and broadly recognized as dangerous.

To really deal with this crisis we need to recognize centralization — of social media, of ad networks, of media ownership, of power over our daily communication, and in many other areas related to news publishing — and poor media literacy among the public as crucial underlying causes that need to be tackled.

Once we do, we have options. Those mentioned in this text are just rough ideas; there are bound to be many more. But we need to start by focusing on the right parts of the problem.

 

Photo on the cover: Jenya Polosina (@polosunya).

In July, INC published the 44th edition of Theory on Demand, Dispatches from Ukraine: Tactical Media Reflections and Responses, which includes this piece. Order a physical copy or download the whole publication free of charge here.
