Theory Fragments of the Shadowban: Navigating Censorship of Palestinian Content

I’m not an influencer or public figure, nor do I have the aspiration or skills to be one. I have around 1600 followers on my personal Instagram account, who are mainly friends, acquaintances, some work-related people and a few I do not know personally. When I share something on my account it’s usually a chaotic mix of personal experiences, work-related stuff, memes (a lot of memes) and political content. After posting about Palestinian justice, I’ve had friends block me without warning and had strangers tell me to ‘go to Palestine myself with my big mouth’. But, of course, this is not about me or my minor inconveniences, here in the Netherlands where I have enough privilege to be safe. What is striking, however, is that even though my account is of no public importance, every time I share pro-Palestinian content the number of people I reach with my stories drops significantly. After October 7th, it even dropped by 80% and has never fully recovered, even though Instagram indicates my account has not been flagged. And I am not the only one – it’s not just big accounts such as Eye On Palestine that are being censored.

What is happening to me and countless other Instagram users is called being shadowbanned, which Wikipedia defines as follows:

“Shadow banning is the practice of […] [partially] blocking a user [their content] from some areas of an online community in such a way that the ban is not readily apparent to the user.”

Whether it is your Instagram story views going down, your account not showing up in the search bar even when you type in your entire handle, or your content not showing up in the ‘for you’ feed of other users – and whether it is caused by automated algorithms or by human content moderators – shadowbanning can take various forms. However, the most important characteristic of shadowbanning is that the user is unaware of it (of its extent and its exact reasons). This is what makes shadowbanning such a powerful tool of censorship, perfect for pushing one narrative and letting another one disappear without anyone noticing. As a user, how can you counter or object to this form of censorship, if the platform’s entire design is built on the premise that you are not supposed to (really) notice it has happened (or why or how it is happening exactly)?

Yet the shadowbanning issue is not a new one – it has just become more urgent. Pro-Palestinian content has been shadowbanned for years. The same is true for (educational) sex-positive content and for sex workers’ accounts. However, Meta’s response to Palestinian content since October 7th has made its political color clearer than ever before. Much research has been done on online censorship on social media platforms, as it can take many forms – from being stuck in a filter bubble to accounts being deleted. When even Human Rights Watch publishes a report on the systemic global censorship of Palestine content – in which shadowbanning is one of the six patterns of censorship they have identified – and states that Meta’s behavior fails to meet its human rights due diligence responsibilities, that should tell you enough.

As a part of INC’s Tactical Media Room Palestine initiative, where researchers, artists, activists and tech & cultural workers come together to join forces to investigate the internet and its infrastructures in the context of this genocide, I am sharing some of my thoughts, as a humble attempt to add to the current conversation on strategizing around shadowbanning, within the larger topic of online censorship.


(Memes by Socrates Stamatatos) 

Darkness, Light, Authority and Uncertainty  

What does this all mean for the Palestinian movement and other social movements? Before we can strategize, let’s zoom out and first look at the mechanisms at play. It’s important to examine the words we use to describe what is currently happening and the power relations hidden behind them:

Shadow (noun): an area of darkness, caused by light being blocked by something

Meta is literally using a metaphor as old as time: light is good and darkness is bad – and whatever is deemed bad needs to be put away into the shadows. This rhetoric should sound familiar, as Netanyahu has used similar words to justify the genocide of Palestinian people – words which other Western politicians, such as the frontman of the Green-left Labor party in the Netherlands, have copied. This metaphor frames the oppressed as the oppressor and the oppressor as the oppressed – a classic tactic.

Let’s dive a little deeper, because a shadow is not the same as complete darkness or a void of nothingness. In his classic work In Praise of Shadows, first published in 1933 and translated into English in 1977, Jun’ichirō Tanizaki wrote a poetic stream of words in which he, you guessed it, praises the shadow. He claims that shadows have a ghostly aura of depth, beauty and mystery. Shadow turns objects somber, refined and dignified, according to him, while too much light flattens and creates a shallow environment. “Westerners, [however], attempt to expose every speck of grime and eradicate it” and see light as their most powerful ally. Tanizaki writes that their quest for a brighter light never ceases – from candles to oil lamps and from gas lights to electric lights. “So benumbed are we nowadays by electric lights that we have become utterly insensitive to the evils of excessive illumination.” What if we look at big tech platform censorship as the next step in the West’s quest to eradicate everything it deems ‘a speck of grime’? Overexposing its imperial values and interests to make sure any voice that does not align is quietly and gradually wiped out.

A shadow remains evasive, intangible, unknown and ambiguous. And that is precisely the point. The shadowban as a censorship tactic is so subtle, it is difficult to make explicit. Its elusiveness is exactly what makes it such an effective ideological weapon.

What does it mean then, to be banned into the shadows?

Ban (verb): to forbid (= to refuse to allow) something, especially officially

To be banned implies that there is an authority with rules and that you are notified of the terms of your punishment for breaking them. In the case of shadowbanning, however, you are not given any information as to how exactly you are being punished and for what, how long your punishment will last and, perhaps most importantly, how you can appeal. That uncertainty is the punishment in itself.

A Veil of Ambiguity 

One of the ‘use the word in a sentence’ examples on the shadowbanning Urban Dictionary page stood out to me:

“The new algorithm shadowbanned my dogs account.” 

This is arguably the most apolitical example imaginable, as a dog’s account will most likely never violate Instagram’s community guidelines. But whereas cute animal content is a pretty safe bet when it comes to avoiding a shadowban, Instagram’s community guidelines and terms of use are the opposite of transparent. Their rules are hard to find, splintered throughout different pages, hidden in different menus, and the language used is ambiguous.

Let’s dive into a few examples, shall we?

X     “Instagram is a reflection of our diverse community of cultures, ages and beliefs. We’ve spent a lot of time thinking about the different points of view that create a safe and open environment for everyone. […] We created the Community Guidelines so you can help us foster and protect this amazing community. […] Overstepping these boundaries may result in deleted content, disabled accounts or other restrictions.”

What safety means, who is worthy of safety, and who decides this, is not stated anywhere.

X     “In some cases, we allow public awareness which would otherwise go against our Community Guidelines – if it is newsworthy and in the public interest. We only do this after weighing the public interest value against the risk of harm and we look to international human rights standards to make these judgments.”

It goes without saying that what is happening to Palestinian people needs to be witnessed. Social media platforms are extremely important for this, as most Western broadcast media outlets are still stuck on calling what is happening ‘a conflict’ or ‘a war between Israel and Hamas’.

Instagram also states that glorifying violence is never allowed. A discussion about violence, non-violence and resistance is not a possibility on this platform, as it decides for its users when violence is dignified and when it is not. Again, who makes these judgments, and on what basis the public interest is weighed against the risk of harm, is not stated anywhere.

X    Reasons for users to report the content of other users (Instagram counts on users to not kill the cop in their head) include nudity and harassment, among others. In this case, hate speech or symbols, violence and dangerous organizations are most relevant.

Again, what dangerous means, who poses a threat and to whom, and who decides this, is not stated.

“It is never ok to encourage violence or attack anyone based on their race, ethnicity, national origin, sex, gender, gender identity, sexual orientation, religious affiliation, disabilities or diseases.” Instagram also states it follows the law and that it’s not a place to praise terrorism.

What race, ethnicity, national origin and religious affiliation Instagram associates with terrorism and what with self-defense or liberation, is not specified.

X    Instagram also states that it reduces the distribution of false information with third-party fact checkers, that it makes ‘false content’ “harder to find” for other users, and that related content is automatically flagged too.

Who this third party is, how they are selected and what their protocols are, is not clear. Again, who decides what is false and what is true and what this is based on is not specified.

X    Clear instructions for users when they suspect they have been shadowbanned, or the word shadowban for that matter, are nowhere to be found.

Instagram decides what is good, bad, safe, dangerous, true or false, without having to specify what that means and for whom. The community guidelines, then, serve as a veil of ambiguity, where ‘safety for all’ is a way to hide the platform’s politics in plain sight, in the shadows (see what I did there). Their politics are actually pretty clear when you take a second to shine some light (sorry, I’ll stop now) on the way the guidelines are executed.

Stuck on the platform, stuck in speculation 

If you have been sharing pro-Palestinian and anti-Zionist content on Instagram yourself, or other content that risks getting you shadowbanned, chances are you’ve heard about the various counter tactics (and have read the various guides) to find a way around the ambiguous community guidelines:

X    Post enough selfies and pictures of your Sunday brunch, pets and friends.

X    Don’t use the Palestinian flag or watermelon emojis, as they are flagged.

X    Use alternative spelling for words such as Palestine and genocide, such as ‘falasteen’ or ‘gncd’.

X    Don’t share URLs to petitions or sources directly in your story, as they could be flagged.

X    Interaction helps. Like, save, comment on each other’s content and add polls, stickers, and questions to your stories.

X    Change the standard setting in your profile that limits sensitive and political content on your feed.

X    Take a break from posting Palestinian content for at least 24 hours.

X    Screenshot posts instead of sharing them directly from the account that posted them.

X    Etc.

Given that both the guidelines and their consequences are completely opaque, our countertactics on the platform can be nothing but speculative. We have to guess what works. We have to try what works. And keep adjusting our tactics to the best of our ability. The algorithms and content moderation rules that flag content for a shadowban keep changing as Meta keeps monitoring its users. The watermelon emoji worked as a metaphor for a while, for example, but not anymore. Because of this, we will always be one step behind; we will always be reactive.

Not to mention the social tensions that arise when navigating this, while we’re stuck on the platform. If we start judging people for ‘not posting enough Palestine content’, or start feeling guilty if we share a cute picture of our cat in an attempt to be able to reach more people later, what are we even doing? And what is the point of sharing information with the 15 people who still see your content, if they already agree with you anyway? If anything, is this not just a distraction from the real cause?

I am shadowbanned, therefore I am 

With all this in mind, let me propose a new definition of the shadowban for those on the right side of history:

shadowban (noun): a censorship tactic used by big tech social media platforms in which the reach of users and/or their content is limited without notifying the user of the reasons for and terms of their punishment or of their appeal options, with the aim of distracting and of pushing conservative, capitalist, imperialist, colonial, white supremacist, patriarchal, Western norms, veiled under ambiguous community guidelines that claim the intention of providing a safe environment for users, leaving the user in uncertainty, reactively speculating

“I posted an Instagram story about Free Palestine and got shadowbanned.”

“I am shadowbanned on Instagram. – Oh, that must mean you’re on the right side of history then.”

Moving away from reactivity

If we look at shadowbanning through this lens, the question then is: why do we keep trying to share information on big tech platforms such as Instagram in the first place? How can we move away from a reactive approach and mobilize ourselves more productively?

One of the Tactical Media Room Palestine members told me about a conversation they had with a few other researchers and activists working on similar topics.[1] One framework that came up in their conversations, which I find very useful in this context, is acknowledging the difference between organizing and mobilizing. When you organize with a critical mass of likeminded people, you need to think about privacy concerns, but you do not have to worry about debating basic political ideals. You don’t have to establish that being anti-Zionist and not wanting innocent people to die is not the same as being antisemitic, or that it’s safer to communicate on Signal than on WhatsApp, for example. When you mobilize, however, you need to reach larger groups of people who might be less informed or do not agree with your views yet – and who are, in fact, still located on big tech platforms. If we want to reach them, and bring about real change (which needs numbers), we cannot ignore this fact.

Using counter-shadowban techniques, as a way to fight from within the system, is useful and extremely important to get as much information out there as possible to as many people as possible. However, these techniques only help if our strategies also include gradually getting as many people as possible away from platforms that are inherently designed with harmful ideals. Yes, as activists we have to be pragmatic and reach people on the platforms they are active on. But we also need to create awareness. BDS has a #NoTechforApartheid campaign targeting Google and Amazon, for example. Whereas in the past we have mainly called for switching to platform alternatives, perhaps it is time to consider boycott strategies as well? What boycotting would look like in the context of big tech platforms is something we would need to figure out together – theoretically, but especially practically. Because the platforms we use for justice and liberation should be explicitly anti-imperialist, -colonial, -capitalist, -white supremacist, -patriarchal, etc. The shadowban is just one of many reasons showing that we cannot depend on platforms that are not.

(Meme by @softcore_trauma)

[1] The two people who had this conversation are known to me, but I was not able to reach them to confirm whether they wanted their information published. For privacy reasons, I decided to be better safe than sorry and not publish their names. If I hear from them and they express that they want their names published, I will, of course, update this article.
