In Defense of Algorithmic Bias: The Palestinian Struggle and the Deception of “Neutral” Tech

I found myself staring at a computer screen, my face blurred and tracked by a green rectangle that followed me everywhere I moved. Behind the screen, a pre-recorded video showed similar rectangles over other people’s faces, next to the bold caption: “SHOCKING FOOTAGE OF A 25-YEAR-OLD BEING KIDNAPPED FROM A MUSIC FESTIVAL IN ISRAEL.” This was the scene at Milipol Paris, one of the world’s largest security expos, where I was watching myself being surveilled by one of Israel’s top surveillance companies—just one month after the full-scale invasion of Gaza. Their polished marketing highlighted precision, efficiency, and a confusing claim: their facial recognition technology was “anti-bias.”

What caught my attention was how this concept of “anti-bias” or “bias-free” technology was being reframed. In decolonial, Black, and feminist studies, the call for bias-free algorithms is a response to the way algorithmic systems often perpetuate societal inequities, disproportionately harming marginalized groups. Examples include research from Carnegie Mellon University, which found that Google’s online advertising system displayed ads for high-paying jobs to men more frequently than to women, and a study showing that a widely used U.S. healthcare algorithm favored white patients over Black patients when predicting who should receive additional care. Yet here, that discourse had been co-opted by Israeli surveillance companies, which use the language of neutrality to market their facial recognition technologies as “anti-bias” and equitable—offering the promise of more just policing. While bias-free algorithms are often seen as a means of promoting equity in fields like healthcare, education, and hiring—where equal treatment is essential—their application in surveillance paints a different, more disturbing picture.

For instance, Corsight AI, the Israeli facial recognition company I visited at Milipol, boasts about its “anti-bias algorithms” and its top ranking in identifying individuals with darker skin tones. It also emphasizes its technology’s ability to recognize people with facial coverings. Although this may seem like a showcase of “inclusive technology,” “diversity done right,” or even a nod to “decoloniality,” the reality is far more sinister. Indeed, Corsight’s technology is actively employed as a tool of mass surveillance against Palestinians in Gaza. Since the Israeli ground invasion of Gaza in October 2023, Israeli soldiers equipped with cameras running Corsight’s software have routinely scanned, documented, and detained individuals at checkpoints and along major roads in Gaza—all without their knowledge or consent. Thanks to Corsight’s technology, Israel’s control over Palestinians has expanded dramatically.

Another Israeli company, Oosto (formerly AnyVision), similarly markets its facial recognition technology as ethical and bias-mitigating. Oosto emphasizes its commitment to reducing demographic bias, particularly with respect to skin tone, and has established an ethics review board to underscore its adherence to “ethical AI” principles. Its actions, however, tell a different story. The company has been involved in multiple projects with the Israeli military, including installing facial recognition systems at military checkpoints in the West Bank, where Palestinians must present work permits to cross the Green Line and enter Israeli settlements. Oosto’s technology has also been integrated into existing CCTV networks throughout the West Bank, enhancing the military’s ability to monitor and control Palestinian movement beyond the checkpoints. Crucially, Oosto’s algorithms can appear “bias-free” precisely because they have been trained and tested on the Palestinian population—without their knowledge or consent. This unethical experimentation allows the company to refine its technology under real-world conditions, perfecting its accuracy on a specific demographic.

In an oppressive system like the Israeli occupation, the promise of bias-free surveillance does not signify fairness or justice. It means that the technology is uniformly efficient at surveilling, identifying, and controlling all Palestinians, regardless of whether they are activists, civilians, or children. For a group already subject to widespread state violence, unlawful detentions, and disproportionate use of force, being subjected to “equal” scrutiny and repression is not a step towards justice but towards a more totalizing and indiscriminate system of control.

Here, algorithmic bias can paradoxically serve as a form of resistance. If facial recognition algorithms were less effective at identifying Palestinians—due to factors such as darker skin tones, religious headwear, or keffiyehs—this inefficiency could act as a shield against excessive surveillance. Activists and civilians alike might evade detection, avoiding the severe consequences of false identification or detention. Hito Steyerl’s video work How Not to Be Seen offers a conceptual parallel, portraying invisibility as a form of resistance against the pervasive gaze of surveillance. Bias, by granting invisibility, introduces inefficiency and unpredictability into the surveillance apparatus, making it harder for the state to maintain its oppressive control. It creates cracks in the facade of seamless surveillance, offering room for resistance.

While misidentification in surveillance systems can have serious consequences, such as wrongful arrests or unjustified detentions, these concerns take on a different dimension in a system as skewed as the Israeli occupation. Misidentification can lead to tragic errors, but a bias-free system could be far worse, ensuring that every Palestinian, irrespective of their actions or affiliations, is equally surveilled, identified, and controlled. The companies’ use of “anti-bias” as a marketing tool is therefore not about reducing discrimination; it is about making the apparatus of repression more efficient and comprehensive.

The cases of Corsight and Oosto are particularly disturbing because they highlight how companies can co-opt the language of social justice to justify and market tools of repression, much as corporations use greenwashing to appear environmentally friendly while engaging in climate-damaging practices. Traditionally, discussions of algorithmic bias have been about addressing inequalities and advocating for fairer systems. In fields like healthcare, bias-free algorithms help close those gaps by ensuring that people of all races and genders receive accurate diagnoses. But in the context of surveillance, bias-free technology serves a darker purpose: to make the machinery of control as efficient and impartial as possible.

This contradiction between the promise of bias-free technology and the realities of systemic oppression exposes a fundamental truth about these technologies: they cannot be disentangled from the political systems they serve. Bias, traditionally seen as a technical flaw to be corrected, becomes a political challenge in contexts of surveillance and control. It reveals the limits of the rhetoric of neutrality, forcing us to confront the oppressive uses of so-called “unbiased” technologies. Only by analyzing these narratives can we start to break down the control systems they support and envision technologies that promote justice instead of oppression.