Freedom of Face: Nxt Museum Q&A with “Coded Bias” director Shalini Kantayya

Who wants a ‘perfect’ algorithm?

On 23 March 2021, the Nxt Museum hosted a Q&A with filmmaker and activist Shalini Kantayya, director of the investigative documentary “Coded Bias” (2020), which exposes and explores the biases of artificial intelligence technologies that are prevalent yet mostly invisible in daily life. During the event, moderated by filmmaker and Nxt Museum curator Bogomir Doringer, Kantayya argued that in the discussions surrounding the ethics of AI, the solution is not always technological, and it will likely not materialise in the never-ending race towards optimum efficiency. A different approach is needed to tackle the inequalities and biases built into AI.

In her film, Kantayya puts the spotlight on computer scientist and digital activist Joy Buolamwini, whose research into face detection algorithms helped expose their biases on the basis of race and sex. Alongside Buolamwini, multiple scientists and government watchdog groups in the documentary explain how biases are programmed, consciously and unconsciously, into algorithms that perform inaccurately when presented with photographs of women and people of colour.

Shalini Kantayya and Bogomir Doringer discussing algorithmic biases exposed in Kantayya’s film “Coded Bias.”

Kantayya explains that this inaccuracy is not the only issue; it is, of course, a symptom of Silicon Valley’s monopoly and inclusion problem. She notes that employing people with different marginalised experiences makes the biases in Big Tech visible, and provides companies with teams of talented and passionate people able to diversify the data feeding the biased machines. However, she also stresses that a change in hiring strategy is not, on its own, a solution to the continuing threats posed by surveillance and its algorithms.

Doringer and Kantayya discussed the importance and logistics of campaigning for a more humane future of technology. Kantayya stressed that “we cannot miss our humanity” as we continue to develop machines and ideas, and that we need science communicators and artists to work together to facilitate AI literacy: such collaboration offers an engaging, easy-to-digest way of understanding the science behind these technologies. Alongside education, Kantayya identified engaged and organised citizens as another crucial force behind positive change, since they can put pressure on lawmakers to recognise the threats of algorithmic bias.

Buolamwini’s recommended reading on real-world impacts of algorithmic bias – click on the image to read more on her “Gender Shades” page.

Inequalities were amplified in 2020, when a new wave of civil rights activism was sparked by the murder of George Floyd by police in the United States. Kantayya notes that it took both tragedy and civil unrest for some Big Tech companies to acknowledge their use and abuse of facial recognition, and only in 2020, two years after Buolamwini released her first study on the topic. Still, companies like Amazon, Google and Facebook continue to collect data and predict our patterns using biased machines. “We do not want perfect invasive surveillance,” says Kantayya of these companies’ aims to increase the accuracy of their algorithms. “We do not want a perfect algorithm.” Buolamwini agrees, sharing in a talk that it is not just about the accuracy of the system itself; it is about deciding what kinds of systems we want in the first place. If an algorithm functions efficiently in a surveillance state built on oppression based on race, gender, and class, whom does it serve, if not the oppressor?

Towards the end of the Q&A, Shalini Kantayya encouraged viewers to recognise data rights as human rights. She also shared resources for further education and research on the topic (linked below). Participants were invited to imagine new ways these technologies could be used within a human rights framework. Big Tech companies must be held accountable for their active participation in racial profiling and the continued surveillance of citizens. Accessing and sharing educational tools is a good first step individuals can take to start organising and to secure a more humane future for AI.

You can watch “Coded Bias” here for free until the 27th of March 2021.

Links for further research and engagement:

Big Brother Watch UK

Algorithmic Justice League, founded by Joy Buolamwini

Gender Shades

INC Longform on biometric art as a critical practice
