Program of Synthetic Vision/Images of Power (Framer Framed, 27/28.6.24)

SYNTHETIC VISION/IMAGES OF POWER
Truth, Evidence, Labour & Knowledge in the Age of AI

Dates: 27 & 28 June 2024

Venue: Framer Framed | Oranje-Vrijstaatkade 71, 1093 KS Amsterdam

Organisers/Conveners: Francesco Ragazzi + Donatella Della Ratta + Rocco Bellanova + Rebecca Stein

We are thrilled to invite you to the conference Synthetic Vision / Images of Power: Truth, Evidence, Labour & Knowledge in the Age of AI, which explores the transformations induced by Artificial Intelligence (AI) within the interplay of power, knowledge, and images. Presenters will focus in particular on the unprecedented possibilities of these systems to generate synthetic vision, that is, the ability to “see” algorithmically, and synthetic image generation, or the ability to create new images through prompts. These transformations affect three key dimensions of the image. Firstly, in what we term image-truth, we are interested in the truth-value of images within contexts such as authentication or recognition, in domains such as facial, emotion and crowd recognition. But it also concerns the generation of synthetic images to mimic reality in large datasets necessary to train other algorithms. In our second exploration, we focus on image-evidence, specifically addressing the capacity to recognize or generate images for evidentiary purposes. This dimension holds relevance in the realms of journalism, with considerations surrounding information, propaganda, and the identification of ‘fake news,’ as well as within the legal sphere, encompassing both the utilization of images as judicial evidence and their role in event reconstruction. Third, we are interested in the figure of image-labour, i.e., computer vision as a result of a visible and often invisible labour on the image, of programmers, annotators and operators.

Machine learning techniques – currently labelled as a subset of “artificial intelligence” but in fact a sophisticated set of statistical calculations (see Francis Hunger) – have substantially transformed the relations between power, knowledge and images. These transformations manifest in two fundamental dimensions. First, there is the capacity of machines to “comprehend” images, translating visual content into textual form, exemplified in applications like facial, emotion, or crowd recognition. Second, there are machines capable of translating textual prompts into entirely new synthetic images, as seen in technologies like Midjourney, Stable Diffusion, or DALL-E.

These transformations affect three key dimensions of the image, prompting an exploration of various questions arising from the widespread adoption of AI image recognition and generation technologies. We aim to scrutinize how the dimensions of truth, evidence, and labour – particularly in the context of war, conflict, and state violence in various forms – are shifting amidst the growth and spread of synthetic vision and images.

Image-truth

Firstly, in what we term image-truth, we are interested in the truth-value of images as authentication and recognition devices, in domains such as facial, emotion and crowd recognition. But it also concerns the generation of synthetic images to mimic reality in large datasets necessary to train other algorithms. What counts as “true” in synthetic vision and image generation? What skin colours, what emotions, what types of movement count as “suspect”, while others are processed as “normal”? Does the rise of synthetic vision and images change the relation between the visual and the factual/fictional, or is it just that it blurs the lines between the factual and the fictional for a critical mass of people, and not just for ‘the usual suspects’ (experts, journalists, activists, etc.)?

Synthetic vision and image generation redefine the relationship between spectatorship and its object, as what was formerly known as the ‘spectator’ now contributes to making the image itself without even being aware of it. The datasets used to train A.I. image generators are, in fact, often constituted by data collected from what users post daily on their social media feeds, without knowing that those images will be used for such purposes. How do we redefine the question of looking and witnessing in a moment when all lookers and witnesses are in fact makers of images? And what about the feedback loop between images that are generated from pre-existing data or even from synthetic images themselves? Can we speak of a degeneration of images, rather than a generation, since these tools just repurpose and re-circulate what is already available, effectively enfolding and circulating intentions and aesthetics from one context to the other?

Image-evidence

Our second focus is on image-evidence, specifically addressing the capacity to recognize or generate images for evidentiary purposes. This dimension holds relevance in the realms of journalism, with considerations surrounding information, propaganda, and the identification of ‘fake news,’ as well as within the legal sphere, encompassing both the utilization of images as judicial evidence and their role in event reconstruction. But, especially in a historical context in which visual clues and representations are considered crucial across diverse disciplines, it also matters for knowledge production, such as academic or scientific knowledge.

Who are the public(s) of synthetic images, and to whom are they addressed? Are they just empowering the general public with the ‘magical’ skill of image creation, or do they speak to specific communities of ‘specialists’ of the factual (journalists, activists, etc.) in that they call for new ways of approaching the question of truth and reality? The recent case of Amnesty International using an A.I.-generated image for its report on violations of human rights in Colombia – and the polemics this choice raised, taking attention away from the horrific content of the dossier – shows that there is a high risk of falling into the trap of the ‘technological fetish’. What are the stories that get forgotten, overlooked, or neglected within the general hype for A.I. image generators and the seemingly inevitable process of the ‘fading away’ of empirical reality?

At a moment when Midjourney and the like create a sheer volume of synthetic images that the human eye cannot distinguish from an indexical object, an army of ‘reality defender’ apps has invaded the market, offering validation tools to detect A.I.-made images. But is a technological fix what we really need? Or is it something else that we should put at centre stage in the debate around synthetic images? The strategy of ‘technophilia’, successfully used in the work of Forensic Architecture, mobilises the aesthetics of technology as proof/evidence. Can we find other ways to ‘prove’ and ‘show’ that are less oriented toward fetishizing technology and more directed at challenging the modalities under which it operates, so as to re-humanize human rights?

We furthermore contemplate the role of artificial intelligence in shaping the production and presentation of critical research pertaining to images and facilitated by images. In the context of wartime, conflict and/or human rights abuses, the media forensic work of Forensic Architecture (FA) is increasingly turning to AI as a tool in its investigative arsenal for knowledge production. We are interested in the intermingling of the aesthetic and the investigative that is at work, for example, in FA, in Adam Harvey’s VFRAME project, and in the open-source intelligence (OSINT) community more broadly, particularly when their investigations move into museum spaces. We ask: are we witnessing a wholesale rethinking of the forensic image, and therein, of the very notion of the evidentiary as knowledge in the field of war and state violence? And to what political ends? Whom – what communities, legal bodies, human rights institutions – does this regime of knowledge serve? And who is in its blind spots? What new forms of justice and accountability does the synthetic image qua evidentiary form, in this context, make possible, and what are alternative forms of knowledge production through images?

Image-labour

Thirdly, and closely connected to the very point of the ‘degeneration’ of images, synthetic images raise a question of labour – and of a neo-colonial labour, as part of the gig economy. Who makes the datasets, and under what conditions of exploitation? Recently, the trials against Sama and the court case against Meta in Africa have unveiled the extent to which these technological devices rely on exploited, gendered and racialized labour, most likely situated in the non-West (but also among underprivileged communities in the Global North, e.g. refugees and prison labour). Very few people – among them Timnit Gebru, Tarleton Gillespie and Sarah T. Roberts – have shed light on this labour issue, which gets totally overlooked by the hype surrounding A.I. and all the discussions about how to make it more ‘ethical’. But can such technology ever be ethical if built on such processes of exploitation? Is there a different way to build datasets, perhaps in a transparent way, based on consent and volunteer work, as happens with some ‘open A.I.s’?

Interdisciplinarity

The aim of the event is to bring together scholars working with artistic/multimodal methods and artists whose work is approached from a research perspective. The objective of this interdisciplinarity is to explore the complementarity of multiple regimes of knowledge production (conceptual, visual, procedural/code-based) in critically exploring the issues at stake.

Programme

DAY ONE | Thursday 27 June 2024
15:00 Introduction (Francesco Ragazzi)
15:30 Terror Element (Anna Engelhardt & Mark Cinkevich)
16:00 – 18:30 Mapping Synthetic Vision

  • 16:00 Mapping Unknowns and Uncertains in Security Vision (Francesco Ragazzi, Francesco Luzzana, Erica Gargaglione)
  • 16:20 Algorithmic Security Vision: Diagrams of Computer Vision Politics (Ruben van de Ven)
  • 17:00 Synthetic Battlefield in the Time of Dynamic Maps (Svitlana Matviyenko)
  • 17:20 Discussion (chair: Donatella Della Ratta)

18:30 Dinner for speakers

DAY TWO | Friday 28 June 2024
9:30 – 12:00 Synthetic Images in Conflict

  • 09:30 Cutting through algorithmic violence (Rocco Bellanova)
  • 09:50 From AI to paper: de-materializing and re-materializing evidence (video essay) (Kevin B Lee)
  • 10:10 Synthetic Realism: Exploring the Aesthetics and Politics of Generative AI in a time of widespread violence (Donatella Della Ratta)
  • 10:30 AI in Gaza: Image-evidence, digital suspicion, colonial history (Adi Kuntsman and Rebecca Stein)
  • 10:50 Discussion (chair: Francesco Ragazzi)

Afternoon
13:00 – 15:30 Strategies of Resistance

  • 13:00 AI to Subvert and Expose Evidence (Paolo Cirio)
  • 13:20 Synthetic Vision for Subversion (Jonathan Luke Austin & Maevia Griffiths)
  • 13:40 Image-Life (Shintaro Miyazaki)
  • 14:00 Mobile Lies: A Kinopolitics of Emotion Datasets (Cyan Bae)
  • 14:20 Discussion (chair: Rocco Bellanova)

16:00 – 18:30 Strategies of Resistance II

  • 16:00 A Tale of Two Data Centres (Marloes de Valk)
  • 16:20 Permacomputing in the arts (Aymeric Mansoux)
  • 16:40 Algorithmic accountability (Evaline Schot)
  • 17:00 Social media vs. war crimes investigations (Maria Mingo)
  • 17:20 Discussion (chair: Rebecca Stein)

18:30 – 18:45 Concluding remarks (Donatella Della Ratta)
18:45 – 20:00 Closing drinks

Organisation and funding

The public event is co-convened by Francesco Ragazzi, Donatella Della Ratta, Rocco Bellanova and Rebecca Stein. It has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 and Horizon Europe research and innovation programmes – ERC Consolidator Grant SECURITY VISION (grant agreement No 866535) and ERC Starting Grant DATAUNION (grant agreement No 101043213); ReCNTR, Leiden University’s Research Center on Multimodal and Audiovisual Methods in the social sciences, humanities and the arts; and the Duke University Office of Global Affairs. It is also supported by the Institute of Network Cultures.

ABSTRACTS

Terror Element (Anna Engelhardt & Mark Cinkevich)

In this artist talk, Anna Engelhardt and Mark Cinkevich will present the research behind Terror Element, a hybrid documentary about the investigative method and the fallibility of truth. Set in the present day, the film follows Nina, a reclusive forensic expert who spends all her time in the crime laboratory. When she rediscovers a videotape her mother made about a notorious series of explosions in Russia in 1999, she must also confront the uncomfortable truth behind her faith in science. Comprised of CGI and archival footage, this film pieces together the contradictory statements presented by authorities, focusing on an unknown substance found at one of the bomb sites. In the widely televised investigations, forensic laboratories became stage sets in a real-life crime fiction, one that became a pretext for the second invasion of Chechnya. As Nina’s investigation spirals further into uncertainty, her own methods become part of a wider conspiracy.

Mapping Unknowns and Uncertains in Security Vision (Francesco Ragazzi, Francesco Luzzana & Erica Gargaglione)

Where and who is Security Vision? While research typically delves into the intricacies of algorithmic security, focusing on the software, algorithms, and devices constituting the surveillance micro-infrastructure, it often overlooks the broader macro-infrastructure supporting security vision. Over the course of a two-year project, the Security Vision group endeavored to tackle this oversight by meticulously surveying the deployments, institutions, products, and datasets comprising the global Security Vision network. In doing so, our data collection faced a significant characteristic of Security Vision: its penchant for secrecy and opacity. Rather than concealing the gaps in our dataset, we chose to highlight them by explicitly acknowledging unknown and uncertain data. Through the logic of the wiki, we thus shift from the proposition of a colonial practice of data exhaustivity to one of tentative data accumulation and stratification. The challenge then becomes: how do we represent these intricate relationships? We explore this through two visualization attempts – a graph representation and a geographical map – employing a ‘biased mathematical projection.’ This intentional bias involves positioning nodes within a ‘black box’ at the center of the graph or ‘beneath’ the geographical map. The work is thus both an attempt at mapping algorithmic vision and a reflection on possible critical representations of data.

Tracklets: the synthetic present of movement tracking (Ruben van de Ven)

Techniques for generating images are also used to forecast the present. This contribution inquires how bodies are marked as suspicious by novel algorithms that are meant to help security operators flag adversaries in camera surveillance footage. Such algorithms track people in space and forecast their future trajectories; comparing that forecast with the person’s actual movement yields a quantification of their unpredictability. Building on an empirical exploration of this operationalisation of deviant behaviour, I outline two shifts in the governing of the security subject. The first shift is in the temporal logic of movement tracking and forecasting, which relies on a prediction-of-the-now. In the literature on algorithmic surveillance, prediction is most often considered to be about pre-empting acts that will happen in the future (e.g. radicalisation) or about detecting something that has happened in the past (e.g. fraud, shoplifting). As a temporal logic, the prediction of the now brings together past and future, in which the failure to predict comes to serve as a judgement of movement. The second shift is a politics of space in which there is no bystander. The operationalisation of deviance under inquiry here relies on the ‘normal’ passer-by in the creation of its forecasting model. As every step of the passer-by is used to produce a model of predictability, there is no outsider, no public, to these surveillance practices.
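To make the described operationalisation concrete, the following is a minimal illustrative sketch (ours, not from the talk, assuming a simple mean-distance score rather than any specific deployed system) of how “unpredictability” might be quantified as the gap between a forecast trajectory and the one actually observed:

    # Illustrative sketch only: score "unpredictability" as the mean gap between
    # a forecast trajectory and the trajectory that was actually observed.
    import numpy as np

    def unpredictability(observed: np.ndarray, forecast: np.ndarray) -> float:
        # observed, forecast: arrays of shape (T, 2) holding x, y positions over T steps.
        # A larger score would mark the movement as more "deviant" under this
        # hypothetical operationalisation; the metric itself is an assumption.
        return float(np.mean(np.linalg.norm(observed - forecast, axis=1)))

    steps = np.arange(5).reshape(-1, 1).astype(float)
    forecast = np.hstack([steps, np.zeros_like(steps)])   # predicted: walking straight ahead
    observed_straight = forecast.copy()                   # movement matches the forecast
    observed_turning = np.hstack([steps, 0.8 * steps])    # movement veers away from the forecast

    print(unpredictability(observed_straight, forecast))  # 0.0  -> read as "predictable"
    print(unpredictability(observed_turning, forecast))   # > 0  -> flagged as deviant

Under such a scheme, every ordinary passer-by contributes to the forecasting model, which is precisely the point the talk makes about the absence of a bystander.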

Synthetic Battlefield in the Time of Dynamic Maps (Svitlana Matviyenko)

Maps of the war in Ukraine – and we know a good number of such tools, from the ISW interactive map to liveuamap.com, deepstatemap.live and more – are typically not considered synthetic images in the same sense as computer-generated simulations or visualizations, as they are based on real-world geographic data, satellite imagery, intelligence reports, and other sources of information. However, as they promise to provide an accurate representation of the war’s dynamics through mutual augmentation that increases the overall value or quality of an interactive map, these maps constitute a particular type of operational synthetic images. The use of AI technology also impacts the capabilities of interactive maps of war by automating data analysis and providing personalized user experiences. Thus, these maps serve as cognitive tools for relating to and understanding the evolving situation on the ground, including territorial control, troop movements, and key strategic locations. Despite the promise, dynamic maps pose significant challenges in terms of data accuracy and reliability, security concerns, misinterpretation and bias, lack of context, technical limitations, and ethical considerations. This presentation will discuss the synthetic nature of dynamic maps in terms of their truth-value, evidential capacity, and the surplus value of augmented data.

Cutting through algorithmic violence (Rocco Bellanova)

In 2018, artist Mimi Ọnụọha suggested algorithmic violence as the “phrase that addresses newer, often digital and data-driven forms of inequity.” A few years later, algorithmic violence seems to be even more present, and an even more apt way to describe – to a broader public than before – how some digital technologies cut across our societies. In the last couple of years, this is mostly due to algorithmic violence’s embodiment in very material and kinetic forms, notably in various settings of warfare and conflict. These instances of algorithmic violence highlight how far-reaching, deadly and destructive its effects can be. They do so in a way that is seemingly more evident than the instances in which algorithmic violence takes more symbolic, epistemic, structural or slow forms, for example when it facilitates the discrimination of already marginalized or racialized groups, or when it feeds into environmental degradation. This talk starts from the idea that, nowadays, the phrase algorithmic violence may have a strong potential for critique, but that, at the same time, it obliges us to map and think through the diverse relations between algorithms and violence, and thus invites us to consider its conceptual underpinnings. Bringing into conversation the social theory of Etienne Balibar, the feminist technoscience of Lucy Suchman, and the artwork of Lucio Fontana, in this talk I will attempt to cut through algorithmic violence to better seize its potential for critique.

From AI to paper: de-materializing and re-materializing evidence (Kevin B Lee)

This talk presents an excerpt from an upcoming feature film that navigates the afterlives of extremist media produced by the Islamic State (ISIS). Experiments are conducted with generative AI platforms to visualize their retention and exploitation of the legacy visual data of ISIS, which has otherwise been largely removed from the internet. The sequence reflects on how generative AI de-materializes its own archive of extremist media and documentation of war crimes. The sequence then adopts a re-materializing approach to digital media as a way of resituating the possibility of bearing witness amidst a world of dematerialized evidence.

Synthetic Realism: Exploring the Aesthetics and Politics of Generative AI in a time of widespread violence (Donatella Della Ratta)

This presentation explores the profound impact of synthetic images, i.e. images generated by artificial intelligence, on contemporary dynamics of violence. From target identification to the tactical execution of military strategies, AI technologies are reshaping the landscape of warfare. The talk investigates not only the overtly violent applications of AI in today’s conflicts, such as The Gospel and Lavender, which are involved in the genocidal killing of civilians in Gaza, but also the broader implications of synthetic imagery in contexts of conflict and political unrest. Emerging AI applications, which I term ‘speculative’ and ‘world-making’, offer alternative realities. They introduce a new form of realism – ‘synthetic realism’ – where AI-generated visuals, closely resembling reality and drawing their legitimacy from traditional ideas of representation, could pave the way for novel and more sophisticated forms and formats of violence enactment.

AI in Gaza: Image-evidence, digital suspicion, colonial history (Adi Kuntsman and Rebecca Stein)

Much has been made of the ways that Israel’s genocidal attack on Gaza has been enabled by AI technologies. AI has also suffused the war’s associated social media sphere, with critics bemoaning “unprecedented” challenges to the wartime evidentiary visual field. In this paper, we argue that the attendant discourse of AI newness belies the longer history of politicized doubt surrounding digital image-evidence in Israel/Palestine – a condition we have conceptualized as “digital suspicion,” namely, a political structure of feeling targeting visual evidence. This paper begins in the current moment of the “mean image” from Gaza – amidst both proliferating images of Palestinian suffering, and proliferating Israeli state denial of the scale of its lethality – tracing both the longer colonial history of “digital suspicion” and the ways it has been transformed by generative AI.

AI to Subvert and Expose Evidence (Paolo Cirio)

As an artist and activist, Paolo Cirio will showcase some of his interventions that use AI as a counter-tool to attack, disrupt, and provoke institutions. In line with the Tactical Media approach, Cirio proposes policy making and runs campaigns for the implementation of regulations. This practice has led him to theorize an “Aesthetics of Information Ethics” and an “Evidentiary Aesthetics”.

Synthetic Vision for Subversion: Thinking beyond the algorithmic (Jonathan Luke Austin & Maevia Griffiths)

What’s the future of synthetic vision? Can it be imagined beyond the algorithmic? As a subversive mode of worlding reality? To explore these questions, this presentation meditates on three cases of synthetic vision that offer starkly contrasting visions of its normative potentials. First, we introduce a research project being developed with the support of Chinese security agencies. The project involves developing machine vision techniques to detect abuse by authorities in detention and policing settings. It works by reducing human beings – police or detention authorities – to pixelized skeletal forms interpreted by an algorithm, erasing any specificity of their personhood or the ecologies they inhabit, purportedly to improve compliance with legal norms. In a second move, we compare this project to Wassily Kandinsky’s series of line drawings Dance Curves, in which he reduced the form of a dancer he observed to a series of skeletal lines remarkably similar to those of the algorithmic vision of the Chinese security agencies. In the Bauhaus tradition, he sought to minimize – to reduce – the form of dance and the body to abstraction, erasing personhood. In a third move, we explore a filmic representation of the process of becoming a torturer, entitled Grievable/Ungrievable: an abstract representation that meshes the movements of a dancer’s body with her descent into the materiality of an underground facility. Without voice or narrative, a process of becoming violent emerges.

This experimental engagement with film and dance serves as a mode of revealing the embodied essence of violence. We discuss this project as a form of ‘artistic’ synthetic vision in which the moving images produced seek to represent a form of image-truth and image-evidence, but without the restraints of algorithmic or technical mediation. Overall, these three acts of minimizing human form and subjectivity are put into deliberately provocative tension to help think synthetic vision in the full scope of its normative ambiguity and to meditate on alternative political trajectories. Specifically, we draw on these examples to problematize the dominant intuition that what Latour termed ‘irreductionism’ (avoiding such minimization of human form and subjectivity) is always normatively preferable to the reductionist nature of the kind of technical mediation epitomized by synthetic vision technology. Instead, we speculate on the possibility of designing a subversive synthetic vision.

Image-Life. Minor Forms of Living with Computation (Shintaro Miyazaki)

This short contribution extends the triadic analytical framework for grasping synthetic vision and the images of power (image-truth, image-evidence and image-labour) with image-life, and points to the perhaps trivial notion that images are part of life, and that imaging therefore needs articulated relations to becoming and change. Images have never been stable, but recent developments provoke non-visual, time-signal-based, sonic and affective approaches to imaging. Via the notion of operational images proposed by Jussi Parikka, the talk will sketch out multimodal strategies such as counter-dancing and minor algorhythmics, both to avoid being captured by power (= frozen into an image) and to prevent invasive, forced and violent imaging. These ideas are informed by a re-reading of Deleuzo-Guattarian works (most famously A Thousand Plateaus, but also Kafka: Toward a Minor Literature), and the talk will articulate what minor forms of living with computation could possibly mean and how this is all linked with otherwise understandings of imaging and synthetic vision.

Data as Drama: A Kinopolitics of Emotion Datasets (Cyan Bae)

This experimental film explores how technoperformances of human emotions construct and escape the datasets that are used to train emotional artificial intelligence (Emotional AI) technologies in security practices. Automated emotion recognition and deception detection technologies are often based on machine learning. These models, trained with massive amounts of data, often operate like a black box, producing decisions without explanation. When the modus operandi of deception detection and emotion recognition is ungraspable, this research turns attention away from mathematical principles to the architecture and practices of the datasets. In this study, I argue for a shift from a photographic to a cinematic understanding of security governance, or what I call “kinosecurity”, through which the fluidity and movement of affect are governed and resisted. Empirically, reenactments of the data collection processes of three datasets are examined to study how kinosecurity operates through scientification, materialisation, and temporalisation. This multimodal study employs filmed reenactment as a research method to investigate the affective and embodied experiences of the participants, foregrounding the reiterative nature of dramatisation for datafication.

A Tale of Two Data Centres (Marloes de Valk)

This lecture-performance traverses a timeline bringing together local, national and international events to reconstruct the decision-making process that allowed for the arrival of 200 hectares of energy-hungry data centres, 6 metres below sea level, in the north of Noord-Holland, in the midst of the climate crisis.

Algorithmic accountability (Evaline Schot)

This presentation discusses the (lack of) algorithmic accountability in the Netherlands. Specifically, it outlines trends and flaws in governmental use of data analyses and algorithms for fraud detection and predictive policing purposes. Examples from the field of investigative journalism are given as a means to address such issues.

Social media vs. war crimes investigations (Maria Mingo)

Social media platforms use machine learning algorithms to remove illegal or harmful content at speed and at scale. However, conflicts like those in Ethiopia or Gaza have shown the real-world harms that such systems can cause, from failing to remove content that incites violence to over-removing and deleting potential evidence. In response to over-removals, Mnemonic has developed robust infrastructure to archive human rights content from Syria, Ukraine, Yemen, and Sudan for accountability, and it continues to advocate for effective content moderation. This presentation will focus on content identification and archiving challenges, as well as content moderation needs and opportunities.

BIOS

Anna Engelhardt is an alias of a video artist and writer. Her investigative practice follows the traces of material violence, focusing on what could be seen as the ‘ghost’ of information. The toxic information environments Engelhardt deals with stem from structures of occupation and dispossession. She has shown her work at ICA, transmediale, Ars Electronica, Kyiv Biennial, BFI London Film Festival, International Short Film Festival Oberhausen, The Henie Onstad Triennial for Photography and New Media, National Gallery of Art (Lithuania), Aksioma, and V.O Curations. Engelhardt is a core faculty of the MA Information Design at Design Academy Eindhoven and co-editor of “Chimeras: Inventory of Synthetic Cognition” (2022, Onassis Foundation).

Mark Cinkevich is a Belarus-born interdisciplinary researcher and artist. In his practice, he is interested in critical, speculative and experimental aspects of art that operate at the intersection of fact and fiction. His work focuses on the post-Soviet infrastructural and social landscape, through which he explores in particular the concepts of nuclear colonialism, infrastructural colonialism, extractivism and monstrosity. His works have been shown at transmediale in Berlin, steirischer herbst in Graz, the BFI London Film Festival, the National Gallery of Art in Vilnius, Ars Electronica in Linz, and the Aksioma Institute for Contemporary Art in Ljubljana, among others.

When working as a duo, Francesco Luzzana (Kamo) and Erica Gargaglione (Grgr) research, prototype and publish situated digital tools. They often collaborate with larger, loose collectives of cultural workers and researchers as a way to face contemporary issues from multiple perspectives. They share a background in media studies and graphic design, and both graduated from the Master Experimental Publishing (XPUB) programme at the Piet Zwart Institute in Rotterdam. Individually, Francesco’s research revolves around software practices and how they shape the digital landscape, while Erica is interested in small-scale infrastructures that can enable public spaces, and in wiring connections between cultural production and the production of public (health)care.

Ruben van de Ven is a media artist and PhD candidate in Political Science at the Institute of Political Science, Leiden University. His PhD project studies the ethical and political implications of surveillance algorithms that order human gait and gestures. Since graduating from the Master in Media Design programme at the Piet Zwart Institute, he has researched algorithmic politics through media art, computer programming and scholarly work. He has focused on how the human individual becomes both the subject of and input into machine learning processes. Earlier artistic work on the quantification of emotions examined the transformation of humanistic concepts as they are digitised. His work has been presented at both art exhibitions and academic conferences.

Svitlana Matviyenko is an Associate Professor of Critical Media Analysis in the School of Communication and Associate Director of the Digital Democracies Institute. Her research and teaching, informed by science & technology studies and history of science, are focused on information and cyberwar, media and environment, critical infrastructure studies and postcolonial theory. Matviyenko’s current work on nuclear cultures & heritage investigates the practices of nuclear terror, weaponization of pollution and technogenic catastrophes during the Russian war in Ukraine. Matviyenko is a co-editor of two collections, The Imaginary App (MIT Press, 2014) and Lacan and the Posthuman (Palgrave Macmillan, 2018). She is a co-author of Cyberwar and Revolution: Digital Subterfuge in Global Capitalism (Minnesota UP, 2019), a winner of the 2019 book award of the Science Technology and Art in International Relations (STAIR) section of the International Studies Association and of the Canadian Communication Association 2020 Gertrude J. Robinson book prize.

Rocco Bellanova is a Research Professor at the Vrije Universiteit Brussel (interdisciplinary research group Law, Science, Technology & Society-LSTS). He is the Principal Investigator of the ERC Starting Grant project “DATAUNION – The European Data Union: European Security Integration through Database Interoperability.” He is also a member of the editorial board of the interdisciplinary journal Big Data & Society. His work sits at the intersection of politics, law, and science and technology studies. He studies how digital data become pivotal elements in the governing of societies. Rocco’s research focuses on European security practices, the role of data protection therein, and algorithmic violence.

Kevin B. Lee is the Locarno Film Festival Professor for the Future of Cinema and the Audiovisual Arts at Università della Svizzera italiana (USI). Combining filmmaking, media research and criticism, he has produced 400 video essays exploring film and media. His award-winning Transformers: The Premake introduced the “desktop documentary” format and was named one of the best documentaries of 2014 by Sight & Sound. His video essays Reading // Binging // Benning and Once Upon a Screen: Explosive Paradox received the most mentions in the 2017 and 2020 Sight & Sound video essay polls, respectively. His current feature documentary project Afterlives is supported by Le Centre national des arts plastiques (CNAP), the Sundance Institute Art of Nonfiction Grant, the Eurimages Lab Project Award, the German Federal Ministry for Culture and Media (BKM), and Field of Vision. He is leader of the Swiss National Science Foundation research project “The Video Essay: Memories, Ecologies, Bodies.”

Donatella Della Ratta is an ethnographer, writer, performer, and curator specializing in networked media, with a focus on the Arab world. She holds a PhD from the University of Copenhagen and is a former Affiliate of the Berkman Klein Center for Internet and Society at Harvard University. From 2007 until 2013 she managed the Arabic-speaking community for the international organization Creative Commons. In 2012 she co-founded SyriaUntold.com, recipient of the Digital Communities award at Ars Electronica 2014. She has published a wide range of books and essays on networked technologies, among which Shooting a Revolution: Visual Media and Warfare in Syria (Pluto Press, 2018); and co-edited, among others, the collective volume The Aesthetics and Politics of the Online Self: A Savage Journey Into the Heart of Digital Cultures (Palgrave Macmillan, 2021). She is Associate Professor of Communications and Media Studies at John Cabot University, Rome (ddellaratta@johncabot.edu).

Adi Kuntsman is Reader in Digital Politics at Manchester Metropolitan University and author/editor of multiple monographs and edited collections. Adi’s new book, Digital Technologies, Smart Cities and the Environment: In the Ruins of Broken Promises (with Liu Xin, Bristol University Press) is forthcoming in 2024.

Rebecca L. Stein is Professor of Cultural Anthropology at Duke University and author and/or editor of five books in the field of anti-colonial Israel/Palestine studies and visual media. Her most recent book is Screen Shots: State Violence on Camera in Israel and Palestine (Stanford University Press, 2021).

Paolo Cirio is an artist, activist, and media theorist who engages with the social, economic, and cultural issues of contemporary society to address human rights, economic inequality, social justice, and democracy. His interventions and research-based artworks are presented as installations, lectures, artifacts, photos, videos, and public art, both offline and online. Cirio is known for exposing over 200,000 Cayman Islands offshore firms in 2013; hacking Facebook by publishing 1 million users on a dating website in 2011; the theft of 60,000 financial news articles in 2014 and of e-books from Amazon in 2006; defrauding Google in 2005; the obfuscation of 15 million U.S. criminal records in 2016; and exposing over 20,000 patents of technology enabling social manipulation in 2018. More recently, in 2020, he pirated over 100,000 Sotheby’s auction records and attempted to profile 4,000 French police officers with facial recognition. His early works include cyber attacks against NATO and reporting on its military operations since 2001. Cirio has exhibited in international museums and has won prestigious awards. His artworks have been covered by hundreds of media outlets, and he regularly gives public lectures and workshops at leading art festivals and universities worldwide. His art making and writing integrate aesthetics and political theory, media ecology, cultural politics, knowledge economy, jurisprudential studies, financial analysis, and technological scrutiny. In particular, Cirio has researched Internet privacy and surveillance, artificial intelligence, climate change, high finance, intellectual property, and militarism.

Jonathan Luke Austin is Associate Professor of International Relations at the University of Copenhagen. He is also Director of the Future of Advanced Security and Technology Research Hub (FASTER), and Principal Investigator for the Future of Humanitarian Design (HUD) research collective. Austin’s research is transdisciplinary, cutting across (international) political sociology, science and technology studies, international relations, political science, and beyond. Currently, his research is orientated around four main axes: 1) the study of global political violence, 2) the material-aesthetic design of emerging digital, computational, and architectural technologies, 3) the state of critique in social science, and 4) applying social science to problems in global public policy, particularly humanitarianism and its politics.

Maevia Griffiths is a PhD candidate at the University of Copenhagen in Political Sciences and the co-founder and co-director of the Visibility for Transformation Lab (VIFT) – an NGO which fosters social change through creative and innovative transdisciplinary processes. She works both as a social science researcher and as a film director, aiming to bring both disciplines together. With two different Masters, one in Documentary Filmmaking (Goldsmiths University, 2022) and the other in Development Studies (Geneva Graduate Institute, 2021), she mobilises transdisciplinary visual methods for matters of social (in)visibilities, violence, affect, memory and human rights. Her work includes various film projects, such as short and medium format documentaries, art videos and music clips for different bands.

Shintaro Miyazaki is a (junior) professor in “Digital Media and Computation” (with tenure track) at the Faculty of Humanities and Social Sciences, Department of Musicology and Media Studies, Humboldt-Universität zu Berlin. Born in what was then West Berlin to Japanese classical music percussionists and raised in Switzerland during the 1980s and 90s, he has lived in Kreuzberg for some years and belongs to the first wave of scholars with a full disciplinary background in German Media Studies (studying it from student to PhD level and beyond). His most recent monograph is Counter-Dancing Digitality: On Commoning and Computation (meson press, 2023; open access, https://doi.org/10.14619/0481).

Cyan Bae is a PhD candidate in International Political Sociology at the Institute of Political Science, Leiden University, and an award-winning artist-filmmaker based in the Netherlands. Her research examines the role of affective computing in security politics, integrating methods from filmmaking, visual journalism, and graphic design. Cyan holds a Master’s degree in Fine Art and Design from the Non-Linear Narrative programme at the Royal Academy of Art, The Hague (NL), a Bachelor’s degree in Political Science and a Bachelor’s degree in Design from Sungkyunkwan University (KR).

Marloes de Valk is a software artist and writer in the post-despair stage of coping with the threat of global warming and being spied on by the devices surrounding her. Surprised by the obsessive dedication with which we, even post-Snowden, share intimate details about ourselves with an often not too clearly defined group of others, and astounded by the deafening noise we generate while socializing with the technology around us, she is looking to better understand why. https://bleu255.com/~marloes https://damaged.bleu255.com

Aymeric Mansoux (he/him) has been messing around with computers and networks for far too long. He is lector (reader/professor of practice-oriented research) at the Willem de Kooning Academy, Hogeschool Rotterdam. Recent collaborations include What Remains, an 8-bit Nintendo game about whistleblowing and the manipulation of public opinion in relation to the climate crisis; LURK, a server infrastructure and collective to host discussions around net/computational art, culture, and politics; and the Permacomputing wiki where a growing number of contributors document and discuss alternatives to extractive mainstream computation.

Evaline Schot (she/her) is a Netherlands-based investigative journalist specialised in social affairs reporting and the use of data in the public sector. In this role, she has collaborated with Lighthouse Reports and Follow the Money to investigate government use of algorithms and data for surveillance purposes, both in welfare systems and predictive policing. Within this reporting, the emphasis is on human rights, privacy laws, and the impact of surveillance on vulnerable communities. Evaline also works at the regional investigative newsroom Bureau Spotlight, bringing investigative and background reporting.

Maria Mingo has 10 years’ experience in international criminal law, specialising in documentation of atrocity crimes through technology. At Mnemonic, she leads its work on content moderation, engaging with social media platforms, regulators, NGO coalitions, and other stakeholders on over-removals and content preservation. Previously, Maria worked at eyeWitness to Atrocities, collaborating with activists around the world on the use of provenance technology to record photos and videos for accountability.

Conference Website

https://www.recntr.nl/2024/06/synthetic-vision-images-of-power-conference-27-28-june-framer-framed-amsterdam/
