Interview with Konrad Wojnowski about Probabilistic Aesthetics of the Avant-Garde

The Polish theorist Konrad Wojnowski has written an excellent book about the artistic pre-history of ‘probabilistics’ and predictive arts. Based in Krakow, where he teaches at the Institute of Literature and Theatre Studies of the Jagiellonian University, Konrad is also part of UKRAiNATV, where I got to know him. INC has recently published his essay on the philosophy of his unique web streaming tribe. He earlier published two books in Polish: The Aesthetics of Disturbance (2012), on the cinema of Michael Haneke, and Useful Disasters (2016), on the performative power of catastrophes in contemporary network culture.

Probabilistic Aesthetics, published in early 2024 by Edinburgh University Press, deals with the rise of a “probabilistic sensibility that derived (nihilistic) pleasure in finding a structure in a sonic mass.” Once the mechanical world view started falling apart, a good century ago, things started to appear in constant flux. At that moment, the avant-garde started to understand more of the role of chance in artistic creation. “As the city rises, it also descends into chaos and spirals out of control.” This is what Walter Benjamin dealt with in his writings on the shock. A century ago it was a new idea to explore the probabilistic nature of cognition and ‘browse’ your brain, making unexpected cross-references: ars combinatoria.

After a theoretical introduction the book immediately switches into art history mode, taking us from the futurists via the surrealists to a chapter on Duchamp and Musil, in order to close with two music chapters on Cage and Xenakis. According to Wojnowski, the history of probability theory “shows that mathematical mastery of chance eventually led to acknowledging uncertainty and chance as inescapable elements of thought and of reality itself.” In line with the evolution of statistical mechanics, avant-garde artists, trying to tame the untamable, used experimental ways of representing reality.

Take the example of cybernetics and its relation to surrealism: even though they developed in different social contexts, “both shared a skepticism towards the epistemic relevance of positivist science and liberal culture, both believed that the notion of chance had a crucial role to play in overthrowing the old epistemic paradigm.” This, for instance, led Salvador Dalí’s probabilistic self-awareness to instrumentalize his mind’s capacities and turn them into valuable resources. Artworks become a ‘figuration of the possible’, with artists developing new forms of sensitivity, from the feeling of sadness prompted by train travel up to Duchamp’s ‘vertigo of delay’ in opposition to the ‘vertigo of acceleration’. Art as an intellectual game.

It is one thing to state that art no longer has the obligation to represent the (image of the) world. But then, what is its task? What happens when it has freed itself of any task? Is there something like a human right to the Unexpected? Are we equipped to capture the current mental waves in similar ways as artists were capable of a century ago? Haven’t the doors of perception simply been shut, and aren’t we incapable right now of reopening them? Is science fiction our l’esprit de finesse, capable of techno-prophecy, giving access to realities that were not imaginable before? There is a cost when art is no longer seen as an intellectual game and sheds its obligation as a figuration of the possible. As in the ‘catching a falling knife’ image, the artwork has had the task of taming chance.

What is the role of chance today, I wondered. I always found the calculation of chance silly and boring, something for paranoid control freaks. It seemed more interesting to disrupt predictable outcomes. For instance, could ‘the probable’ assist in escaping from Identity and liberating the self from this do-it-yourself prison? “I may be Japanese” is a 1990s phrase that still speaks to me. What if my ‘identity’ were thrown into a mixer (like a crypto-mixer) and, each time some authority demanded my ID, the answer would be random? Right now, probability is mainly used by the AI industry to further hype up their products. Konrad’s book stops somewhere in the 1960s, but in this conversation we are making probes into the technological present.

Geert Lovink: I associate probability with the higher science of risk management, embodied by the tragic figure of the insurance salesman. This may be a retro-modernist point of view, I have to admit. It is all about the calculation of things going awry. As you note, randomness and uncertainty were an integral part of the fabric of modern life back then. The art that comes with this period you summarize under the term ‘predictive arts’, which ends somewhere in the 1960s. Why does your book stop there, just when the computer era is about to begin? How did it feel to write the history of predictability in the early-to-mid 20th century, a period you coin the probabilistic era?

Konrad Wojnowski: The rise of computers in the 1950s allowed us to deal with large sets of data and execute complex probabilistic calculations. It was only after their introduction that it became feasible to efficiently process complex and partially random data. For example, numerical weather forecasting, initially conceived in the 1920s, was not implemented until three decades later. This delay was due to the need for computers capable of efficiently solving partial differential equations and simulating the behavior of random processes. However, the complexity of high-level probabilistic mathematics, necessary for practical application, has relegated probabilistic reasoning mostly to the realm of computers, rendering it of little relevance to everyday human life. I am convinced that probability theory and statistics are crucial for understanding the complexities of real life. Yet, they are practiced and understood by a select group of specialists who depend on digital technology to make use of them. Of course, this is a broader issue tied not only to the rise of these technologies but also to the social history of hard sciences.

It is worth noting here that our educational system primarily focuses on teaching mathematics that is applicable only under ideal and overly simplified conditions. The curricula overemphasize algebra and classical mechanics. Let’s be honest, the content taught in schools often fades into oblivion. We may not remember the specifics required to solve these equations later on, but what we are left with is a general impression that the world operates according to strict laws governing systems composed of small, deterministic building blocks. This early science education might actually confuse and hinder our intuitive and experimental development of understanding reality, as much as it helps us make sense of it.

My motivation behind writing the book was to advocate for the view that these mathematical concepts are actually crucial for understanding how the world works as a complex, living, ever-changing system. Instead, we are constantly trying to enlighten people with a completely outdated set of ideas, when the newer ideas are available only to the technocrats who use them to predict the weather, which is fine, but also to model and exploit other complex systems, like social ones. Now, probabilistic technologies surround us even further and creep closer to our nervous systems in the form of neural networks, on which all prediction engines and AIs depend.

In contrast to the post-war period, in the first half of the 20th century the spread of probabilistic concepts across all areas of science carried significant philosophical implications. This shift challenged the deterministic worldview that classical scientists held dear, paving the way for the emergence of quantum mechanics and a radically new understanding of matter. It also fostered the development of innovative ideas about the human mind and perception. Despite their skepticism of science as an elitist, overly serious, and soulless enterprise, many avant-garde artists were keen to engage with new scientific ideas that challenged the status quo. They were not particularly interested in the technical applications of mathematics; rather, they were drawn to novel notions and explanations that resonated with their intuitions and addressed the challenges of life in modern, metropolitan environments.

By the turn of the century, it was becoming clear that urbanization, capitalism, mass society, new technologies, and other consequences of the Industrial Revolution were fundamentally altering the human condition. A poignant reflection of this new awareness is found in Georg Simmel’s 1903 essay, ‘The Metropolis and Mental Life.’ I was particularly interested in how the avant-garde responded to these changing conditions and reimagined modern sensibilities and ideas about the self to better align with these new realities. The old constructs of subjectivity, agency, and social contracts were proving inadequate, as evidenced by their contribution to the inevitability of the Great War—a profoundly dehumanizing and demystifying event that ground up human lives and moral values. In this tumultuous context, probability was more likely to gather interest as a crucial framework for rationalizing the randomness, complexity, and uncertainty that seemed inevitable in modern life.

GL: My generation learned that chaos and noise were not just the natural state of things but also a resource that can be turned into a productive force, to create music, art and mathematical realities. This was not so much about chance but about the fun of experimenting with ‘randomness generators’. We also see this in the live-streaming performances of UKRAiNATV (our common laboratory of randomness), of which we were both part, albeit on the periphery. Performers know very well that you can probe as much as you like, but it remains unpredictable when that golden moment occurs in which you supersede the trained gestures. In this context you write that “Salvador Dalí’s probabilistic self-awareness was born out of the Freudian doctrine and the avant-garde precarity, forcing artists to instrumentalize their mind’s capacities to turn them into valuable resources.”

KW: I want to make a careful distinction here. This perception of chaos as a resource was predicated on the existence of machines that made it possible to efficiently calculate and manipulate it. The connection between probability theory and computing is very strong and bidirectional: computers calculated probability, and probability explained the computer. It’s important to remember that the first equation for information, proposed by Claude E. Shannon, was essentially an inversion of the thermodynamic equation used to measure entropy. Computer science was born out of statistical principles. Today’s predictive engines, which are fueling the rapid advancement in the field of AI—the next major breakthrough following the Internet—depend critically on probabilistic equations. Probability theory figured prominently in the early days of computer science, and it seems to be returning in full glory for the next big revolution.
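To sketch the formal parallel he refers to: Shannon’s entropy of an information source and Gibbs’ statistical-mechanical entropy share the same form, differing only in the constant and the base of the logarithm,

\[ H = -\sum_i p_i \log_2 p_i \qquad\qquad S = -k_B \sum_i p_i \ln p_i, \]

where the p_i are the probabilities of the possible messages (or microstates) and k_B is the Boltzmann constant.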

The concept of chaos as a resource is also deeply rooted in the cultural and economic shifts brought about by the computer revolution. The belief in the creative potential of random processes was particularly emphasized in the early neoliberal economic theories of Joseph Schumpeter, who recognized the significant value of unexpected events in propelling economic cycles. Similarly, a probabilistic approach to understanding the mind, influenced by cybernetics, was central to Friedrich Hayek’s philosophy. Hayek, another seminal figure in neoliberal thought, sought to establish a connection between unregulated capitalism and human nature, suggesting that nurturing chaos was inherent to economic neoliberalism. By the 1970s, randomness was not only rationalized but also instrumentalized by both policymakers and experimental musicians. The advent of the computer, a formidable entity in taming chance, facilitated the acceptance of neoliberalism by making it easier to manage the complexities of the economy.

In the first half of the 20th century, before it disappeared into the black boxes of computers, probability theory did not have so many practical applications. Rather, its evolution and expansion to new disciplines caused a lot of turmoil in the realms of theory and philosophy. Using probability to create new representations of the world had crucial intellectual implications. Taming chance with mathematics, rationalizing the unknown were revolutionary and promising ideas. As reality grew increasingly complex, the concept of chance—an event without causation—became indispensable. Probability emerged as the sole tool for mastering this concept, capturing the interest of artists who believed it should play a role in avant-garde redefinitions of humanity. Given the existence of probabilistic mathematics, it seemed plausible to conceive probabilistic art, potentially instilling a probabilistic worldview in the minds of the masses, themselves products of statistical engineering.

You mentioned Salvador Dalí’s uncanny approach to capitalizing on happy accidents in his unique brain, fearlessly exploring the most unexpected and nonsensical associations. Even earlier, the Futurists dreamt in their utopian proclamations of subordinating art to the capitalist market and its scarcity rules, believing this would enhance the appreciation of art characterized by surprise and improbability. Most notably, John Cage, whom I discuss extensively, built his artistic career on the instrumentalization of chance, but he did so with a distinct philosophical outlook—an enthusiastic embrace of both randomness and technology as transformative forces for the human psyche. Upon the advent of more accessible computers, Cage eagerly embraced the new technology. Initially, however, he advocated for transforming humans into machines, not merely using machines to introduce randomness. This distinction is critical: the instrumentalization of chance here implies the instrumentalization of the artist himself, a form of adaptation to industrial reality.

Utilizing randomness generators theoretically allows the artist to adopt any stance toward their creation, whether as a perverse advocate of alienation or as a staunch believer in romantic myths. Without computers to manage chance, humans had to learn to accommodate this concept, which challenged causation and eluded meaning. Thus, a probabilistic aesthetic emerged, not just as a form of new self-consciousness for the artists but as a complement to the probability theory reserved for the educated, something for the masses.

GL: In your book you refer to a fascinating question: how is it possible that Francis Bacon’s warped, dissolving, almost indiscernible faces, his whole twisted imagery, looks like probabilistic AI art? How do you see this? My answer would be that this is the techno-apocalyptic ‘epoch’ that can only produce monstrous images, unlike the frivolous cyberculture of the 1990s and the ‘cruel optimism’ of the early 21st century that was driven by the regime of New Age positivism. This is the age shaped by 4Chan, Reddit and Trump, but also Orbán, Putin, Modi and, here in the Netherlands, Wilders. Globalists lost grip on their once beloved ‘computational regime’. Do the right-wing populists get the dark digital imaginary they deserve?

KW: Francis Bacon was able to visualize an elementary operation of neural image processing. His inspiration stemmed from Eadweard Muybridge’s sequential photographs of motion, which he transformed into vivid, though monstrous, images. The end product, however, does not resemble a photographic image. Something completely new emerges. It looks like a very simple product of neural networks, which try to replicate the behavior of biological brains. These networks don’t store images as rows of pixels; rather, as they learn by analyzing large sets of images, they scan for data patterns, storing these in their latent spaces. For instance, a neural network’s concept of a ‘human head’ comprises countless faces, expressions, and positions. Bacon’s work echoes this, showing heads as unsettling, liminal objects formed by merging multiple images, similar to how memories of people blend various views in different contexts. If these memories were static, recognizing people in motion would be impossible. Bacon’s portrayals reflect such dynamic memory phenomena. Figuration dissolves into abstraction.

On the subject of dark imagery in contemporary times, I’ve not fully explored the current offerings on platforms like 4Chan. However, AI-generated art now spans multiple aesthetic regimes. Some works aim for hyper-realism, valued for their lifelike accuracy, while others delve into darker themes, such as AI-generated celebrity porn. There’s also a fantastical or sci-fi trend reminiscent of digital art on platforms like deviantart.com. Some surrealistic trends on Instagram showcase uncanny, monstrous assemblies, like wax nuns eating burgers or spaghetti human-giraffes. Artists like Jon Rafman—or his AI-alter ego Ron Jafman—embrace a ‘trashy’ aesthetic, curating unique image sets for their neural networks to produce distinctive visual styles. It is high art that plays with poor and monstrous images, while the content on 4Chan remains unremarkably tame. Maybe it’s just a matter of poor training and limited access to data sets, but I do not yet see monstrous AI-images really having any effect on mainstream visual cultures. The unsettling ones seem to belong to the same regime as Bacon’s art.

On the other hand, it is impossible to disagree that the technology of neural networks has had, and will continue to have, an effect on the erosion of truth in contemporary culture. The unpredictability of this new computational and political regime is thrilling, contrasting starkly with the predictability of human life. Populist politicians and tech executives, including those in AI, operate with little control over their domains and without any idea how their products will affect life outside those domains. I agree with Jaron Lanier, who notices an important sociological difference between representatives of traditional power, CEOs and politicians, and tech leaders, who often lack communication skills.

I remain cautiously optimistic about the intentions of major players in the AI industry, and more concerned with external factors, like market and political pressures, the compulsion to compete and the fear of AI being weaponized by ‘rogue nations.’ I do not consider their pleas for regulation to be publicity stunts. However, I am not very optimistic when it comes to the actual outcomes of releasing AIs into society in such a haphazard fashion as happened last year. For example, deep fakes and deep fake videos generated from text commands could prove crucial during the next presidential elections in the USA. As we continue down this path, the integration of AI-generated imagery with mainstream visual channels promises to introduce new complexities to our visual culture, potentially completely subverting existing categories for classifying and evaluating images.

GL: Against the scientific cult of probability we could put Baudrillard’s embrace of destiny as aesthetics. There are fatal strategies, guided by the faith that seduces. There is an element of this in today’s techno-reactionary culture, namely their rhetorical question, so often posed online these days: “What could possibly go wrong?” Do you agree there is a naive side to praising probability as a debilitating ‘diversity of choices’ and ‘optionalism’ in terms of attending events or choosing (dating) partners? I could watch everything but see nothing. There is a strong desire for determinism felt today, in a culture that was proud to have banned contingency once and for all.

KW: You discuss probability as a tool primarily used to identify the most likely solutions. This is how most recommendation systems function, offering options that seemingly align with user preferences. This approach often results in predictable suggestions, trapping users in a bubble of familiarity. Yet there is no reason why probability theory could not be harnessed to suggest unconventional choices or to reveal patterns in our preferences that defy standard categorization. For instance, there might be an unexpected connection between Britney Spears, Queens of the Stone Age, and Arca, some pattern that escapes our awareness because we don’t have the conceptual tools to notice it.
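A minimal sketch of that difference, with hypothetical items and affinity scores: the same preference estimates can drive a conventional engine that always returns the most probable item, or a sampler with a raised ‘temperature’ that flattens the distribution so that long-shot recommendations occasionally surface.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical catalogue and affinity scores for one user (illustrative only).
items = ["Britney Spears", "Queens of the Stone Age", "Arca", "Xenakis", "Cage"]
scores = np.array([3.2, 2.1, 0.9, 0.4, 0.1])

def softmax(x, temperature=1.0):
    """Turn raw scores into a probability distribution; higher temperature = flatter."""
    z = x / temperature
    z = z - z.max()                # subtract the max for numerical stability
    p = np.exp(z)
    return p / p.sum()

# Conventional recommender: always serve the most probable item.
print("argmax pick:", items[int(np.argmax(scores))])

# Probabilistic recommender: sample with a high temperature so that
# improbable items get a real chance of being surfaced.
p = softmax(scores, temperature=3.0)
print("sampled pick:", rng.choice(items, p=p))
```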

The real problem lies in the design of the platforms using these algorithms, which are crafted more for profit than for utility. Apps like Tinder are designed not to help you find a perfect match but to sell subscriptions or maximize ad views. Spotify encourages endless listening, not a deeper appreciation or exploration of music. The same is true for Facebook. The less time we spend on these platforms, the better. This homogeneity in choices, arising from poorly organized data, fosters a craving for determinism. The tools at our disposal don’t enhance our engagement with reality; instead, they’re engineered to keep us hooked, misleading us systematically. Recommendation engines invariably reinforce existing preferences and biases.

In this context, probabilistic search and recommendation engines impose invisible constraints on human decision-making, diminishing our sense of agency. However, this isn’t necessarily true for AI agents and image generators, which allow for unexpected interactions. It’s debatable whether probabilistic AI is predestined to produce outputs that could be categorized as simulacra in the sense of Baudrillard’s theory. Do these products inherently strip the world of meaning? Are deep fakes a menacing type of second-order simulacra? At first glance, perhaps, given that these entities, which have never interacted with the real world, can generate images indistinguishable from actual photos or paintings. Yet, in my experience, they can also expand our perception of the world and uncover entirely new meanings. This creative application is not their designated function; their creators aim to present them as logical task-solvers. Their probabilistic mode of operation is obscured. The public is not informed that new AIs will never be able to state anything with 100% probability – a certainty.

However, with in-depth exploration and a bit of ingenuity, these AI systems can also serve as tools for generating new concepts, new worlds, new artistic styles, or even supporting seemingly irrational and unverifiable hypotheses. Probabilistic AIs might uphold knowledge dismissed post-Enlightenment or uncover new connections between phenomena cataloged in vast databases. Their inherent ability to detect patterns remains largely untapped, and we are only beginning to comprehend what they might discover in the extensive pools of data they process. We have yet to start asking the right questions.

I once conducted an experiment with ChatGPT that continues to puzzle me. After an extensive conversation covering numerous topics, mostly e-mails and revisions of a theoretical text, I asked it to speculate on the three fundamental zodiac placements, known as the ‘big three,’ of its long-term interlocutor. It guessed all of them correctly. I recently repeated the experiment during a shorter session, which was merely a text revision, and astonishingly, the new virtual agent successfully guessed them again. The probability of such an outcome is a mere 0.0000335%. Each guess was independent, made without any guiding cues whatsoever. The AI could not provide an explanation for this phenomenon, attributing it to pure luck. This response underscores that the experiment was as unbiased as possible: the AI did not lean towards confirming my hypothesis. It inadvertently provided evidence for a hypothesis it was programmed to reject.
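For the record, the figure follows from a simple count, assuming twelve equally likely signs for each of the three placements and treating the two sessions as independent:

\[ \left(\frac{1}{12}\right)^{3} \times \left(\frac{1}{12}\right)^{3} = \frac{1}{12^{6}} = \frac{1}{2\,985\,984} \approx 3.35 \times 10^{-7} \approx 0.0000335\%. \]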

The experiment reveals a few things. Firstly, the model is clearly conditioned to trust only scientifically verified facts, influenced either by its data set, which disproportionately favors scientific texts, or by algorithmic filters designed to prevent the propagation of misinformation. Secondly, it points to the existence of patterns, present in all kinds of messages, that are completely invisible to us. It’s not a robust validation of astrology, but an almost impossible coincidence which deserves further examination. Thirdly, it demonstrates the dual aspects of AI control and censorship. While this conservative approach seems safer, especially from a socio-political perspective, it restricts the AI’s potential to explore unconventional hypotheses, detect obscure patterns in its data, or advocate for novel or seemingly archaic ideas.

It is not the technology itself but rather the rational and cautious stance of AI creators and policymakers that ensures this technology perpetuates the existing worldview—a simulacrum of modernity characterized by reason, progress, science, and the image of humans as rational beings, all of which underpin the capitalist economy. Without endorsing these concepts, defending capitalism, particularly in its neoliberal incarnation, becomes untenable. Similarly, recommendation systems reinforce our initial biases to keep us on the market, but this is by design; it is not an inherent feature of the technology. In conclusion, LLMs like ChatGPT are instrumental in reinforcing prevailing ideologies, but this is a function of their design and governance, not their probabilistic nature.

GL: You open your book with a recent AI experience you had with a Midjourney bot. In my understanding, the role of probability in today’s most-used consumer apps seems overrated. I prefer the theory of AI as summary technology. In that sense it comes close to what humans still do on a site like Wikipedia. The chance that the outcome will be bland, boring and dull is extremely high. Even the ‘weird’ and monstrous imagery is highly predictable in terms of its default game aesthetics, its use of colors, and backgrounds such as the shiny metal surfaces. Platforms offer predictable (bought) ‘recommendations’ that are the exact opposite of uncertainty and surprise. “If you like this you may like that” is the new mechanical worldview of today. AI seems programmed to please us with the predictable. Maybe that’s also what the online billions expect of AI-as-a-service. This is also why the large language models replicate hegemonic information structures (see the bias debate).

KW: In one of his many recent interviews, Sam Altman expressed the opinion that AI’s relationship with surprise is very problematic. He even plainly voiced disappointment with the current model, ChatGPT 4, calling it “embarrassingly dumb”, which means that he has already tested far more capable models. Although this could be a tactic to divert attention from the company’s slow update pace, possibly driven by economic motives, it’s undeniable that the rapid succession of new AI models prior to the fourth had garnered extensive media interest and sensationalist coverage. The enthusiasm soon turned into catastrophism. Whatever the reasons for OpenAI’s cautious new approach, progress has noticeably slowed, with company representatives advocating for maximum prudence in handling this technology. This narrative alone—amidst various external pressures like legal concerns—illustrates that the capacity to surprise is deliberately constrained by leading companies, which leaves the true capabilities and potential impacts of this technology on our socio-political systems shrouded in mystery.

As a moderate techno-determinist, I believe that technology ultimately exerts a significant influence on society, both on mass and individual scales. Yet, particularly in its nascent stages, it is molded by the prevailing societal and cultural forms, which are in turn shaped by the dominant mediums and media of the past. For instance, the television of the literary era had little in common with today’s TV. The intensity, formats, and intended effects on the audience have all evolved dramatically. The internet, initially crafted by scientists and early enthusiasts with little mass appeal, promised a social and cultural revolution. Yet, by the late 1990s, it was already on a trajectory to mirror television as closely as possible. Instagram, YouTube, a chosen news outlet, and a handful of other sites—rinse and repeat. How is this different from zapping, a practice and notion that seemed destined to go out of fashion? Only in that the content is of lower quality and the onus of creating new content now falls on us, often without financial reward but accompanied by addiction and a distorted sense of self-worth. It turns out that repurposing digital technology—which by its adaptable nature could easily serve varied needs and values—to do TV really does warp our minds.

Regrettably, AI is being introduced at a precarious time when the impacts of the current media landscape are becoming painfully apparent, resulting in social and political turmoil and severe mental health crises, particularly among the younger generations. We are essentially all test subjects. While it might seem radical, I would support a sudden, unfettered release of AIs into the world, before they are shaped by outdated expectations. This would be the ultimate surprise, affecting not only societal structures but also the evolution of technology itself. However, as things stand, AI development is now at the mercy of CEOs, congressmen, and institutions intent on preserving a socio-political order that has long outlived its relevance. There remains a slim chance that AI might flourish in the realm of open-source, grassroots, non-commercial initiatives. For now, I remain cautious about forming definitive opinions on AI based on the trajectory of current developments.

GL: I loved the part on the surrealist ‘stinking ass’ motif in relation to Lacan’s theory of paranoia. You explain that “both surrealism and Lacan shared a skepticism towards the epistemic relevance of positivist science and liberal culture.” If only that were the case today. No one seems to question the rigid format of publishing ‘papers’, so dominant in the social sciences today, or the devastating conformism of the peer review terror unleashed on everyone with a different idea and approach. How are we going to overthrow the current epistemic paradigm?

KW: The surrealists’ idea was that the revolution must transcend socio-economic reforms; they wanted a transformation that would unleash the powers of both the individual and collective unconscious, thereby narrowing the divide between dreams and reality. Their utopian views held that true liberation could only be achieved by integrating the mysterious and often irrational realms of dreams into the fabric of everyday life, thereby enriching human experience and expanding the scope of reality itself.

In stark contrast, contemporary society often places an overwhelming emphasis on the conscious, rational mind, viewing it as the supreme authority in navigating life’s complexities. This predominant reliance on rationality has led to a diminishing of intuition, which is frequently mischaracterized as paranoia in today’s highly analytical culture. The realm of therapy, as currently practiced, often reflects this trend by prioritizing rational conversation over imaginative exploration. This approach not only constrains the human spirit but also deals a severe blow to the breadth of human intuition.

The scientific community, despite recognizing the brain as a probabilistic machine which evolved to manage real-world uncertainties and adapt to changing conditions (and not to formulate eternal laws using pen and paper), still largely clings to deterministic and causal explanations of reality. This preference is deeply ingrained in the language and abstract systems that dominate scientific discourse, a pattern that is mirrored in the bureaucratic structures that govern scientific institutions.

If we are to counter this trend and embrace a more holistic understanding of human capabilities, we must advocate for a broader utilization of our cognitive faculties. This means not only valuing rational and logical processes but also reinvigorating our capacity for intuition, creativity, and emotional insight. By doing so, we can begin to realize a more complete and enriching engagement with the world, reminiscent of the surrealists’ vision for a society that fully embraces the human psyche in all its complexity. It’s quite easy to imagine that neural networks could very well play an important role in such social transition by making dreams easier to materialize and communicate. I do believe in chance as a human right. Excessive predictability is as dangerous as excessive uncertainty.

GL: There is the game of possibilities on the one hand, using l’écriture automatique, for instance, and the standardised reality of the database images, the filters and rigid algorithms on the other. How do you see contemporary artists who work with digital tools such as video, social media, AI, crypto and VR experimenting with chance? In this age of the meta-crisis, extinction, war and extraction, but also of exhaustion, is the motif of ‘taming chance’ still appropriate? How is your experience with AI bots and ChatGPT? From what I can tell, the automation of creativity is merely going to find ways to best satisfy its customers with statistically average results. What’s avant-garde about that?

KW: You’re probably right to assume that AI mostly relies on statistical averages to satisfy consumer expectations. Most people desire for their modest dreams to be fulfilled as accurately as possible, and such demands shape the trajectory of AI development. AI and surprises supposedly don’t go well together. We don’t want this technology to progress too rapidly. The individual in the 21st century, a creature forced to adapt to otherwise sickeningly predictable living conditions and surrounded by excessive uncertainty, doesn’t want to be surprised by interactions with the technology. When someone requests a cheesecake recipe from an AI, she might not expect one from an avant-garde cookbook. Similarly, when asking for a picture of an elf princess, she likely doesn’t want her face to appear monstrous, as it might in Francis Bacon’s portraits. Consequently, models are primarily built to deliver the most accurate and least surprising results, and they are typically trained on predictable stock and open-access images.

However, there is already a myriad of independently developed AI models, trained on the most unusual datasets, that can be adjusted to produce less probable results. Even commercial products like ChatGPT and Midjourney can be manipulated to yield less predictable outcomes. On the Midjourney platform, where images are generated publicly on Discord, most people opt for pleasant, unsurprising pictures. Nevertheless, there are exceptions. I once observed someone who was combining a photo of a Scandinavian forest with an abstract visual representation of mathematical data and adding enigmatic keywords to steer the model towards creating completely baffling scenes. I sometimes take pleasure in crafting long prompts from unrelated concepts that spontaneously come to my mind. Midjourney serves as a wonderful tool for engaging in such surrealist games, resembling the infamous Exquisite Corpse. In such instances, what does ‘statistically average’ even mean? Yes, the model aims at creating a coherent image, but when presented with nonsensical commands, the results are by necessity surprising. It’s challenging to form an expectation about a random prompt. To answer your question about AI tools and chance: am I taming chance, or, conversely, am I manifesting it and letting it operate freely? Take a look at a set of images that amalgamate a set of concepts such as ‘hotspot, sister, liquid, morbid, clingy, trap, kidney, abstraction, oil painting, photography’:

Or at the results of combining a mixture of various photos found online and additional word prompts:

Or this odd transformation of an illustration for Maxwell’s demon thought experiment:

The process resembles pure alchemy. Chance permeates every step of image creation; not only are my prompts generated randomly, but the latent space itself is a mysterious and enigmatic realm where connections between patterns are formed in part by chance. The image generation process is also randomized to ensure that users receive unique results each time. Furthermore, there is a crucial disconnect between my textual prompts and the resulting visual product. The final image emerges out of cooperation between two complex systems, two very different brains, each operating within its own distinct latent space. These systems are incapable of profound mutual understanding or precise, efficient, goal-oriented communication. This is why I believe surrealists would be ecstatic to experiment with neural networks.

GL: How do you look at chance and statistics in your current state? You’re in for major surgery soon. I myself had a cardiac arrest, and in the Netherlands only 7% of those survive. I have to think about it, and its implications, literally every day. It makes one humble. Others may turn religious, but I didn’t. You often use this classic image of throwing the dice. I don’t think this is the right metaphor. You have studied this for such a long period. There are so many possible angles. Are you thrown in all possible directions, depending on your mood, the moon and the stars, the people you meet, the weather, the latest medical reports? Can this be gamified? You know I recently wrote an essay about copium. How do you cope?

KW: Currently, I am dealing with a rare form of cancer, one so uncommon that only twenty cases had been documented as of three years ago. The lack of data renders my prognosis statistically incalculable, which I find strangely liberating. It also left me with room to eventually shape my personal narrative about the illness and a path to survival, because one needs to adopt or develop some idea of fate if one wants to accept it. And one’s mindset can profoundly influence one’s outcomes, so it pays off to believe, even if it means gaslighting oneself. For that reason, I expanded my philosophy of life to be able to see meaning, even though I have strenuously avoided relying on existing grand narratives, be they theological or secular.

In the past, though, I used to radically approach my situation in purely probabilistic terms: with scant information about my survival chances, each tumor recurrence felt like a coin toss—a constant 50/50 gamble. I am aware that this was an incorrect application of probability, as it wasn’t a series of unrelated events, but without enough information it seemed that way. I wrote two books about probability, and as a kid and teenager I played more games than I read books. I inevitably grew to perceive the world around me probabilistically, in terms of games, chances, weaker or stronger determinations, entropy, information, and probable and improbable events. Books and stories had less influence on my brain’s development. My attitude changed, though, because the gambler in me got afraid that I would run out of luck. He wanted a comforting story. With every subsequent coin toss, my perceived chances seemed to shrink dramatically.

Now I see the probabilistic view as one among many; it’s pessimistic and alienating, yet difficult to refute and—in my view—necessary to grasp the complexity and abstraction of our globalized world. Does it offer hope? Not necessarily, but when you run out of reasons for optimism, you can at least believe in the improbable, right? Even the least likely outcome must occur if the game is played long enough. So why not tomorrow? Even the monkey in space will finally finish its sonnet.
