Ben Grosser-Geert Lovink Dialogue on Media Art in the Age of Platform Capitalism

US new media artist Ben Grosser and I met at the 2013 Unlike Us #3 Institute of Network Cultures event in Amsterdam where he presented his Demetricator, a free web browser extension that hides all the metrics on Facebook. I have followed his work ever since. We got in contact again in 2019 when he premiered his video art piece Order of Magnitude.[1] The cut-up work foregrounds Mark Zuckerberg’s obsession with growth. Instead of taking the traditional critical approach, Ben Grosser magnifies particular words that return in each and every one of Zuckerberg’s sentences: more, millions, billions, trillions. Covering the earliest days of Facebook in 2004 up through Zuckerberg’s compelled appearances before the US Congress in 2018, Grosser viewed every one of these recorded appearances and used them to build a supercut drawn from three of Mark’s most favoured utterances: more, grow, and every metric such as two million or one billion. Inside the exploding galaxy of Facebook there are no limits to growth. After a few minutes the viewer gets exhausted and is ready to swipe the video away, stand up and walk out: the exact opposite response to what we experience when we’re on Facebook, Instagram, or WhatsApp. The emptiness of the guy is suffocating. Well done, Ben.

Geert Lovink: Let’s start with the original Unlike Us approach, the network and series of events in which we first got to know each other. We kicked it off in 2011, combining the critique of social media platforms with the search for alternatives. How do you see the visual arts & the Facebook Question nowadays? Young artists especially depend largely on Instagram. There seems to be no counter-culture that resists the social media platforms. The avant-garde is dominated by an unprecedented form of uncritical uptake, a mass subjugation to the platform unlike anything we have experienced before. What’s your take on this?

Ben Grosser: This is a problem based on a combination of platform dominance, context collapse, metrics-focused interaction design, algorithmic feeds, and the homogenizing aesthetics of social media interfaces. We use platforms like Instagram or Facebook for so many different aspects of life these days (info access, work interactions, entertainment, family communication, network building, etc.) that it’s hard to escape them—and harder still to imagine life without them. Their interface designs have fully conditioned users to focus on like/follower/etc. counts as primary indicators of success or failure, rather than, say, narrative feedback via comments or discussions generated outside the platform.[3] The (in)visibility of one’s latest post to one’s network (of “friends”) is determined by an opaque algorithm and thus requires repeated experiments that are challenging to evaluate.[4] All of this happens within a visual interface design that treats user contributions as chunkable content to fill pre-configured slots in a homogenizing layout.[5]

What’s an artist supposed to do? Go where all the people are or where the people aren’t? Read metric “success” as a guide for what to post next, or risk posting content that never gains reaction (and thus visibility)? Succumb to the limits of Instagram’s or Facebook’s media types, post sizes, page layout, etc., or post their content on a personal blog that nobody visits? In other words, today’s emerging visual artists have grown up in a world where the designs of these platforms have been setting the “conditions of possibility”[6] in many facets of life for the last fifteen years. It doesn’t occur to most of them to resist. Social media is the proverbial water these artists/fish swim in every day.[7] They’ve spent their whole lives watching “success” get “made” on the platforms, and they try to follow a similar path, to emulate methods and materials used by those who’ve metrically excelled before them.

So unsurprisingly it’s hard to find many artists—avant-garde or not—working outside of dominant social media. However, in my view, some forms of resistance are happening on the platforms, enacted from an inside position by users of the systems themselves. I do this with my own work (e.g., Facebook Demetricator,[8] Go Rando,[9] ScareMail,[10] etc.) using an artistic method I call “software recomposition,” or the treating of existing websites and other software systems not as fixed spaces of consumption and prescribed interaction but instead as fluid spaces of manipulation and experimentation. In other words, I write software to investigate the cultural effects of software. These software artworks are designed to get in between the user and the system, allowing everyday users the opportunity to re-evaluate their own experience of the platforms and to see how platform designs change who they are and what they do.
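To give a sense of how small such an intervention can be: a demetricator-style content script can be sketched in a couple dozen lines. The selectors below are hypothetical placeholders rather than anything Demetricator actually targets (platform markup is obfuscated and changes constantly), so treat this as an illustration of the recomposition technique, not the project’s source:

```typescript
// demetricator-sketch.ts: a minimal browser-extension content script in the
// spirit of software recomposition. It sits between user and page, blanking
// metric text (like counts, follower counts) before the user reads it.
// NOTE: these selectors are hypothetical placeholders; real platform markup
// is obfuscated and changes constantly, so a working tool must track it.

const METRIC_SELECTORS = [
  '[data-testid="like-count"]',  // hypothetical: like tallies
  '[data-testid="share-count"]', // hypothetical: share tallies
  '.follower-count',             // hypothetical: follower tallies
];

// Blank each metric's text so the layout survives but the number is gone.
function demetricate(root: ParentNode): void {
  for (const selector of METRIC_SELECTORS) {
    root.querySelectorAll<HTMLElement>(selector).forEach((el) => {
      if (el.dataset.demetricated) return; // skip already-processed nodes
      el.dataset.demetricated = 'true';
      el.textContent = ''; // the number disappears; the box stays
    });
  }
}

// Feeds are rebuilt continuously by the platform's own scripts, so run once
// now and then again on every DOM mutation.
demetricate(document);
new MutationObserver(() => demetricate(document)).observe(document.body, {
  childList: true,
  subtree: true,
});
```

The MutationObserver is the load-bearing part of the gesture: because the feed is endlessly re-rendered by the platform’s own code, a single pass over the page would be undone the moment the user scrolls.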

Part of my intention here is to help users develop a critical position towards future platform additions and changes, to nudge them towards an analytical stance where they reflexively examine what a platform wants from them so they can give back something else entirely. I see this as a necessary first step in pushing users towards alternatives—we need people to begin to see (feel?) the platforms for what they are, to understand who most benefits from a site like Facebook and who is made most vulnerable. Only after this transformation—one made on a personal level through interventionist experiments that provoke disorientation and reconsideration—can we expect any mass of users to embrace anti-platform alternatives.

Geert, what do you think about this? Can an artistic avant-garde be avant-garde at all—let alone thrive—if some of its critical activity is enacted within the systems it concerns itself with? I would argue, given the monopolistic position of big tech’s current efforts, that any assemblage of an alternative commons is going to require action both outside and inside the dominant system of the day. That we have to use these systems against themselves—in ways that reveal their engineering of the user—as a necessary parallel effort alongside a building of alternatives. In other words, I’m suggesting we use the platforms to show users how platforms prescribe action and behavior, as an opening/invitation to help them consider your concept of “data preventionism” within, and an eventual abandonment of, the walled garden as they move towards adoption of an alternative open-source public commons.

GL: From a critical European perspective it remains necessary—and entirely possible—to develop and articulate a critique of platform capitalism outside of the dominant platforms. Many here do not believe in immanent critique and half-baked reforms. Remember, court cases and fines are useless gestures against these companies. The least we should do is break up Facebook and Google, cripple Amazon in terms of its size, and close down both Uber and Airbnb (a basket case anyway; many cities have already done this or dream of such a policy). Closing down venture capital firms would be the best next step if we want to go to the core. At the same time we should develop a notion of what belongs to the markets, and what should be part of the commons and then become a public infrastructure. Platform capitalism inherently leads to monopolies that further speed up (global) inequality.

I might be wrong, but I do not see many inside the US rebelling against the platform logic. You adequately describe mass dependency, and this is not all that different in other parts of the world. The Bernie Sanders campaign was disappointing in this sense: it lacked an alternative media strategy. Sanders criticized companies but clearly has no clue how to incorporate and work with alternatives. Perhaps guerrilla tactics inside these platforms are possible, but I doubt this is going to happen at the level of images and postings. What would civil disobedience against Zuckerberg look like on Facebook? Tell me, Ben. I am not aware of it. Why is the dissent so invisible? We only see artists, scholars and political groups pushing their own issues, like everyone else. Subversive content that the powers that be do not like is being filtered and censored. This is why they employ tens of thousands of cheap moderators worldwide. It is in itself interesting to note that their so-called superior ‘automated’ algorithms and flagging systems are constantly failing. Instead of the ‘representation’ of politically correct content, I would propose far more the tactics of hackers, pranksters and whistleblowers. We need more people like Chelsea Manning, Christopher Wylie and Edward Snowden. We need to discuss the failure of WikiLeaks as a celebrity-centric drama and support investigative journalism and radical indy research. Where are our think-tanks? Or, to be more precise, what’s our alternative model to the policy-centred approach? We need more meme factories.

BG: I’m fully on board with the actions of Snowden, etc. Their sacrifices have been essential acts of disobedience against the state. Yet if that level of sacrifice is the bar by which we judge all such past/current/future actions, there won’t be many willing to sign up! We need an array of tactics across a wide spectrum—not only those that are high-risk/high-reward but also those that antagonize with low risk, as well as those that (try to) skirt just under the line of the highly visible or openly antagonistic. Not only can the latter move users (to become critical, to see the systems for what they are, to abandon the platforms, etc.), but they can also serve as a barometer of the ever-shifting legal landscape we’re up against.

For example, in the summer of 2016, Facebook made a bogus legal claim to get Demetricator kicked off the Google Chrome web store. This was their first attempt to openly thwart my efforts, and it came without warning after four years of releases/writings/talks about the project. In reaction, I was fortunate to enlist pro bono representation from the Electronic Frontier Foundation (EFF), and we managed to convince Google to reinstate it.[11] Fast forward to 2019 and the tech companies—by then under constant fire from all sides—started co-opting the project. Twitter and Instagram began talking about the negative aspects of visible metrics (as if it was an epiphany!), and Instagram’s CEO spoke in language that looked eerily similar to words I wrote in 2012.[12] Months later, I was attacked by a bogus legal claim again, this time by Instagram. Unable this time to attract new attention from the EFF, I have had to leave my Instagram Demetricator blocked.

My larger point is that this kind of skirt-the-line tactic—one that finds and probes the relevant boundaries—is an essential part of shifting away from the platforms. We need think tanks and meme factories and all the rest, to be sure. But those in the think tanks will need artists to push against the corporations from all sides so they know which tactics to craft and try next. Some of these future tactics will be hardcore acts of full-on visible resistance (à la Snowden), but others will need to be less visible acts of infiltration or subversion. Only a collection of acts across the spectrum can move things forward. We need to build alternatives, but we also have to find ways to convince two billion platform users to try out those alternatives. From my perspective, helping users develop their own critical perspective is an essential part of that process.

GL: Ben, what fascinates me is how you look back at ‘post-internet art’ and the ‘post-digital aesthetics’ movement.

BG: Looking back at 2012 from now (in 2020), I see the new aesthetic and post-digital as a moment when a broader consciousness was emerging around the cultural effects of digital technology. Of course, there were a number of theorists and artists who were investigating such effects well before 2012 (yourself very much included here), but James Bridle’s Tumblr blog[13] definitely helped animate attention to the moment, sparking a wider and visually-focused investigation of how contemporary experience was being changed by computation.

My own artistic efforts are also wrapped up in that time. Some of my works were featured on Bridle’s blog,[14] and are now cited in histories of the new aesthetic.[15] For example, works like Interactive Robotic Painting Machine[16] or Computers Watching Movies[17]—which investigate the nature of machine vision—are not just about how machines see but also about what they might want to see and how they could judge what they had seen, on their own terms. For me this activity was driven by my interest in computational agency (can/does a machine have an aesthetic sense? does it have its own preferences?). But I also used, and continue to use, examinations of machine vision as a way of understanding human vision—by looking at the difference between how machines see and how humans see, we can start to tease out how both are machinic. These projects also coincide in time with my work on Facebook Demetricator, which became not just a critical software project to hide quantifications on the Facebook interface but also led me to theorize about metrics as agent, asking the question: what do metrics want? (spoiler alert: more).

(Ben Grosser’s Demetricator in action)

The new aesthetic emerged after software studies, which for me has been the more enticing theoretical and practice-based frame. Regardless, these concepts share much in common—namely that software’s designs have become determinative of contemporary experience. Though I guess I was always more interested in the wider software studies view that stretched beyond the visual into how computational logics were informing/infecting the culture of everything, digital and non-digital, visual and non-visual alike.

In response to your asking me about the new aesthetic/post-digital moment, I started re-reading Contreras-Koterbay and Mirocha’s book[18]—and am now seeing intriguing parallels with texts I’m tracking lately, such as similarities between their discussion of the post-digital as a moment when the digital becomes increasingly hidden and Andersen and Pold’s theories of The Metainterface.[19] There’s also the inherent messiness of defining “post-digital.” Florian Cramer took this on, and I tend to agree with his argument that, like contemporary post-colonialism, post-digital is a “mutation” of the digital into an everyday condition, supporting new power structures that exert profound—if less obvious—impacts on social, cultural, and political life.[20]

Another of Cramer’s definitions—that “post-digital = disenchantment of the digital”—is less compelling for me because the culture of more means we always have some new whiz-bang tech ready to draw our attention. Thinking through this led me to recall Adorno, particularly his “regression of listening” essay,[21] and to contemplate his critique of popular music’s 20th-century obsession with lush moments of timbral color, an obsession that discouraged consideration of the broader musical whole. The software equivalents—glitzy animations, endless scroll, enticing and sometimes tactile UI interactions—similarly keep users focused on lush momentary sensation in ways that distract them from the broader picture. I have yet to articulate more applications of Adorno within software studies, but I can imagine a number of potentially fruitful analogies with metainterface realism,[22] cloud computing, autonomous systems, etc.

And then there’s post-internet art, from Marisa Olson’s initial conceptions to critiques such as those by Brian Droitcour.[23] But I don’t think about ‘post-internet’ much in daily practice. Regardless, I do run into this kind of art, made as much for the online networked camera as it is for the viewer in the gallery. Seeing it makes me think about Nathan Jurgenson’s ideas in The Social Photo, where he talks about the way we have been reconfigured to walk around the planet imagining what we see as a future Instagram post.[24] My version of this is how we continually evaluate our current experience in terms of its future metric potential on Facebook—and that this process changes what we do and/or seek out in daily life. Going back to your earlier question re artists on Instagram, it seems many artists are now unable to evaluate their efforts within the studio without considering how what they’ve made will look/function in online spaces. In other words, social media (and its engineering of the user) has broadly infiltrated the studio, turning the artist into a user. No wonder it’s harder to find a critical avant-garde in today’s social media age.

Geert, where do you align (if at all) and/or where do you draw inspiration from within the various subfields of software studies, platform studies, code studies, algorithm studies?

GL: Our interest, here at the Institute of Network Cultures, is not academic in nature. We are based inside the largest Dutch ‘hogeschool’ or polytech, an applied-sciences (aka vocational training) faculty for ‘digital media and creative industries’. We do not start and finish with ‘studies’, as we’re an applied research place outside of the university system where we collaborate inside projects (that we shaped as ‘networks’) with artists, designers, activists and critics. Funnily enough, you’re missing one that I might associate myself with, and that would be ‘internet studies’. Here in continental Europe, I see myself as part of the German media theory tradition, which is somewhere in-between media archaeology and media studies. I know… all these labels must be dazzling for outsiders who are not part of the academic game. A crucial one in our case is even missing here… what some call ‘network sciences’ (there never was anything that even came close to ‘network studies’). What they all have in common is a deep desire to hardwire math, IT and code with the vast universe of the arts and humanities. This is what we find in the work of Kittler, Galloway and Chun, but also Fuller, Terranova, Bratton and Manovich. We’re just not very good at promoting our techno-materialist school of thought.

BG: I’ve been reading/sharing/assigning INC publications for years, so I think you’re understating your group’s influence. And yes, internet studies belongs in that list. For me, there are many overlaps across these various “studies”—one example being that we find interest in many of the same scholars (I would count Chun, Fuller, Bratton, and Manovich as part of software studies, with Kittler and Galloway regularly referenced within it). My own interest in these and related thinkers is in how their ideas inform, reflect, and inspire practice—I often think of my work as practice-based artistic research that fits within the relatively loose methods of software studies. At the same time, though, most people who encounter my work aren’t thinking about any of these things. Many of them don’t even think of works like Facebook Demetricator or Go Rando as art at all. This is by design, in order to reach wide audiences.

(Ben Grosser, Computers Watching Movies (2013), a computationally-produced HD video that shows what a computational system sees when it watches the same films we do. Screenshot from The Matrix)

Thinking about my role as a teacher in a university art school, I’m often asking students to consider how the designs of commercial software guide the aesthetics of what they make. This is part of a larger pedagogical frame, asking them 1) to critically analyze any software they encounter, looking for what it wants from them (so they can at least be aware of it, but also so they can instead give it something else), and 2) to build their own software tools whenever necessary. As an example, I encourage students to question what Adobe’s Creative Cloud wants from them and how its design guides/limits their conception of good/not good. With this in mind, do you think a refocusing on the new aesthetic (or something else like it) might be of use for today’s new generation of young artists? Or is a moment like that even possible now?

GL: We won’t be able to develop a new aesthetic under the current regime of platform capitalism, in which venture capital, geeks, UX designers and behavioural psychologists are in the lead. Take the recent rise of Snapchat and TikTok. The only thing artists can do is re-appropriate and comment on these current waves of pop culture. This puts us in a difficult position. Either the development of new visual vocabularies is going to come from privately funded labs and studios, or we will disappear from the digital surface and build underground movements. Both of these options seem unlikely, so chances are considerable that neither is going to happen. Will we be able to reclaim the internet, to take back the city, after the real-estate take-over? As we know, cyberspace and urban space are related. It will be up to us to reconnect the two.

BG: Further complicating this picture is that many of the latest platforms are trending away from the web as a primary distribution mechanism, instead designing solely (or nearly so) for the proprietary phone-based app ecosystem (as run/owned/guarded by Google and Apple). For example, Instagram, Snapchat, and TikTok all restrict access to and/or block submission of material on their web-based versions. This shift frustrates my primary method of software recomposition, as it leaves me and other artists unable to manipulate those platforms within the web browser (and even where we can, it doesn’t matter much, since most users don’t frequent these limited web-based versions anyway). To get around this with TikTok I drew inspiration from Joana Moll and ran a demetrication test on their app by covering up a portion of my phone’s display with electrical tape (thereby hiding the metrics in the feed);[25] after using it that way for a week I was able to get a visceral sense of just how deeply these numbers were driving my use and assessment of the material being posted there. This kind of brute-force tactic can work on an individual or small-group basis, but it certainly doesn’t scale well (most people won’t be willing to leave a piece of tape on their phone for a week or longer). Despite Instagram’s limited web functionality, I have published and maintained a Demetricator for it—but that’s the one they’ve recently forced off the web now that they’ve decided they want to co-opt a version of the idea for themselves (discussed earlier). As a result of this wider platform trend away from the web, we definitely need new tactics for investigating and manipulating these and future phone-focused apps.

GL: How would you describe the state of the art of online video? How do you see the move from text-only to image-heavy apps?

BG: Some of the challenges we’ve already discussed are strongly in play with the online video services. YouTube’s algorithmic feed and autoplay/“up next” feature have been widely indicted for the ways they lead users down unexpected paths that can be harmful (e.g., for children), manipulative, and ideological.[26] Visible metrics are rampant across all video platforms, heavily influencing what users see/create/post, and how they assess quality, authenticity, and authority. YouTube and Facebook are overwhelmingly dominant, giving them outsized influence over what is deemed appropriate, what becomes successful (and what defines “success”), and what is treated as legal or illegal. YouTube in particular is in lockstep with global media corporations, helping corporate legal divisions police presumed copyright violations via “content ID” algorithms. Despite having been shown to make errors, these algorithms let the corporations automate legal attacks against individuals, thereby eliminating (or taking ownership of) content that was arguably legal under fair use law.

All of these effects (monopoly/duopoly, automated legal monitoring, algorithmic feeds, etc.) have left individuals with little agency if they want to compete in or contribute to this ever-increasing sector of the internet. They are further complicated by the equally consolidated streaming entertainment video platforms such as those from Netflix and Amazon, where user preferences are constantly profiled and pitched to. The result is that a handful of corporations have control over what people watch, what they create, what is allowed, and what is not. Further, the act of watching enables every user’s clicks and preferences to be tracked, databased, and profiled in order to sell targeted advertising (fueling the voracious appetite of surveillance capitalism[27]). We’ve heard plenty over the last four years about how these kinds of closed ecosystems enable political disinformation to be unusually effective, widespread, and cheap—and we’re now living with the consequences in the form of the ineffective and racist/sexist/classist/homophobic/ableist/etc. political leadership in the USA.

I will note that the number of videos available via these platforms can offer artistic research opportunities. With my own work I have drawn on both streamed television shows and uploaded documentary videos as source material for supercut projects that examine and critique everything from the ideological championing of technology in Netflix’s House of Cards[28] to the origins of Silicon Valley’s 21st-century obsession with growth. With the latter, a work called ORDER OF MAGNITUDE,[29] I drew on every video-recorded appearance by Mark Zuckerberg over his professional career—from age 19 to age 34—and extracted every time he spoke one of three things: “more,” “grow,” and any metric (e.g., “one million” or “two billion”). I then assembled these clips into a nearly fifty-minute film that examines what Mark cares about and what he hopes to attain.

(Ben Grosser, Order of Magnitude, 2019)

When I started collecting footage for ORDER OF MAGNITUDE I thought it would be relatively trivial to obtain all of the source videos Mark had appeared in, but the deeper I got into it the more I realized yet another downside to the state of online video: it’s easy for corporations to “clean up” their public histories when doing so requires scrubbing damning videos from just a few sites. For example, in the course of my research I realized I was missing footage from an important event: Zuckerberg’s keynote presentation at the first Facebook Developer Conference in 2007. I had seen tiny clips of it in a BBC documentary made in 2010,[30] but the source was nowhere to be found. It wasn’t on Facebook (even though keynotes from most other Facebook conferences were, and they were all clearly produced by Facebook itself). It wasn’t on YouTube (again, even though many others were still there). Googling didn’t turn it up. The Zuckerberg Files archive didn’t have it.[31] This only made me more curious. Why would such a formative document from the company’s history be missing from public view?

Determined to solve this riddle, I asked my friend, the Italian filmmaker Elena Rossini,[32] for help. She suggested I translate words about the event into Chinese and then use that translation as search terms for Chinese video sharing sites like YouKu (I had previously tried searching YouKu, but had used English terms). Elena’s technique proved successful—I found a low-resolution copy of Zuckerberg’s 2007 keynote! So why would a document like this exist on a site behind the Great Firewall of China but be nowhere to be found in the USA? The likely answer is that the video used to be on sites like YouTube, but that at some point Facebook sought it out and had it removed (and, not having thought to try Elena’s technique, missed the Chinese copy just as I first had). Frankly, when you view the keynote it’s not hard to imagine someone at Facebook deciding to scrub it from the ‘net because it records a moment when Zuckerberg was at the height of his youthful arrogance, a time before his presentation style became so robotic and scripted. Why does this matter? It shows how the limited set of online video options we currently have makes such scrubbing easy, especially for a well-resourced company like Facebook. Further, recorded speeches like that keynote are important historical documents, as they illuminate how one of Silicon Valley’s most influential CEOs talked about the company in its earliest days. If online video were more distributed and decentralized, Facebook never would have been able to (almost successfully) hide it from view.[33] And, again, this illustrates the need for tactics that engage with the platforms so we can probe the edges of what is shown and what is hidden away.

(Ben Grosser, Touching Software: House of Cards (2016), a supercut that examines the interactions between humans and touch-based software systems in the Netflix show House of Cards; video here.)

Earlier you mentioned TikTok, which is my latest video sharing platform obsession. A cross between the old Vine (a six-second video looping app that is now defunct) and Musical.ly (a lip syncing app that was purchased and absorbed by TikTok’s Chinese parent corporation ByteDance), TikTok is all the rage amongst young users right now. Scrolling through its AI-driven feed (AI in that it continually tries to profile you and then serve you videos it thinks will keep you there) quickly gives you a sense of the place: it largely consists of teenagers lip syncing and dancing to the same short 15-second music clips. Most new videos posted are attempts to imitate videos by the metric leaders (“TikTok Stars” with the most followers), though some demonstrate wider deviations. Regardless, because of this pattern of repetition (with the same music clip and same dances coming over and over again), user creativity often emerges through small changes rather than radical departures.

For example, maybe the dancing teenager will wear distinctive clothing, or make a minor adjustment to the dance, or perform the dance with a friend or in front of a parent. What I’ve been marvelling at is how the platform’s design has made such extreme conformity “fun,” and how it encourages the celebration of minor deviation as significant. Constraints in and of themselves aren’t a problem—in fact, they are useful and necessary for effective composition. But when one’s creative freedom is cultivated and limited by a platform designed to preference imitation, I worry that such a constrained way of making will negatively influence the emerging generation’s cultural activity for years to come.

Something else I’ve experienced first-hand is just how addictive the app’s “For You” page is (this is the name they give to the AI-driven feed). I’ve often found myself stuck in this feed, as you wrote in Sad by Design, “unable to disrupt [my] own behavior.”[34] I think there are many reasons for this, some of which you describe in your book, but others of which are perhaps specific to TikTok. Because creativity on the platform is forced to emerge through small deviations, scrolling the “For You” feed necessarily becomes a search for those small changes. I find myself continuing to scroll, hoping to find the next deviation that represents an improvement or entertaining variation. To be clear, these moments are few and far between.

But because the satisfying gestural swipe is all it takes to see if the next one is any better, it keeps me swiping, sometimes for hours! In fact, talking about TikTok addiction has itself become a TikTok meme, yet another soundtrack to lip sync to. One example is a meme by older users (being considered “old” on the platform starts in one’s 20s) that talks about the evolution of their addiction—how at first they didn’t get it, then they found the content funny, and then before they knew it they were also doing the same dances and lip syncing to the same songs. In other words, even minor critiques of the platform have to conform to the same meme structures used by other popular content if they want to metrically survive and gain visibility.

TikTok has gotten a lot of press lately as the “fun” social network, the latest space where teens go to play with their friends (and to get away from parents on the old networks like Facebook). One reason for its fun reputation is the relative lack of political content on the platform, suggesting to users that others there just don’t care much about that kind of thing. Another characteristic of TikTok videos I have noticed is that so many of them are by pretty people performing within opulent home interiors. I wondered why this was. Was it because the app somehow attracted a disproportionate share of rich, pretty teens? The answer was revealed just a week ago via an article on The Intercept that shared leaked internal content moderation guides from the company.[35] Perhaps unsurprisingly, it turns out that TikTok employees are directed to “suppress” videos with certain characteristics: those featuring individuals whose bodies are “chubby,” who have “ugly facial looks,” who are “senior people with too many wrinkles,” or who perform in “shabby” spaces such as those with a “crack on the wall.” Another leaked document lists extensive moderator guidelines for suppressing political content such as “criticism towards civil servants, political, or religious leaders” or even anything that “mentions” any app in competition with TikTok. In other words, the “fun” facade is a ruse, hiding extensive censorship at the same time it encourages conformity and idolatry of those with the most likes or followers on the platform.

Finally, I want to respond to your question about the overall shift from text-heavy to image-heavy apps. I started thinking about this shift right after the 2016 US presidential election, when we were first hearing details about “fake news” on Facebook. It made me wonder: what was the role of the image within these political disinformation campaigns? To think about this, I quickly coded and released a browser extension called Textbook.[36] A simple proposition, the work hides all images across the site, leaving blank areas in their place. My first reaction upon using it was to recall how Facebook used to be so text-focused back in its early days. Around 2008, everyone’s status box began with a mandatory bit of text: “[Name] is…” This simple prompt led users to complete that sentence, and to potentially keep writing. Eight years later the balance is reversed: now Facebook is mostly images or video and not nearly as much text. Use of Textbook confirms this, as browsing the site with the extension installed shows that there just isn’t much left when the images are hidden. It’s a lot of blank space.
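The technique behind such an extension is tiny compared to its effect. Purely as a sketch of the idea (and not Textbook’s actual source code), the core move can be approximated with a few lines of injected CSS. Hiding with visibility rather than removing elements is what leaves blank areas where the pictures were:

```typescript
// textbook-sketch.ts: the core move of an image-hiding extension, reduced to
// injected CSS. "visibility: hidden" keeps each image's layout box, so blank
// areas remain in place of the pictures (a sketch, not the project's code).

const style = document.createElement('style');
style.textContent = `
  img, picture {
    visibility: hidden !important; /* element keeps its space, shows nothing */
  }
  [style*="background-image"] {
    background-image: none !important; /* also blank CSS background photos */
  }
`;
document.documentElement.appendChild(style);
```

Preserving the layout box is the design choice that matters here: the page keeps its shape, so what the user confronts is an absence rather than a reflowed, image-free site.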

Experientially the work led me to focus on the text that was left, and overall, the result felt like a much calmer environment. One year later, in 2017, the US House and Senate Intelligence Committees investigating Russian interference in the 2016 election released a number of disinformation ads that had circulated via Facebook before the election.[37] What struck me about them was how heavily they relied on the image. For example, one pictured a glowing Jesus arm wrestling with a fiery Devil in order to characterize a vote against Hillary Clinton as a vote in alignment with Jesus. Another pictured angry-looking women in burkas and full niqabs and exclaimed that ‘“Religious” face coverings are putting Americans at huge risk!’ As with many of the ads released, these also embedded large bold text within the image itself in order to exceed the font size limitations of a text-based Facebook status post. The wider tactic being employed through these images was to activate—in the words of Cambridge Analytica’s CEO—voters’ “hopes and fears.”[38] In other words, the image was a primary weapon deployed to scare or anger voters into voting a particular way.

Instagram is another platform worthy of critique within this context. Whereas Facebook (even in its current image-heavy incarnation) still allows clickable links, large text (if the text is short enough), and long text-based posts, Instagram strongly deemphasizes text in favor of the image. One prime example is that https links pasted under an image are not clickable, breaking the most fundamental design aspect of the web: the ability to connect text at one location with a page anywhere else on the internet. In its place users are allowed to use clickable #hashtags, but the reach of these tags is limited to the Instagram platform itself. While the hashtags do allow users to link to something, those links can only lead to other images on the platform whose posters chose to associate their images with the same tag. This kind of design decision serves to keep users within the platform’s borders, and to suppress dissent by excluding the posting of opinions critical of the platform itself (and also possibly to limit what gets posted to keep it a “happier” space like TikTok does?).

With any piece of software, it’s important to think critically about the effects of every design decision. Whose interests are most served by the wider shift from text to image or the elimination of links, and whose are made most vulnerable? As is often the case with the mega platforms, the answer is usually that the platform serves its owners at the expense of its users.

Geert, as someone who has studied internet culture for twenty-five years, what do you see as the primary effects of this wider shift from text to image online? Further, given the difficulties of using techniques such as my tactic of “software recomposition” with a closed app like TikTok—combined with that app’s extreme popularity amongst teenagers—what do you suggest as a way to challenge these kinds of new, highly addictive, closed video platforms? How can we break the interaction patterns they enable, or more specifically, what will it take for users of those platforms to disrupt their own behavior?

GL: Inside the European theory landscape the iconoclast tendency remains prevalent. Despite all our love for Italian film, British television humour and YouTube, I do not see an epistemological shift towards the image here. Visual culture is still considered lazy, sensational, fast food for the brain. Let’s discuss the role ‘the curve’ is playing in the corona crisis. Can we consider info visualization a tool to inform the population? We rarely relate fake news to such visualizations. Data are considered ‘true’. But how about the collective intelligence of the upcoming generations? Can we see a shift towards knowledge that is not only image-born but also primarily spreads as images? Important here will be the question of whether we can reroute and expand the critical creative forces in the direction of memes 2.0. We should not reduce memes to Reddit, 4chan and the alt-right.

Subversive content will travel inside and through images. To me this is a reality we will have to deal with—and take into our own hands. With Karl Marx I would say that it is not enough to interpret visual culture; we need to change it. This is entirely possible. We just need to see the urgency and stop treating images as ‘eye candy’. Images, as Lev Manovich and others have been teaching for decades, are dense knowledge carriers. Let’s be more precise: images embody and contain concepts and ideas, much like words. The literary alone is not enough; that’s for losers. Those in charge will be the ones who define the new apps, platforms, interfaces. We have known this for a long time, but until recently there was a consensus that the TV, film and visual arts industries would simply continue and somehow adapt themselves under the ‘digitization’ regime. But the current social media reality should be a wake-up call for those who still subscribe to the ‘remediation’ thesis.

BG: I agree, the current social media reality should be a big wake-up call. And while the platforms have made possible a broad shift to the image as “everyday communication,”[39] the corporate design intention here is to produce “engagement,” not comprehension. For them, engagement is a euphemism for the capturing and recording of attention in order to profile users for the purposes of selling targeted advertising. Beyond creating a database record that associates the user with a topic/product/etc., that engagement is perhaps equally designed to occupy the user, to hold them within the system for as long as possible by embedding each image within an interface that encourages the user to keep scrolling, to focus on what might be just beyond the bottom of the screen. Further challenging a user’s comprehension of the images they see is the way platform designs condition them to speed through them.

From feeds that never end (all the social networks) to opening up the feed to everyone regardless of friend/follower networks (e.g., TikTok) to automated expiration / timed advancement through a series of images (e.g., Instagram “stories”), the platforms teach users to scan the current post as quickly as possible so they can move on to the next.[40] Users learn to watch the metrics as their guide in this process, slowing down for posts with high like counts and skimming past those without. In this way, the platforms are making users algorithmic, turning people into scanning machines where their speed of feed traversal and their clicks of a Like or Love produce endless potential for future profit that depends on ever increasing amounts of data.

From my perspective, any attempt to use the rise of the image for subversive anti-establishment means requires not just careful/tactical image composition (e.g., memes 2.0), but also efforts to liberate users from blind manipulation by the design structures of platform interfaces. One work of mine that aims to enlighten in these ways is Safebook.[41] Safebook is Facebook without the content, a browser extension that hides all images, text, video, and audio on the site. Left behind are the empty containers that frame our everyday experience of social media: the boxes, columns, pop-ups and drop-downs that enable “likes,” comments, and shares. Yet despite this removal, Facebook remains usable: users can still post a status, scroll the news feed, “watch” a video, or Wow a photo. This radical transformation not only provokes consideration of what it might take to make a platform like Facebook “safe,” but also reveals just how engrained the site’s interface has become (to see a gif animation of Safebook, click here).
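As a rough approximation of Safebook’s proposition, again a sketch under my own assumptions rather than the extension’s real code, the same CSS-injection tactic can empty out content while leaving the containers intact and interactive:

```typescript
// safebook-sketch.ts: empty the content, keep the interface. Transparent text
// (rather than removed text) keeps every box, button, and feed item at its
// normal size, so scrolling, liking, and posting still work.
// A sketch of the concept, not Safebook's actual implementation.

const style = document.createElement('style');
style.textContent = `
  * { color: transparent !important; } /* text vanishes, layout stays */
  img, picture, video, svg, canvas {
    opacity: 0 !important;             /* media vanishes, boxes stay */
  }
`;
document.documentElement.appendChild(style);

// Silence media already on the page; dynamically added players would need a
// MutationObserver pass like the one in the earlier demetricator sketch.
document.querySelectorAll('video, audio').forEach((el) => {
  (el as HTMLMediaElement).muted = true;
});
```

Transparency rather than removal is what keeps the site usable: every container retains its size and click target even though nothing remains to read or watch.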

So while information visualizations and memes 2.0 and critical supercuts and every other image-based form that might shift populations can inhabit and transmit via existing platforms, those who create and post them have to keep in mind the massive interface frames they’ll always be surrounded by. Information visualization can be useful, but it’s no less susceptible than other image types to disinformation tactics or the fallacy of data as objective. As Brunton and Nissenbaum argue in their discussion of data obfuscation as an anti-surveillance tactic,[42] data is fragile, contingent, and circumstantial. We’re certainly living through a period partly produced by the contingent nature of data. Here in the USA, a country now leading the world in COVID-19 infection rates (despite having more lead time and resources than most to mount a defense), we are indefinitely stuck inside our homes while thousands die every day, at least partly because of Trump’s desire to manipulate one set of data (confirmed case numbers early on) in order to maintain his endless quest for more when it comes to the metric he most cares about: Wall Street performance. Until we learn as a society that looking at a topic “by the numbers”[43] doesn’t mean we’re getting an objective look, data-focused, image-based communication will remain a useful tool for oppression just as text has been before it. In the meantime, artists have, as you have said, a “special responsibility”[44] to take on the technology platforms, to use image, video, and other visual media forms to expose the ideologies behind the interfaces we use and the media they (re)present.

[1] https://bengrosser.com/projects/order-of-magnitude/. Video here: https://vimeo.com/333795857.

[3] Benjamin Grosser, “What do Metrics Want? How Quantification Prescribes Social Interaction on Facebook.” Computational Culture: a journal of software studies 4, 2014. http://computationalculture.net/what-do-metrics-want/.

[4] Taina Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook,” New Media & Society, 14, no. 7, 2012.

[5] Benjamin Grosser, “How the Technological Design of Facebook Homogenizes Identity and Limits Personal Representation.” Hz 19, 2014. http://www.hz-journal.org/n19/grosser.html

[6] Matthew Fuller, Introduction to Software Studies \ A Lexicon, edited by Matthew Fuller (Cambridge, MA: MIT Press, 2008), 2.

[7] David Foster Wallace, This is Water. New York: Little Brown and Company. 2009.

[8] See: https://bengrosser.com/projects/facebook-demetricator/. Video of the project here: https://vimeo.com/249448543.

[9] https://bengrosser.com/projects/go-rando/.

[10] https://bengrosser.com/projects/scaremail/.

[11] Though I haven’t detailed this incident in writing online, it is discussed briefly in: Will Oremus, “The Illinois Artist Behind Social Media’s Latest Big Idea.” OneZero, 23 July 2019. https://onezero.medium.com/the-illinois-artist-behind-social-medias-latest-big-idea-3aa657e47f30.

[12] See https://twitter.com/bengrosser/status/1151632283448872960 for a screenshot comparing 2018 text by Instagram to my own from 2012. This is also detailed in the Oremus article cited above.

[13] https://new-aesthetic.tumblr.com/.

[14] For example: https://new-aesthetic.tumblr.com/post/73412837507/computers-watching-movies-benjamin-grosser.

[15] Scott Contreras-Koterbay and Lukasz Mirocha, The New Aesthetic and Art: Constellations of the Postdigital. Amsterdam: Institute of Network Cultures, 2016, pp. 12, 152-154, 204-207.

[16] https://bengrosser.com/projects/interactive-robotic-painting-machine/.

[17] https://bengrosser.com/projects/computers-watching-movies/.

[18] Scott Contreras-Koterbay and Lukasz Mirocha, The New Aesthetic and Art: Constellations of the Postdigital. Amsterdam: Institute of Network Cultures, 2016.

[19] Christian Ulrik Andersen and Søren Bro Pold, The Metainterface: The Art of Platforms, Cities, and Clouds. Cambridge: MIT Press, 2018.

[20] Florian Cramer, “What is ‘Post-Digital’?” A Peer-Reviewed Journal About … Post-Digital Research, volume 3, no. 1, 2014. https://doi.org/10.7146/aprja.v3i1.116068.

[21] Theodor Adorno, “On the Fetish Character in Music and the Regression of Listening.” In The Culture Industry. New York: Routledge, 2001.

[22] Søren Bro Pold, “New ways of hiding: towards metainterface realism.” In “After the post-truth,” coordinated by Jorge Luis Marzo Pérez. Artnodes, no. 24, pp. 72-82, 2019.

[23] https://www.artnews.com/art-in-america/features/the-perils-of-post-internet-art-63040/.

[24] Nathan Jurgenson, The Social Photo: On Photography and Social Media. New York: Verso, 2019.

[25] See Joana Moll’s Critical Interface Politics workshops, in particular her technique of making custom paper mask screen overlays as a way of examining user interface design. http://www.janavirgin.com/HANGAR/. To see a demo video of tape-based demetrication on TikTok, visit https://vimeo.com/364669574.

[26] James Bridle, “Something is wrong on the internet.” Medium.com, Nov 6, 2017. https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2.

[27] Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: Hachette, 2019.

[28] https://bengrosser.com/projects/touching-software/.

[29] https://bengrosser.com/projects/order-of-magnitude/.

[30] “Mark Zuckerberg: Inside Facebook,” BBC Documentary, aired on 26 January, 2011. Copy currently available from https://epublications.marquette.edu/zuckerberg_files_videos/65/.

[31] The Zuckerberg Files is a public archive initiated/directed by Michael Zimmer that aims to store all of Mark Zuckerberg’s public communications for academic research purposes. This archive is currently hosted by Marquette University’s digital Institutional Repository. https://zuckerbergfiles.org.

[32] https://elenarossini.com.

[33] While in general I believe the originators of content online should be able to force its removal, Facebook is a special case. It’s a public company that has a dramatic impact upon the global public (as demonstrated not just by its 2+ billion users but also by the way it has been used as a weapon to influence democracies in the USA, UK, and elsewhere). Because of this, I would argue that its previously public documents should stay within public view.

[34] Geert Lovink, Sad by Design: On Platform Nihilism. London: Pluto Press, 2019.

[35] Sam Biddle, Paulo Victor Ribeiro and Tatiana Dias, “Invisible Censorship: TikTok Told Moderators to Suppress Posts by ‘Ugly’ People and the Poor to Attract New Users.” The Intercept, 16 March 2020. https://theintercept.com/2020/03/16/tiktok-app-moderators-users-discrimination/.

[36] https://bengrosser.com/projects/textbook/.

[37] See https://www.nytimes.com/2017/11/01/us/politics/russia-technology-facebook.html for screenshots.

[38] “Cambridge Analytica Uncovered: Secret filming reveals election tricks,” Channel 4 News [UK], 19 Mar 2018.

[39] Nathan Jurgenson, ibid.

[40] This is yet another purpose visible metrics serve within social media feeds: they are there to provide a shortcut that tells users when to speed up (low metrics) or when to slow down (high metrics), to guide them on when it’s appropriate to leave a “Like” or a “Haha.”

[41] https://bengrosser.com/projects/safebook/.

[42] Finn Brunton and Helen Nissenbaum, Obfuscation: A User’s Guide to Privacy and Protest. Cambridge: MIT Press. 2015.

[43] https://www.google.com/search?q=covid-19+%22by+the+numbers%22.

[44] Geert Lovink, “Sad by Design,” podcast interview by Douglas Rushkoff on Team Human, Episode 114, December 2018. https://teamhuman.fm/episodes/ep-114-geert-lovink-on-sad-by-design/.
