Interview with Hartmut Winkler

The Computer: Medium or Calculating Machine? Geert Lovink meets Hartmut Winkler

This is a 25% excerpt of an e-mail discussion which took place in April 1996 and was first published in the online magazine Telepolis, Munich, Germany (http://www.heise.de/tp/co/2038/fhome.htm).

The media theorist Hartmut Winkler, who lives and works in Frankfurt, Germany, has just presented a comprehensive critique of current German media theory. ‘Docuverse’ is the title of his Habilitation thesis (in the German academic system the Habilitation follows the PhD); the 420-page manuscript is subtitled ‘On the Media Theory of the Computers’. The book is forthcoming in February 1997 (Boer Verlag, Munich). The background of the text is the Internet and the contemporary transformations in the media landscape, shifting from the image-dominated media to the computers, and Winkler asks about the social motives causing this change. The book focuses on the concept of ‘wishes’, the “reconstruction of the underlying wishes to which the data universe is the answer”. The title ‘Docuverse’ is borrowed from Ted Nelson, and the term is relevant to Winkler because it forces one to think of the data universe as a text-based, socio-technical implementation complex. The term also allows one to criticize this idea as a fiction of theory. […]

Hartmut, can you briefly outline what ‘Docuverse’ is about?

In drafting the book, two of my interests came together: first, my anger about the huge computer-as-medium hype which emerged with the internet, and about the fashionable, precipitate character of the debate. Second, writing the book was a chance to recycle my past as a programmer. It was a challenge to confront the computer with certain theories developed in the field of the classical media, and then to see what happened to those categories everybody used to use. The issue still completely lacking in the debate, and worth thinking about, is the theory of language. The WWW is exploding as a medium of written texts, and nobody asks why media history is leaving the technical images (photography, film, television) behind, after a century of unquestioned supremacy, in order to return, as it seems, to writing and language. In the current debate, however, the ‘end of the Gutenberg galaxy’ is announced – which, if at all, happened already around 1900.

Your critique of media theory is aimed predominantly at a certain group of authors who have published a lot since the end of the eighties: on the one hand, the ‘Kassel School’ including Kittler, Bolz, Tholen and others, and, on the other hand, the circle around the ‘Ars Electronica’, Weibel and Roetzer. Would it be possible to describe this discourse a little more precisely? From my perspective there were very distinctive regional, cultural and even historical conditions for this vital text production. The year 1989 comes to my mind: climax of the eighties, of yuppie culture and postmodernism, the fall of the Berlin wall, the birth of techno and the first appearance of VR and the networks. This group of theorists can neither be characterized – in terms of an overall technology skepticism – as left-progressive, nor as right-wing conservative culture-pessimists. Naturally you can always feel the spirit of Heidegger around, and one could name Lacan as a common background; the latter is even true for you. For a long time in Germany, people who were concerned with the media were considered conformist. But I always thought of this as a disease of the Ideologiekritik. The sphere of the media, this is evident, is very real and material (and becomes more so all the time). Do those authors still have anything to say, or should we stop asking about sociological and ideological positions?

It’s true that my book is mainly concerned with German theory and the authors you mentioned; it undertakes a critical revision and develops its own interpretations and conclusions from this vantage point. That’s the project. However, I would locate this debate differently; first of all, I don’t think that the Ideologiekritik was hostile to media and technology in general. When the authors in question distance themselves more than evidently from the Ideologiekritik (and this also resounds in your presentation to a certain degree), I can see a whole bundle of motives: a well-justified interest in reaching a more differentiated interpretation of technology, and also in overcoming certain aporias in the realm of the Ideologiekritik. Taking distance, though, could also be considered an immediate result of political disappointments; technology offers a way of escaping the complex demands of the social, and whoever considers technology the ‘apriori’ of social development can stop caring about a lot of things. And above all, one got around asking what it is that gives technology its drive and direction. Here I would clearly differentiate between Kittler and Bolz: while Kittler makes a real effort to develop a hermeneutics of technology (and tries to win back what the social process inscribed into technology), Bolz turns to an open affirmation with politically reactionary implications. I think, as you do, that the debate is precisely located in place and time. But in my view the year 1989 doesn’t stand for an awakening but for a doughy German chancellor and the potential immortalization and globalization of the bourgeois glory. If technology seems to be the only sphere where one can still find some kind of progress, it’s no wonder that it’s highly appreciated.

In my opinion the 70s’ Ideologiekritik has indeed caused a lot of damage: first, by grossly neglecting the realm of the media and, secondly, by refusing to understand what is so attractive about mass culture – a question that was later taken up by English cultural studies. […]

Dealing with the seventies, you already focus on the followers, and they, I agree, seldom measured up to the prophets. For the classics of the Frankfurt School, however, your estimation doesn’t apply; neither for Benjamin nor for Kracauer, who was very hopeful about mass culture; Brecht articulated the utopian idea of changing the monological character of the mass media, a utopia taken up by Enzensberger in the 60s which became the basis for a number of practical-democratic media initiatives. The Communal Cinemas, financed by the municipal administrations, were founded in the 60s/70s, etc. Above all, I think that the opposition of critical attitude vs. sympathy/understanding/affirmation is much too coarse. If the ‘culture industry’ chapter of the ‘Dialectic of Enlightenment’ didn’t exist, it would have to be written right away – as a contribution to a debate and a very radical perspective which makes visible a particular side of the media. And Adorno’s ‘Aesthetic Theory’, even while repudiating media, jazz, and mass culture, offers many criteria which, in a certain way, are more appropriate for the media than they are for the autonomous art so favourably treated by Adorno.

As I see it, contemporary German media theory is no longer rooted in the instrumental, rational, technocratic thinking of the last two decades (the Affluent-NATO-Police-Nuclear state). Working neither positivistically nor from negation, it mainly seems to trace the inner voice of technology. The de-animated machines, worn down by their commodity character, ought to sing again – since those involved come mainly from literature, philosophy, and the arts. Such a constellation existed only in Germany at that time (1989). In other countries, you have to look for media theory in the departments of sociology, communication sciences, and in hardboiled history of technology. Why is the attitude of the German media ideology and its ‘virtual class’ (if one really wants to name it this way) so sublime, so poetic? Elsewhere the media specialists do not invent such wonderful and complicated terms in order to describe the grey everyday life of the media. Is Germany, in the international division of labour, developing more and more into the country of the data-poets and thinkers?

Gee, now I’m in the position of defending a particular German solution. Although many of the efforts, terms, and results of the debate seem very absurd to me, I very much think that the more pragmatic approaches (“sociology, communication sciences, and history of technology”) miss their subject matter – the media. Concerning the media, we definitely don’t know what we are dealing with. We know that a more or less blind practice brings them into being, but we don’t know the implications of the fact that ‘communication’ asks for increasingly complicated technical devices. The world of symbols melts into the world of technology. And as long as we don’t know, I believe, it’s important to work on the terms. ‘Communication’ is a very good example; you assume without questioning that living people communicate with one another (bilaterally), in contrast to the ‘dead’ universe of writing. Is that plausible, though? Isn’t technology ‘dead’ in the same way as writing is? And isn’t that the reason why people want to make the machines sing again? And that is where my plea for the “academic ways of thinking” starts. Certainly there are the “rituals of academic writing” that you mentioned; yet this kind of writing opens up the opportunity to distance oneself from common sense and to talk differently – in a way that is unconditioned by the needs and pressures of practice. I’m always astonished at how fast and how definitively certain things become established as consensus: multimedia is the natural aim of computer development, the computer is a universal machine, etc. If you want to oppose these kinds of consensus, you have to have either good nerves or good arguments (and probably both). In any case you need terms and tools which don’t stem from the debate itself, but from different contexts – maybe even Lacan and Heidegger. And if the international division of labour assigns this part of theory to the Germans, that’s o.k. with me; they (we) did worse jobs in the past.

Hence, around 1989, in a time of rapid technological developments, a theoretical movement comes into being which doesn’t leave the Gutenberg galaxy behind, but carries the whole knowledge of the last centuries along into Cyberspace, tracing back the history of technology and connecting chip architecture and modern literature. People outside that movement, though, would never think that way. Doesn’t technology work excellently without Nietzsche and the humanities? Isn’t it only us, the intellectuals, who need the aid of Kittler and other theorists in order to cope with technology? Are we dealing with a media theory developed for a well-educated middle class that has a hard time with the titanic forces of the ‘techne’? Or do the heavy volumes of theory serve to give the shares of AEG, Mercedes Benz, Siemens, and Deutsche Bank additional weight? For their power, it seems to me, the metaphysical insights of German media theory aren’t very useful.

I very much hope so. And certainly technology works without Nietzsche. Generally, the main problem is not just to cope with technology the way it is. If our society has chosen to inscribe its contents not in texts but into technology, the effect is that the contents aren’t visible and discernible any more. They appear as the natural features of the things, as the result of a linear (and necessarily single-track) progress, as unchangeable. It’s the same as codification: things once encoded are the invisible precondition of communication. And whoever argues that a critique of technology is no longer possible and that the times of critique are generally over is taken in by a strategy of naturalization. Thus it would be the task of theory, and of a hermeneutics of technology, to win back the contents which society has ‘forgotten’ into technology: the decisions and values, the social structures and power configurations, the practice which became structural in technology. To show the transformation from practice/discourse into structure (and from structure into practice/discourse) is the main theoretical project of the book. Your ‘internet critique’ aims at precisely the same, doesn’t it? The grown structure of the net doesn’t depend on criticism in order to keep on growing either. And if you don’t simply participate but think about the net in a different medium (writing and print), it won’t be far to Nietzsche anyway.

There isn’t yet a media theory of the computers, not in Germany and not anywhere else, as you state in your introduction, too. Isn’t that mainly because the theorists don’t yet live in the internet and hesitate to settle there? By choosing Ted Nelson’s term ‘docuverse’, I think, you indicate that cyberspace for you is mainly a sphere of texts and documents. Nowhere in your book do you mention that there are real people in the networks (and their artificial agents). You are talking of a ‘universe far from man (/woman)’ and of the inadequacy of ‘communication’ as a term. Isn’t that because the internet for you is a collection of ‘dead’ information anyway? Your major sources are Derrida, Lacan, Freud, Nietzsche etc., combined with the recent media literature. Why is your media theory of the computers so tightly rooted in the written knowledge of the pre-internet age? Don’t you have a lot in common with the people you criticize? Could it be that there is no paradigm shift at all and that everything new culminates in the return of the long established? In that case the well-known theory frame can remain!

Plainly spoken: there aren’t any people in the net. Roughly estimated, the internet consists of 60% written texts in natural languages, 20% programs and algorithms, 10% numerical data, 10% images, and 10% digitalized sounds – 110% in total, which is ten more than a hundred, appropriate to hyperspace. And some of the written texts, here you are right, are meant for direct use and are exchanged in real-time dialogues. As a whole the internet is a written universe; there’s no doubt about that. Asking what is new about all that, I don’t think it’s the bilateral communication of two partners (as a new version of telephone or wiring logic) or the single documents, but the arrangement of those documents in an n-dimensional space, the material links between them, and the utopia of a universal accessibility associated with this construction. Nelson’s term ‘docuverse’, as an ingenious anticipation, summarizes all those aspects, and that’s why I chose it as a title. Indeed I rather believe we are dealing with a resurrection of the Gutenberg galaxy than with its coming to an end. After a hundred years’ supremacy of technical images there is an explosion of written texts, and in my book I ask why this is happening. I would discuss this thesis separately from the methodological problem of how to describe the new medium and which criteria to use. Talking about new subjects always means applying ‘old categories’ – and usually the knowledge of the writing age – simply because language is always the language of the past. Much more suspicious, it seems to me, is the current ‘rhetoric of the new’, which in using the term simulation doesn’t recognize the time-honoured problem of similarity, in talking about virtual reality misses the assertion of realism, and denies the ontological implications in the concept of data. The old theory frame cannot remain. But the people who claim to have torn it down will, to their astonishment, find out how much of it they carry along.

You often return to your thesis that the new media are based on language. Following Sherry Turkle’s distinction, you certainly are an old-fashioned IBM-PC modernist, not yet having encountered the blessings of Apple/Windows 95 postmodernism. In other words: the traditional computer which must be treated as a calculator versus the new image-machine with the accessible, democratic user interface. Umberto Eco distinguishes the image-lacking, abstract, protestant PCs and the illustrated screens of the catholic Apple community. So, confess: you are a protestant modernist (as I am) who belongs to the Luther-Gutenberg pact! Officially you have to pledge allegiance to the printing guild; as a hobby, however, you like to go to the movies. […]

That’s a great division! […] You read it: in the first place I criticize the media historians’ habit of directly confronting the computer age and the age of writing, thus neglecting the long period of visual media. Isn’t it surprising that the non-sensual computers (and even a few icons don’t make them sensual) replace the overwhelmingly sensual universe of images? The frustration with the bugs replaces the ‘uses and gratifications’ (those were the categories people used to describe visual media!); and doesn’t a newly arbitrary system substitute a motivated one, as semioticians would say? These are, I suggest, the important questions to ask about the relationship of images and computers. You are right: I don’t think that writing was superseded because it was poor and insufficiently complex, or because it was outdone by other media. But that doesn’t mean that, as you put it, I preach a return to disciplined linear writing. One has to distinguish three different levels of examination: first, the historical fate of writing; secondly, the question of images; and thirdly, my concern with thinking about the n-dimensional data universe from the vantage point of language. We will talk about the images afterwards. Yet I want to mention that I don’t always come back to language theory because I appreciate language and depreciate images, or because there are so many written texts in the WWW. It’s much more important to me that there is a structural parallel between the internet and language – perceived as two general, semiotic implementations. The structure of the network itself, that’s my major thesis, imitates the system of language; more precisely, the semantic structure stored in our heads. The semantic system of language is an n-dimensional network of interrelated references, as language theory teaches us. Language elements become meaningful through differing from each other; semantic oppositions are created by repulsion within an n-dimensional space.
My thinking is mainly about this parallel between internet and language and the new perspectives opening up from there.

And as a fourth issue there is the idea that technology in general can be conceptualized from language. I share this approach with Christoph Tholen, who goes back to Lacan and Derrida, and I would name Leroi-Gourhan as a down-to-earth and more accessible witness. This thesis allows one to bring both sides of the media together: looking at computers as symbolic machines, you cannot separate the symbolic from the technical, for both aspects are interrelated, and it is the challenge of theory to describe this connection precisely. The discussion has not yet developed very far, but terms like ‘inscription’ already bridge the gap, and therefore they are fascinating.

A part of your book which I like a lot is the description of Leroi-Gourhan’s ‘Le geste et la parole’ and the part on ‘machines of collective memory’. These are connected to the concept of evolution and to a theory of technology which ‘locates technology in a triangle between natural history, practice, and language’. In Leroi-Gourhan’s thinking, social memory (closely connected to technology and language) substitutes for natural instinct and its bonds. Can you see a link to the theory of ‘memes’ developed by Richard Dawkins? How do you see the future of evolution in this context? Does it make sense to use a biological metaphor like ‘evolution’ for describing the further development of technology and the machines of collective memory?

You have just detected one of my knowledge gaps: I never read Dawkins. If everyone using the term ‘evolution’ reflected on the fact that it is a metaphor, the problem would be less severe. In talk about evolution, and in the fashionable term ‘emergence’, one correct argument and a stupid one interfere. In my view, it’s necessary to consider that the history of technology is a huge macro process which – this is the main characteristic of evolution – escapes conscious guidance and exceeds all human purposes. Stupid, however, seems to me the conclusion that every guiding interference is senseless and every effort to take some distance (by means of consciousness, for instance) is doomed to failure. Here an originally skeptical argument, being made absolute, seems to turn into an affirmative one, with disastrous consequences for theory. Even the most naive ecological consideration teaches us that batteries need not be made from cadmium and that agriculture shouldn’t be a subdivision of the chemical industry. Suddenly one is confronted again with those complicated and unattractive political questions which one was happy to have said goodbye to. In any case it seems important to me to account not just for one technology but for several, competing ones. Then it becomes complicated to use the term ‘evolution’. Reading Leroi-Gourhan illuminates one of Teilhard de Chardin’s problems: both start out from the concept of evolution, but while the latter aims towards a unifying and necessarily religious apotheosis, Leroi-Gourhan focuses on the collective memory as a historically plastic structure – sedimented and of considerable persistency, yet dependent on the course of concrete practices. Again we are dealing with the dialectics of discourse and structure.

[…] You mention a ‘crisis of language around 1900’ and you state that a comparable ‘crisis of the visual’ must be acknowledged nowadays. But isn’t it, above all, the (German?) authoritarian bourgeoisie who can’t stand the flood of images, and who, as you say, is deeply irritated that TV doesn’t speak with one voice anymore – an older class of teachers, the ‘Zeit’ readers, who also aren’t able to enjoy zapping and who long for a quiet and orderly media landscape?

[…] Finally we have reached a widely discussed issue: the role of images. If you want to analyze the present-day situation of media history, you first of all have to ask whether the computer is a new visual medium following the tradition of technical images (photography, film, TV), or whether it has broken with this tradition because of its specific characteristics. And my opinion is very definite: computers can produce images, this is beyond doubt, yet it’s quite a strong-man act (considering the exorbitant use of resources), and it is not exactly part of their nature. My experience as a programmer tells me that the computer is a medium of abstract structures, of program architectures and algorithms which, in the end, govern the digital images, too. The computer doesn’t care whether it’s pictures that finally appear on the screen. The computer doesn’t know what to do with the image-character of the images (there aren’t any algorithms for Gestalt recognition or for directly comparing and administering visual contents); the two-dimensional output merely caters to the users’ viewing habits.

Therefore, I would say that the present hype about digital images and multimedia is a temporary phenomenon, a historical compromise between the universe of images, which has come into crisis, and the new, abstract, structure-orientated system of computers. And if that is the case, I think we have to ask what happened to the visual. The common answer is that the images have lost the confidence of the audience because they can be digitally manipulated. This might be a factor. My opinion, however, is that confidence has been lost mainly because the images have become so many, piling up in layers and increasingly revealing the hidden structures and schemes. This way the pictures lose their concreteness, which was their major promise and constitutive of their functioning. Everybody who zaps knows the phenomenon: no matter how many channels there are, after a while TV appears as a uniform surface of relatively few, often repeated clichés. And through the iconic surface of the screen the symbolic skeleton of the images emerges.