Interview with Alan Liu

“I work here, but I am cool.”

By Geert Lovink

Good books do not just tell history, they create it. In my case this happened with Alan Liu’s The Laws of Cool, subtitled Knowledge Work and the Culture of Information. Ever since I found it in a New York bookstore in late 2004, I have carried it with me on planes, trains, and the bike–and remained puzzled by its analytic density. The Laws of Cool is a so-far unnoticed classic of new media theory that is in no hurry to show off its relevance. The Laws of Cool proved hard to finish, and even harder to put aside. I got the feeling that I might have had enough of it, yet the book wasn’t finished with me. What fascinates me is its unusually quiet, untimely style. The Laws of Cool is a thick and comprehensive University of Chicago Press humanities study by a Wordsworth scholar who digs deep into the contemporary conditions of knowledge production. As Liu writes, the Cool has always bordered on the Cold. The writer did not get carried away by the Latest or the Obvious. Liu, a Californian UC Santa Barbara professor and web editor of Voice of the Shuttle (http://vos.ucsb.edu/), writes theory from a broad range of perspectives. The Laws of Cool is hard to compare with the Deleuzian MIT Press titles and is light years away from the ordinary cyberculture readers. It studies business management bestsellers as serious literature, develops elements of hypertext theory further, explains the attraction to uselessness and the arbitrary, interprets HTML, analyses cyberlibertarian ideology, and maps the shift from ‘power to the people’ to ‘power to the individual’. Like it or not, cool is the antipolitics of information, and ‘bad attitude’ is its constitutive gesture.

What makes Liu’s study so unique is his redefinition of the contemporary time scale. Liu discusses 1920s typography, quotes from Processed World and the Hacker’s Dictionary, and writes about Jodi as if it were 1996. This study of the “cultural life of information” focuses on life on the US campus. It investigates the corporatization and computerization of academia and their impact on the humanities. Liu: “It might be said, with Kafkaesque irony: I went to sleep one day a cultural critic and woke up the next metamorphosed into a data processor.” Liu calls for an update of Stephen Greenblatt’s study of Renaissance self-fashioning, and produces a number of useful elements for such an undertaking. But before we re-awaken in a New Age, we have to reconcile ourselves with destruction in the name of innovation and the creative arts. Are you ready to slough off yesterday?

Liu’s motto is “I work here, but I am cool.” In an Ascribe press release he explains the cool attitude like this: “I am not so cool as to actively rebel or quit, but I am just cool enough to be slightly kinky in the web pages I browse at work. I’m not quite subversive, but my behavior asserts that I’m me and not just part of this corporation or that team.” Liu doesn’t get excited about this or that future scenario, nor is he interested in a deconstruction of the hype and spin that so characterizes the computer and Internet industry. Instead, he observes the behavioral patterns of “head work” that perform a subtle play around the ethos of refusal and resistance. A glimpse at Slashdot will tell you what this often misunderstood attitude is about. Cool starts to rise when unproductive elements come into play, ‘destructive creativity’ acts up, and counter-systems of ‘style’ develop. “What is really cool, after all?” Liu asks. “At the moment of truth on the coolest Web sites—when such sites are most seriously, deeply cool—no information is forthcoming. Cool is the aporia of information. In whatever form and on whatever scale (excessive graphics, egregious animation, precious slang, surplus hypertext, and so on), cool is information designed to resist information, a paradoxical ‘gesture’ by which an ethos of the unknown struggles to arise in the midst of knowledge work.” Cool is an ethos of information. It is the moment of awareness of the information interface. It is the well-known moment of revelation when you no longer look through a window and instead look at the window frame. Cool, according to Liu, gives the knowledge worker the hope of ‘personality’.

GL: What makes your book so special is the somewhat different time frame that you use. The Laws of Cool is neither historical in its approach, in the sense that it spans centuries, like media archeology does, nor does it stick to the ever‑present now, as new media theory often does. These days we hardly find references to 1980s computer culture, but for you that seems like yesterday. How come? Do you practice a hermeneutics of the digital everyday?

AL: “Hermeneutics of the digital everyday” is a nice phrase. My book is in part about the digital everyday. Every day we go into the cubicle (or office, or classroom, or Starbucks) and log in to work on our identity, which increasingly gets swallowed up in some institutional identity or “corporate culture.” The kind of hermeneutics or interpretation I bring to bear on that kind of everyday is historical. I try to bring meaning to the digital everyday by breaking down the hyper-compressed sense of “now” that is its prison (or cubicle) and comparing it to past days. I make a narrative of the genealogy of “knowledge work” and, more specifically, of the information work that is a kind of carrier wave for knowledge work. And I use that narrative to make a historical critique. In this critical narrative, the intermediate “time frame” of the 1980s you point to is pivotal. The “now” and the far past, I believe, are necessary to each other, but can only be brought into meaningful engagement if their encounter is staged in a transitional zone of generational history–the history, that is, of the most recent change between generations that made us what we are today. Recently, after all, generational changes (between baby boomers, X’s, and now Y’s) have been the great scenes of critique, revision, and sometimes rapprochement. The 1980s witnessed a generation change simultaneously in society, business culture, intellectual approaches, and information technology (from the epoch of mainframes to that of the personal computer and the network). So that becomes the pivot point in my historical critique of the digital everyday.

GL: You write: “Cultural criticism is fundamentally historical.” At the same time, History as we know it is declared obsolescent. The history that unfolds is now partitioned into files and stored in a database. You call for cultural critics to become the ‘ethical hackers’ of knowledge work.

AL: Your question is interesting to me partly because of the way it is asked. There is actually no question in your question. No insult intended, but it’s as if you were yourself a database outputting information (a fragment from my book, sound bites from the culture of obsolescence, etc.). More frightening, you (and I, too!) are like many professionals today, whether they are information workers, economists, journalists, bloggers, or professors: we’re good at outputting data without any query (SQL or otherwise) actually having been made by anyone. We call that knowledge work, which produces a kind of “information overload” from which corporate culture harvests all its surplus value. (They don’t even need to query; we output!) I play upon the database-like aspects of your question because it’s a way of getting at what my book is about. A long time ago (and, of course, still in many parts of society today), people had another name for massive information dumps that occurred spontaneously without any query having been made. They called it God. It was God, or the gods, who spoke out of the burning bush to tell you what you didn’t even know you needed to ask. Before Oracle, Inc., in other words, there were oracles. But since the Enlightenment, secularization, and the many modern revolutions, that role of the oracle has been renamed History. We know we are in the presence of history when it preemptively tells us, and enforces upon us, something we didn’t even want to ask about. Gods and history: before we even know to query or pray, they have their rootkit in place.

So that accounts for the perhaps too romantic notion of the “ethical hacker” in my book. It’s now unfashionable to summon up prophets who can preempt even the preemptive force of the gods or history to query, in essence: what have we done that has called down upon us such a fatal information dump (the Biblical “handwriting on the wall”)? What was the query that we have forgotten? So ethical hackers must serve in the place of prophets. Ethical hackers are not just programmers or engineers, but also humanists, artists, social scientists, scientists, and occasionally economists, politicians, and media workers, too. Their calling is not just to query the database of cultural history, but to bring to view the conditions of critique, speculation, and downright curiosity that allow that database to speak unannounced, unbeknownst. The risk, of course, is that hackers of any sort–white or black hat–are just another priesthood, vanguard, or avant-garde. (So trust only hackers who–on principle, when needed for a greater good–are willing to turn off their own firewalls and open themselves to being hacked. Everyone else is just in the techno- or avant-garde priesthood.)

GL: Lately I had a short but interesting dispute on the phone. In the midst of a conversation the lady I talked to used the phrase “knowledge society” and I objected. I told her that I preferred “information society,” despite all its troubles. She said: “but that term is only used by technocrats.” Yes, I answered, but I like it more, compared to the hyped‑up term “knowledge” that, for me, stands for well‑meant, soft exclusion combined with ugly intellectual property rights clauses. Why should others define what knowledge is, and is not? Information is a much broader term. It’s cold and technical, perhaps even anti‑human, and it leaves the possibilities open. Your book circles around “knowledge work” in the age of computerization. How do you judge the current knowledge society craze?

AL: I think that you were exactly right in your phone conversation with the person who preferred “knowledge society” to “information society.” “Knowledge” is supposed to mean a deeper, cohesive, integral, and more spiritually real apprehension–less a way of knowing, really, than a way of being. Even some business books (for example, Peter Senge’s The Fifth Discipline: The Art and Practice of the Learning Organization) treat it that way. The problem, of course (pace Bourdieu), is that knowledge is also a way of life or lifestyle. I call it in my book “work style” (lifestyle = work). As such, it is dominated by institutions that know better than you how to work at knowledge. Such institutions shape knowledge at the level of workaday protocols (“this is the document format you will use”) and also at the level of overall social protocol (“corporate culture”). Indeed, the power of contemporary institutions is that they enforce a seamless fit between workaday and cultural protocols. It is all one protocol, which substitutes for what we used to call culture. In this situation, the apparently reductive, purely “technical” notion of “information society” is preferable to “knowledge society.” “Information workers” are peons of the knowledge-work regime who don’t always need to conform to the knowledge ideology. Some good engineers and sysadmins I know are like that. They do their thing without needing to pretend that they are integral parts of the whole corporate culture of knowledge. On the one hand, they can be perceived reductively (“just a sysadmin”). But, on the other hand, they have powers and capabilities that spread out in decentralized ways beyond the institutional knowledge construct. The so-called “professions” used to function in that capacity (with their own professional associations and guilds spreading out beyond any particular company). Now that professionalism has been increasingly subordinated to corporatism (as in the corporate attorney or accountant), the techno-people are stepping into the role. They do biz; but, for example, they also do open source.

GL: I don’t know of any literary scholar who reads, let alone analyzes, business books and quotes management gurus. Should students of English read Tom Peters, in much the same way as cultural studies professors require a discourse analysis of South Park? Business magazines were everywhere in the ’90s, but no one seemed to know how to read them. A ‘French’ reading perhaps would have failed, but it wasn’t even tried. The global social movements at the time focused on foreign policy, the WTO and the IMF, not on the techno‑libertarian ideology. We’re still waiting for Marxian critiques of the ‘creative’ knowledge industries. Is your literary criticism taking the lead?

AL: Some of us in literature departments are beginning to “read” business books closely–my colleague here at UCSB, for example, Christopher Newfield (“Corporate Culture Wars,” Ivy and Industry). Chris has actually interviewed Tom Peters and his group. In general, I think that a serious literature department today should be able to offer a course titled “Contemporary Fiction” in which novels are read alongside selected works from business, economics, politics, city planning, and journalism (including medical, scientific, and technological journalism). In a sense, the search for the “great American novel” is over. The winner is business literature. I can’t easily think of another genre of blended realism and fantasy, gritty concreteness (case studies, character studies) and sweeping vision, objective description and moral designs upon our soul that has such wide cultural impact.

It would be facile, however, for literature professors to read business literature as if it were exactly like a good work of fiction. Similarly, it would be facile for cultural studies professors to read such works as if they were just another kind of popular or consumer media. To engage seriously with business literature will require rigorous attention to financial structure, accounting numbers, the graphic design of annual reports, and–goodness–even the prose of Alan Greenspan. We are dealing here with producer rather than consumer culture. Like many humanities scholars, I myself can only scratch the surface because I do not have the numeracy skills, for example, to understand such fictions behind contemporary corporate maneuvers as the “derivative” and the “leveraged buyout.” The dollar (or Euro), after all, is the most powerful vehicle of imagination or speculation ever invented. (It’s amazing how many things people of even the most pedestrian imagination can “see” in a hundred-dollar bill!) But humanities scholars tend not to have the skills to follow the manipulations of dollars beyond the scale of their own vanishingly small publishing royalties.

GL: Five years after you completed most of the manuscript (early 2001), the book still has a freshness, density, and untimely character that fascinates me. Still, you remark in a footnote that it has a definite pre‑9/11 quality. The dotcom crash unfolded as you wrapped it up. Would you write it differently now? So many of your topics have gained further importance. The ‘cool’ hasn’t cooled off. Creative industries are still on the rise… so what did change?

AL: On the one hand, even while my book is very attuned to our hyper-compressed “now,” as I put it earlier, it is relatively insensitive to what Fernand Braudel called the short-term l’histoire évènementielle, which, to adapt his metaphor, is just white crests of waves on top of the deep seas of information history that we “surf.” But neither do I go back to the millennia-spanning longue durée. (Albert Borgmann’s Holding On to Reality: The Nature of Information at the Turn of the Millennium is better at that, since he imagines for us the prehistorical, ancestral state of information.) My book instead goes back to the intervening time frame that Braudel called “conjunctural” or “cyclical” history (in between the scales of short-term and longue durée history)–which, by the way, coincides with the “long-cycle” innovation history that economists such as Joseph Schumpeter, theorist of “creative destruction,” specialized in. In that time frame, 2001 does not make an epochal difference. It only builds on what came before. (When I was still trying to make a publication deadline of 2000, which I failed to do, I wrote often in the present tense. Then, when it became clear that the book could not be finished and published until a few years later, I went back and revised what I had written about the 2000-2001 moment in the past tense. To tell the truth, the book felt more true to me then. The present tense had imposed an artificial strain, when my real intent was the continuity of the present with the past.) But, on the other hand, I cannot deny that the book would have been different if it had been written mostly after the events of 2001. As you know, one of the larger chapters in my book (Chap. 11, “Destructive Creativity: The Arts in the Information Age”) inverts Schumpeter’s idea of “creative destruction” to focus on contemporary “destructivity.” I have already begun work on a new book tentatively titled Thinking Destruction (though it is being slowed because I am first trying to get to the publisher my Local Transcendence: Essays on Postmodern Historicism and the Database, which bundles together my essays on historical critique). In Thinking Destruction, I am re-exploring the theory and history of “creativity” from the perspective of destruction, which has its own structures, processes, and complex/emergent agendas. So, in answer to your question, my Laws of Cool would have been different if written after 2001 because I would have emphasized even more strongly the need to look at the dark side of the force. Knowledge workers today say “cool” in the way that a Jedi might say “May the force be with you.” But we need to remember the dark side of the force. Who is Darth Vader, after all, but the ultimate worker in a cubicle–a cubicle so tight that it is an armored suit?

GL: What, except for Microsoft, needs to be destroyed? “Macht kaputt, was euch kaputt macht” (destroy what destroys you) is a famous German punk slogan. However, the destruction that you talk about can hardly be labeled punk. At best it’s tinkering, uncovering the dark, but it can also be the kind of adolescent troll behavior that attracts attention and aims at the destruction of online social structures. Can we still make a distinction between us and them, between those who need to be defeated and those who are the revolutionaries? Isn’t the problem of destructive creativity also that everything (digital) can and will be saved, stored, and archived? And talking about creative destruction, what do you see as obsolescent these days? We know what management gurus in the late nineties wanted to blow up in terms of old corporate structures, but what is there really to do if we want to apply destructive creativity? I often see not much more than self-destruction or predictable critiques of mainstream media.

AL: “Destructivity,” as I call it, is a much larger and more interesting phenomenon than adolescents, artists, intellectuals, and hacktivists performing, as you suggest, the now predictable acts of mischievous critique and petty kink. It is a way of participating in a civilization of destruction. As you know, the great theories of “civilization” in modern times have been dark ones–whether we think of Weber’s vision of bureaucratization, Freud’s of repression and sublimation, Foucault’s of discipline, Habermas’ of the decoupling of the “lifeworld,” or even Norbert Elias’s works on the “civilizing process” and the rise of “manners.” The process of civilization is not the bright, Enlightenment vision of ever-upward “progress” in which all the main domains of life–intellectual, social, economic, and cultural–improve together, but instead a kind of hostile takeover of life at large by the rational-economic subsectors of life. That’s what we call corporatization today. Corporatization attempts to sell its own vision of civilization, which it calls “globalization,” on the basis of a kind of neo-Enlightenment vision of progress, which it calls innovation. It is astounding, for example, how many business books and articles there are with such titles as “Continuous Innovation,” “Radical Innovation,” “The Innovative Enterprise,” “The Rise of the Creative Class,” “Creativity Under the Gun,” and so on. But a dark interpretation of such civilization would ask: what needs to be destroyed to make creation possible? Even more interesting, what logics, structures, and technologies of destruction are embedded so deeply in the process of creativity itself that they’re not just viral; they are part of the DNA of “creative destruction”?

What I call “destructivity” is a way of asking such questions and, on that basis, proposing ethical as well as tactical “best practices” for participating in the civilization of creative destruction. So, to come back to adolescents (we call them “students”), artists, intellectuals, and hacktivists: such people are often like the old Processed World collective I write about at one point in my book, who wrote critiques and played merry, situationist pranks against institutionalized knowledge work while simultaneously eking out a living in office cubicles. Such people–who can easily be mocked, but can just as easily be cherished as the carriers of our collective best hopes and dreams–stake out their identity at the margins of the major power institutions, neither fully in nor fully out. Under the title of “destructivity,” I want to provide a more useful rationale for how and why such people can best participate in the major institutions of knowledge work that, in one way or another, they have to engage with anyway.

In this regard, the old post-May-1968 choice between staying true to the revolution and “selling out” seems to me terrifically unuseful in giving people a reason to situate themselves in the world of life and action. A better rationale might be framed by the question: “If you see that corporate life is destructive when it tries to ‘civilize’ the globe, can you do a better job of managing that destruction?” That is, the corporations may advertise for innovation managers (called “designers”), but what the civilized world really needs are destruction managers. How can the great process of destructive creation driving globalization today, with all its myriad social, economic, political, religious, environmental, and cultural effects, best be managed so as to blunt its worst tendencies and, despite itself, to evolve emergent, new ways of sustaining what the classical philosophers once called the “good life”? So my message to the adolescents doing the whole net, hack, and porn thing in their bedroom is: instead of staging trick acts of destruction from the outside, can you find a way to manage destruction from the inside for a larger social good? The role of artists, intellectuals, and educators, it seems to me, is to educate those adolescents to the point where they can see that there are larger, and socially good, ways in which they can contribute to our civilization of destruction. That’s what education today means. Let’s face it, educators today are training the cadres of those destined to work inside the knowledge work machine (this isn’t the 1960s, when some educators fantasized that they were training people to “drop out” of the system). I want to place students inside the system who can better manage our civilization of destruction.

GL: Much of your book is devoted to the corporatization of the university. You must have seen a fair bit of decline, or “change” as the business rhetoric calls it. You call scholars “middle managers”. The academy has lost its “supreme jurisdiction over knowledge”. Can it reclaim such a position? What, then, can we teach students about how best to do their “knowledge work”? They do not need to be told how to surf the Net.

AL: You touch here on the vexed issue of the role of the university in the age of knowledge work, which, as you see, I have impatiently already started talking about. My short answer to your question is that students do “need to be told how to surf the Net.” Otherwise they will end up serving just the particular versions of the net that the great institutions and nations of our day have in mind. I don’t mean that students should be counter-indoctrinated in any left- or cultural-critical understanding of information technology, knowledge-work society, and the university’s role in all that (as if that would work!). I mean that training in critical and ethical action for networked society (what I theorized as the “best practices” management of destruction) can only be built on top of what students really need to learn: knowledge not just “of” the tools/skills needed to succeed in contemporary society but also “about” those tools/skills. They can make up their own minds about how best to use their tools and skills if we can only teach such tools fully enough that the technical, social, cultural, aesthetic, ethical, and historical context surrounding their invention and implementation (as in the recent controversies about the trustworthiness of Wikipedia) comes into view.

Currently, for example, I am leading a collaborative project in the University of California system called “Transliteracies,” whose goal is to “improve” online reading practices with an awareness (historical, social, aesthetic, and computational) of what “improvement” might mean. This project involves understanding what actually happens when students “surf the Net” and what kinds of new, untapped intelligences lurk in what might otherwise be called shallow, broad, casual, quick, or lateral browsing/searching. If we can understand better what happens when we surf the net (specifically, when we read online text in adaptive relation to new media and networked environments), then perhaps we can build tools and skills that give users, including students, a better chance of surfing the net to gain knowledge, as opposed to just doing knowledge work. Knowledge involves a self-reflexive circuit in which what we know is mediated by what we know about how we know. Today, universities should not only teach such recursive knowledge at a high, intellectual level (“no data without exposing the metadata” is my slogan) but also intervene at the level of the tools and source code that make knowledge possible.

Companies like Google, Amazon, and Adobe are innovating wonderfully in online reading, for example; but there is also a necessary role for the fully multi-disciplinary, historical, and social-good perspective that is only possible (among today’s major social institutions) in the university. Keep in mind that the distance between research and end-user in the university is extremely short. The divide between research and teaching in the university is a cliché that is not really true. Every day, researchers in the university have to face that lecture hall of uncomprehending, bored, or suspicious students–end-users, in other words, at the most formative, vulnerable, yet (paradoxically) also shielded point in their lives. And so, ethically and pragmatically, the researcher-teachers of the university need to collaborate to create the tools that allow knowledge actually to work, which is to say, to be shared. (True knowledge work = knowledge sharing.) I’ve been thinking about the issue of the university and society for a long time, and you pressed the hot button.

GL: Do you find it justified to talk about a Web 2.0 wave? What could be the theoretical tools, for humanities scholars, to analyse blogs and social networks such as Orkut, Flickr, and MySpace? Most net artists and activists that I know can’t deal with the subject formation that’s happening inside those networks. They don’t want to write a personal diary and don’t feel like going to a dating site after work.

AL: Our Transliteracies project is beginning to collect for study in its Research Clearinghouse some of the tools that people have invented to analyze online social networking (http://transliteracies.english.ucsb.edu/category/online-social-networks-tools-for-analyzing/). The social and collective dimension of online reading is one of the project’s main concerns. In general, though, I am highly skeptical of the “Web 2.0” hype. There are two reasons for this. One goes back to the issue of history on which our interview started. “Web 2.0” is all about a generation-change in the history of the Web, but from a perspective that is looking at what is happening right now, as opposed to what was happening during the previous generational change (the “1980s” we discussed earlier). It’s not clear that we can really describe a generation change of this magnitude and complexity while we are in the midst of the change itself, except to say that “something” is happening that a future generation may decide is qualitatively different. After all, when people speak of Web 2.0, they are actually referring to a swarm of many kinds of new technologies and developments that are not all necessarily proceeding in the same direction (for example, toward decentralization, open content creation and editing, Web-as-service, AJAX, etc.). It’s not at all certain, for example, that open content platforms in the style of blogs, wikis, and content management systems align with a philosophy of decentralized or distributed control, since many such database- or XML-driven technologies require a priesthood of backend and middleware coders to create the underlying systems and templates for the new “open” communications. Just how many people in the world, for example, can make one of the current generation of open-source content-management systems (which often start out as blog engines) do anything that isn’t on the model of “post”-and-“category” or chronological posting? Even the more trivial exercise of re-skinning such systems (with a fresh template) requires a level of CSS knowledge that is not natural to the user base. So saying that we are making the change from Web 1.0 to 2.0 is like saying that a swarm behavior is definitely moving in a single direction, when in fact it may be moving in several contradictory directions at once. (It’s not accidental, by the way, that many of the best known statements or conferences about Web 2.0 have relied on examples rather than generalizations. For example, Web 2.0 is “Flickr or MySpace.”)

My second reason for being skeptical about “Web 2.0”–at least the hype about it–is more important. I think that people who make a big deal out of Web 2.0 are trying to take a shortcut to get out of needing to understand the real generation changes that are happening in the background and that underlie any change in the Web. Those changes occur in social, economic, political, and cultural institutions. Let’s take the example of Facebook or MySpace, which (like other social networking systems) are often spoken of as exemplars of Web 2.0. These systems, of course, are deeply rooted in particular social scenes–especially at different levels of the educational system (even if MySpace started out in the music scene). There was recently a mini-scandal at my daughter’s school (she’s 13) when it was discovered that many in her class had lied about their age to set up MySpace pages, where they revealed unguarded details and characterizations about themselves without full awareness of what it meant to be online. What is happening in such social scenes as the generations change?

Web 2.0 is just a high-tech set of waldo gloves or remote-manipulators that tries to tap into the underlying social and cultural changes but really requires the complement of disciplined sociological, communicational, cognitive, visual, textual, and other kinds of study that can get us closer to the actual phenomena. That is, thinking that Web 2.0 is cool is just a shortcut because the real scene of cool lies underneath; and I don’t think there are many developers of Web 2.0 technologies who have done the hard social and cultural studies to help them think about what they are developing. They make a neat system or interface that only taps into some aspects of the social scene. Then, if there are a lot of hits or users, their system is said to be a paradigm. But it’s hit or miss. There is no assurance that such technologies are the real, best, coolest, or even most useful “face,” “book,” or “space” of people–only that they are the face, book, or space allowed to surface through a particular lash-up of technologies. What is happening underneath is history, in other words, and it is stupid to think that “Web 2.0” is any better as a formula for that than, for example, the even more stupid formula, “Generation Y.” Ultimately, I guess, I don’t believe in such concepts as Web 2.0 because I don’t believe in People 2.0. (Go to the transhumanists for that). People live and change in relation to all that gave them their history as people; and that history is swarming, overlapping, conflicted, and multidimensional.

GL: When I first read about the Transliteracies project I was surprised to see that it was focusing on online reading. That seems so passive, as if the computer were a mere extension, or hybrid, of the television and the book. There is this widely shared assumption that the computer is there to produce texts, images, and sounds. A high-quality consumption of the produced content will likely happen elsewhere: in the cinema, a magazine, the lounge, through your iPod when you’re on the road. And then there is the cultural factor that US citizens spend a lot of time in front of PCs, whereas in other cultures people would rather do something more social, with family and friends, out on the street. What are the presuppositions and outcomes of Transliteracies so far in this respect?

AL: I disagree with you here, Geert, though usually we are–as they say–on the same page. The recent, explosive research in the related fields of “history of the book,” “history of print culture,” and “history of reading” shows that reading has never been a passive task–if by passive we mean the rote usage of information distributed through well-understood, regulated channels. Consider, for example, William St Clair’s recent The Reading Nation in the Romantic Period (2004), which is an astonishing work of archival recovery and methodological innovation that, for instance, demonstrates the tremendous variety and inventiveness in the relations between, on the one hand, publication systems and, on the other, what can only be called reading systems (including the many kinds of collective reading societies, book clubs, lending libraries, etc., of the time). Nor is it only the scene of collective reading that teemed with inventive activity in the past. The individual reader was inventive as well. Much of the recent research on the transition from the age of orality to that of manuscripts and then of print has been about the way reading changed as a psychological or cognitive activity. And this is not even to mention the tremendous ferment of “writerly” activity that readers have always undertaken, including the medieval culture of copying and glossing, the Early Modern culture of the “commonplace book” (the precursor of today’s sampling, aggregating, etc.), the long history of annotation practices, and so on. It’s a hoax that reading has ever been a passive activity. And this is even more the case now in our current moment, when we are changing our collective print-reading practices to adapt to online reading practices, and vice versa.

There is no “producer” today in any realm (scholarship, the film industry, the technology industry, journalism, you name it) who is not first of all a prolific and creative “reader.” Granted that much of this reading occurs in ways that are quick, distracted, and superficial–that is, not through “deep” or “close” reading but through scanning, browsing, searching, aggregating, etc. And granted that an increasing proportion of the works that are read belong to such genres as the memo, report, spreadsheet, email, Web page, blog post, text message, podcast audio, and so on. But the premise of Transliteracies is that there are hidden intelligences and social agendas within such contemporary “superficial” reading–especially in its network effects–that can be formulated and improved, both for private and social good. I don’t see any reason why corporations like Google, Amazon, Adobe, and so on should have an oligopoly over developing the activities (not the passivities) of online reading practices. They do what they do well. But what are the under-researched and under-developed areas in online reading technologies, tools, and systems that need to be explored by non-profit and other social sectors to make the overall framework of online reading more robust and diverse? I’d like to see universities, governments, NGOs, and others contribute to that research before everything is graven in stone by the big business of online reading. So, the short answer to your question is: perhaps only the corporations want you to believe that reading is passive. Take what they give you (their systems, their innovations, their file formats and protocols, etc.). But it ain’t so.

And as regards “high-quality consumption”: I don’t actually think it is a done deal that high-quality consumption always means high-sensory consumption of the sort you suggest (cinema, glossy mags, iPod, etc.). For a significant part of the world, including people who are producers in the knowledge-work economy, it continues to mean low-sensory text. We don’t even need to go to the old-school Unix folks for witnesses (all those old postings in the comp.unix.user-friendly or comp.human-factors newsgroups, for example, about why the Unix command line is actually more friendly because it gives control instead of illusory ease-of-use). We just need to consult the “new media” crowd, which I think is the immediate audience of our present interview. Real high-quality consumption for this crowd means source-code or script view, which is plain text. And that is not even to mention the whole new plenum of machine readers (that is, RSS, adaptive aggregators, “Web services” of different kinds, etc.), which sip the fine wine of XML directly. These are also paradigms of active text-reading. Reading practices–individual, social, and machinic–are where the action is. Okay, so you can tell that I am an English professor who grew up reading (to the point where in childhood my parents had to make rules about how many hours I had to play outdoors instead of curling up with a book). If Marshall McLuhan was an English professor who betrayed reading (the “Gutenberg galaxy”) to prophesy the new mediaverse, then I am an English professor who sees a whole new, online textverse within the mediaverse.

http://vos.ucsb.edu/liu-profile.asp