In the information society, the current reality is an increasing dependence on technological resources to create order and to find meaning in a gigantic quantity of online data. Searching has surpassed browsing and surfing as the main activity on the web. This development has turned the search engine into our most significant point of reference. Its focus on efficiency and the expansion of services, however, tends to veil the nature of the technology as well as its underlying (corporate) ideologies.
In this query-driven society, the Society of the Query conference seeks to analyze the impact that our reliance on resources to manage knowledge on the Internet has on our culture. The theory of a semantic web lurking around the corner revives the ‘human vs. artificial intelligence’ debate. The centralization of the web demands that we critically question the distribution of power and the diversity and accessibility of web content, while promising alternatives to the dominant paradigm surface in peer-to-peer and open source initiatives. Finally, the question arises as to what role politics and education, after having invested substantially in media intelligence, can play in the creation of an informed users’ group.
For two days, the Society of the Query conference will zoom in on some of the essential themes surrounding web search through critical analysis and the contextualization of developments in interface design and the organization of knowledge. The Institute of Network Cultures seeks to achieve this specifically by uniting researchers, theorists, activists, artists and professionals working in this area and by creating a platform not only for realized projects and recent research, but also for open questions and predictions.
10.15 – 12.30 > SESSION 1 >
Society of the Query
Due to the difficulty of managing the vast amount of dynamic content available on the Web, it often lacks editorial review, and finding meaningful content has become increasingly dependent on technological resources. The traditional role of the expert-editor has gradually been replaced by the algorithm, introducing a specific logic and privileging mechanism for organizing Web content. In recent years, the growing dominance of a few main search engines has triggered many people to look critically at the way search engines rank and serve their results. This conference session will focus on ‘searching’ at the level of the software and will discuss the notion of the organization of knowledge within the theoretical frameworks of both the humanities and computer science. Can we trace the history of knowledge organization, and what is the impact of the back-end algorithm, which is increasingly becoming the dominant means by which users acquire and make sense of information online?
Moderator: Geert Lovink
Yann Moulier Boutang (F)
Inescapable Google? Organization of knowledge, economic value in cognitive capitalism, and collective intelligence
Google, the most-used search engine, has conquered a dominant position. For activists the question has become: how do we get rid of it? Until research on the Semantic Web (P. Levy) or the ‘deep Net’ has produced results, Google will enjoy an inescapable monopoly. Google has become the emblem of cognitive capitalism because it has invented a new economic model relying on the controlled development of collective intelligence in networks, a kind of neo- or post-market. Google has cleared the path for cognitive capitalism as the only way to survive in a world of communization of production through contribution and pollination. It combines free access, as a necessary condition, with the harvesting of real economic value. Taking this into consideration is necessary if we want to understand how to cope with major search engines.
Matteo Pasquinelli (NL)
Google’s PageRank Algorithm: A Diagram of Cognitive Capitalism and the Rentier of the Common Intellect.
The origin of Google’s power and monopoly is to be traced to the invisible algorithm PageRank. The diagram of this technology is proposed here as the most fitting description of the value machine at the core of what is diversely called the knowledge economy, attention economy or cognitive capitalism. This essay stresses the need for a political economy of the PageRank algorithm, rather than expanding the dominant critique of Google’s monopoly based on the Panopticon model and similar ‘Big Brother’ issues (dataveillance, privacy, political censorship).
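The mechanics Pasquinelli refers to can be illustrated with a minimal sketch of the power iteration at the heart of PageRank. The toy link graph, damping factor and iteration count below are illustrative assumptions, not Google's actual implementation:

```python
# Minimal PageRank power iteration on a toy link graph.
# Rank flows along links: a page earns value from the pages that link to it.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # dangling page: distribute its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

toy_web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(toy_web)
# "c", linked from both "a" and "b", accumulates the most rank --
# popularity begets visibility, which is exactly the dynamic under critique.
```

Note that the ranking is produced entirely by the link structure that users and publishers create collectively, which is why Pasquinelli reads PageRank as a machine for capturing the value of the common intellect.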
Teresa Numerico (IT)
Cybernetics, search engines and resistance: Notes for an archaeology of techno-knowledge of search.
Why was Norbert Wiener so worried about cybernetics that he decided to disseminate it as widely as possible, with the precise intent of alerting people to its risks? It is very likely that he foresaw what would happen to digital technologies once they adopted the cybernetic approach, intertwining the concepts of communication and control. Search engines are a direct consequence of cybernetics in terms of the history and philosophy of technology. What we need now is a new ‘archaeology of knowledge’ of the actual developments of the different branches of search engine technologies. It would provide an analysis of the techno-scientific discourse, envisaging its power-knowledge connections and its ideological constraints. This critical attitude might introduce resistance against the dominant discourse, both by using other methods of searching and by creating ‘non-communicative’ open areas that are not susceptible to being archived or searched.
David Gugerli (CH)
The Culture of the Search Society. Data Management as a Signifying Practice
Databases are the search society’s operative platforms. Since the 1960s, they have been developed, installed, and run by software engineers with a particular future user in view, and they have been applied and adapted by different user communities for the production of their own future. Database systems offered powerful means for the shaping and management of a society, and they eventually developed into the main working base for a search-centered signifying practice. The paper will present insights into the genesis of a society which depends on the possibility to search, find, (re-)arrange and (re-)interpret vast amounts of data.
Book Presentation: Deep Search: The Politics of Search Beyond Google (Studienverlag & Transaction Publishers, 2009)
As a follow-up to the Deep Search symposium, held in Vienna, Austria on November 8, 2008, the World Information Institute has now issued the book Deep Search: The Politics of Search Beyond Google, which will be officially launched at the Society of the Query conference. The volume, edited by Konrad Becker and Felix Stalder, is a collection of 13 texts that investigate the social and political dimensions of Web search and address urgent issues of culture, context and classification in information systems. The authors are Konrad Becker, Robert Darnton, Paul Duguid, Joris van Hoboken, Claire Lobet-Maris, Geert Lovink, Lev Manovich, Katja Mayer, Metahaven, Matteo Pasquinelli, Bernhard Rieder, Theo Röhle, Richard Rogers, and Felix Stalder & Christine Mayer.
13.45 – 15.30 > SESSION 2 >
Digital Civil Rights
In 2005, John Battelle characterized Google as a ‘database of intentions’: a valuable archive of individual and collective wishes. As the number of services offered by search engines expands, large amounts of personal information are gathered, stored and used for commercial purposes. The current technological climate seems to be one in which users are virtually unaware of who or what is behind the Web applications they use on a daily basis. How, for instance, does the intermediary function of the search engine threaten digital civil rights such as the right to privacy and freedom of expression? What role can politics play in protecting these rights? How can the way search engines are designed aid in protecting our autonomy, and how will the legal framework concerning search engines be shaped?
Moderator: Caroline Nevejan
Joris van Hoboken (NL)
Search engines and user privacy: the need for legislative reform
The discussion about the protection of privacy for search engine users has matured over the last five years. Since the New York Times showed the sensitivity of pseudonymous search logs of AOL users, which had been released for research purposes, search engine providers have been pressed to improve their protection of user data. The EU’s Article 29 Working Party in particular has pushed the three major search engine providers in the US and Europe to minimize user data processing. The recent privacy discussion with regard to the Google Book Search Settlement, in which EFF and EPIC have taken the lead, is another example of a better understanding of the importance of privacy safeguards for online information seeking and accessing behavior. However, current data privacy laws do not acknowledge the importance of a free realm for information seeking. Even if search companies like Google were willing to protect their users against access to user data by third parties, they have no appropriate laws to turn to. In this presentation I will argue that the fundamental rights to privacy and freedom of expression warrant legislative reform to provide a free domain for internet users to seek and access information.
Ippolita Collective (IT)
What is to be done? or How is it to be done?, that’s the question!
Talking about ICT, not only activists but also scholars, politicians and common users often ask themselves: so what? Whatever. What is to be done if we all realize that Google, Facebook and other so-called social networks are spying on us? What if there were such a thing as Big Brother, if we’re all under Echelon’s ear, if both authoritarian and democratic governments use digital technologies against the freedom of speech? Here we find ourselves, blogging, and doesn’t it sound ridiculous that we’re tweeting the revolution? The Leninist slogan ‘what is to be done?’ is a typical question asked from a hegemonic point of view. Does the question make sense for the kind of counterpower that defines itself by the same criteria as the power it opposes? For the oppressed, waiting for their turn to oppress, for those who want to rule someday? On the contrary, the quest for a new media literacy for those living in the Society of the Query should be: how should it be done? Which methods are suitable for people who want neither to rule nor to be ruled?
15.45 – 17.30 > SESSION 3 >
Alternative Search 1
In response to a growing interest in alternative methods to search the Web, this session will focus on alternatives that highlight vulnerabilities and shortcomings within the currently dominant search engines. Looking beyond the tag as a systematizing principle, how is, for instance, the field of visual search developing? What can we learn from search methods within different spheres of the Web? Additionally, we will look at search methods that disregard the ‘engine’ as the dominant paradigm. How promising are, for example, peer-to-peer and open source technologies with regard to the current search conditions, and which alternatives to commercial and centralizing methods have already emerged?
Moderator: Eric Sieverts
Matthew Fuller (UK)
Dissonance, Double-Accuracy and Parallel Worlds
Interfaces to search processes, and the question of what counts as a search and what as a result, are contentious matters. Image and sound based search engines, those with a linguistic or political project, and those that aim to disrupt or question the too-quickly established norms of search and display provide numerous possible redirections for such questions. In a short time this presentation will provide a partial survey of tendencies in the development of variant conceptions of search and locate its terms within wider considerations on the nature of software development.
Cees Snoek (NL)
Concept-Based Video Search
Despite the rise of commercial video search engines like YouTube, Truveo, and Blinkx, searching for relevant fragments in video collections is by no means a solved problem. Present-day commercial systems are mainly based on textual analysis of speech transcripts or closed captions. Unfortunately this approach is futile when the visual content is not mentioned in, or is unrelated to, the words spoken. In this presentation, a novel means to search video content using concept detectors will be discussed. The academic challenges will be highlighted, as well as the problems and solutions of concept-based video search. We introduce the MediaMill semantic video search engine and discuss its performance in international video retrieval competitions.
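The basic idea behind concept-based retrieval can be sketched as follows: each video shot carries confidence scores from visual concept detectors, and a query is mapped onto concepts rather than transcript words. The shots, concepts and detector scores below are invented for illustration and are not MediaMill's actual output:

```python
# Toy concept-based video search: rank shots by detector confidence
# for the concepts a query asks for, with no transcript involved.

shots = {
    "shot1": {"beach": 0.9, "crowd": 0.1},
    "shot2": {"beach": 0.2, "crowd": 0.8},
}

def search_shots(shots, query_concepts):
    """Rank shot ids by their average detector confidence for the query concepts."""
    def score(confidences):
        return sum(confidences.get(c, 0.0) for c in query_concepts) / len(query_concepts)
    return sorted(shots, key=lambda s: score(shots[s]), reverse=True)

results = search_shots(shots, ["beach"])
# shot1 ranks first even if nobody ever says "beach" in the audio --
# the visual content itself is what is being searched.
```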
Ingmar Weber (E)
“It’s Hard to Rank without being Evil”
Google and similar Web search engines are known for collecting detailed logs about all incoming requests and for mining this data on a large scale. In this talk I’ll discuss whether good ranking is possible without such an approach, and whether peer-to-peer Web search engines are inevitably doomed to present mediocre results. First, I’ll discuss scenarios where ranking is not required at all. Then I’ll give an overview of the sources of information used for ranking by current Web search engines. Finally, I’ll try to point out the relative importance of each information source and how easily accessible it is.
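Why usage logs matter for ranking can be shown with a deliberately tiny sketch: content analysis alone often cannot break ties that past user behaviour resolves easily. The documents, query and click counts are invented placeholders:

```python
# Toy illustration: ranking with and without usage-log signals.

def rank(docs, query, click_counts=None):
    """Rank doc ids by term overlap with the query, optionally boosted by clicks."""
    def score(doc_id, text):
        overlap = sum(text.lower().split().count(t) for t in query.lower().split())
        boost = 1.0
        if click_counts:
            boost += click_counts.get(doc_id, 0) / 100.0
        return overlap * boost
    return sorted(docs, key=lambda d: score(d, docs[d]), reverse=True)

docs = {
    "d1": "jaguar speed animal",
    "d2": "jaguar car dealership speed",
}

# Without logs the two documents match "jaguar speed" equally well...
content_only = rank(docs, "jaguar speed")
# ...with logs, aggregate user behaviour breaks the tie in favour of d2.
with_logs = rank(docs, "jaguar speed", click_counts={"d2": 300})
```

A peer-to-peer engine has to rank without such centrally collected logs, which is one way to frame the question the talk poses.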
10.00 – 12.30 > SESSION 4 >
Art and the Engine
Even during its early stages, artists used the Web as a platform to produce and distribute an extensive diversity of media such as animation, programming, video, audio and games. While in the last decade we have witnessed a shift from the directory towards the algorithm, it is the art database that has been refining the directory model for years. What influence does Google’s omnipresence have over the production and distribution of Web-based art? How does art criticism manifest itself in the era of Google, and how can online artistic experience be preserved and made easy to find? This session will discuss the latest developments within the field of graphic design, art and the architecture of information, present potential outcomes of search result design and investigate how the interface may stimulate new and progressive ways for the user to search, find and analyze data.
Moderator: Sabine Niederer
Lev Manovich (USA)
Learning from Google: a search engine as a method for cultural analysis
Can we translate the principles of search engine algorithms, and large-scale data analysis in general, into a new methodology for cultural theory? In my talk I will discuss what such a methodology would look like, and also demonstrate practical examples drawn from Cultural Analytics research conducted in the Software Studies lab at the University of California, San Diego.
Daniel van der Velden (NL)
Peripheral Forces: On the Relevance of Marginality in Networks
Despite the intricate system of ranking, most engines make search look deceptively simple. Initially, ranking seems like a normal, everyday procedure, comparable to the ways we judge between relevant and trivial, foreground and background information in everyday life; after all, our own hierarchies of visibility are also shaped according to certain needs, beliefs, and limitations. Often, the hierarchy applied by ranking rewards what is already popular and suppresses less often viewed currents and opinions in broad, public topics. Redesigning the search engine begins with challenging the principles of relevance and popularity inherent to ranking. In this presentation, we argue how ranking mechanisms translate as phenomena of sociability, and how a different take on the sociability of ‘weak ties’ may bring a different appreciation of their relevance to networks.
Christophe Bruno (FR)
From Dada to Google
I will present some of my art pieces that deal with the hijacking of search engines on the net, from the Google hack Epiphanies (2001) to my recent Dadameter (2008), which is an attempt to map language at a large scale and to ‘measure our distance from Dada’. I will also discuss semantic capitalism as described in my performance The Google Adwords Happening (2002).
Alessandro Ludovico (IT)
The Google Paradigm: for the funny dictator it’s never enough.
Google establishes monopolies. It conquers predominance in strategic net sectors with a pervasive coolness and attractive, error-proof functionalities. Its empire is easily and widely acknowledged, and because of its accelerated innovation rate, ‘antitrust’ sounds like an obsolete and uninteresting word. Google has the power to establish rules that are both flexible and effective. Internally, it gains more productivity by lending ‘freedom’ to employees in organizing their own working time. Externally, its brand and products focus on a sophisticated commodification of knowledge, pursued through the myth of sempiternal searchability and sold with the semi-infinite potential of contextual advertising. This deadly combination both entertains through its charm and creates a conceptual shield for Google’s growing collection of monopolies. But even if Google wants to sweetly take over a large part of the internet and entertain us forever, there are still chances to debunk its incredibly effective communication strategy (at all levels) and mass-based economy. Based on the aftermath of the Google Will Eat Itself artwork, a parasite strategy can be outlined to conceptually dismantle this seemingly self-referential paradigm.
Ton van het Hof (NL)
Flarf poetry is sometimes referred to as an avant-garde poetry movement of the late 20th and early 21st century. Flarf poets harvest their material on the Internet by typing combinations of search terms into a Web search engine. Whether coming across Shakespeare’s Sonnets, Heidegger’s Sein und Zeit or gross stories about animal sex, Flarf poets take today’s society as it presents itself, and give it back to us: abstracted, enlarged and ridiculed.
13.45 – 15.30 > SESSION 5 >
For most users worldwide, Google is the primary entry point to the Web. The current dominance of Google search might be best understood within a larger epistemological shift away from an expert-driven ordering of information towards a growing emphasis on the algorithm. The algorithmic privileging of sources based on popularity, however, has important consequences for the type of content reflected in the search results. Issues to be discussed in this session are the influence of the Google hegemony on the flow of information on the Web and how this may affect the way we think, act and interact with online information. Speakers will address the particular way Google ranks and serves its results, the diversity of the results, the accessibility of niche or local content and the role of the user in acquiring relevant sources.
Introduction and moderation by Andrew Keen
Siva Vaidhyanathan (US)
The Googlization of the Global Street
After examining the wide array of reactions to Google Street View and the standard way that Google dealt with each unique cultural, political, and historical context, I wondered whether Google operated with a universalizing ideology. Did the company consider local differences and concerns? I didn’t see any evidence of it in the Street View saga. The tension between universalism and particularism in the age of rapid globalization is well trodden. It’s clear after decades of argument that ideologies such as market fundamentalism, liberalism (with its imperative for free speech), techno-fundamentalism, and free trade are no longer simply “western” – if they ever were. It’s too simple (and ahistorical) to tag such ideologies merely “imperialistic.” But they are universalizing. They do carry strong assumptions that people everywhere have the same needs, values, and desires – even if they don’t know it yet. Instead, it seems that if there is a dominant form of “cultural imperialism,” it concerns the pipelines and protocols, not the products – the formats of distribution and the terms of access and use. It is not exactly “content neutral,” but it is less necessarily “content specific” than cultural imperialism theorists assume. The texts, signs, and messages that flow through global communications networks do not carry a clear and unambiguous celebration of ideas and ideologies we might lazily label “Western”: consumerism, individualism, and secularism. These commercial pipelines may carry texts that overtly hope to threaten the tenets of global capitalism, like albums by the leftist rock band Rage Against the Machine, films by Michael Moore, or books by Naomi Klein. Time Warner does not care whether the data inscribed on the compact discs it sells simulate the voice of Madonna or Ali Farka Touré. What flows from North to South does not matter as much as how it flows, how much revenue the flows generate, and who may re-use the elements of such flows.
In this way, the Googlization of the global flows of information and culture has profound consequences. It’s not so much the ubiquity of Google’s brand that is troubling, dangerous, or even interesting. It’s that Google’s defaults and “ways of doing” spread and structure ways of seeking, finding, exploring, buying, and presenting that influence (but do not control) habits of thought and action.
Martin Feuz (CH)
Google Personal Search – What are we to make of it?
History has witnessed a number of attempts to organise the world’s information, each with its own underlying operational model for doing so. Google’s Personal Search is a more recent endeavour, additionally aiming to personalize the search experience while keeping user effort at a minimum. In its original inception in 2004, the user could still select her topics of interest and selectively adjust their importance with a slider. In its latest reincarnation in 2007, profiling has been fully automated, as has enrollment in the service upon sign-in to a Google account. According to Sep Kamvar and Marissa Mayer (Feb 2007), ‘personalization at first is subtle, but over time you’ll see it’. Unfortunately, Google’s search result page neither gives any indication as to when we are served personalised search results, nor does it point out which ones they are and on what specific basis they were derived. This talk will present my recent research findings, which allow us to get a sense of the delicate specificity with which such personalized search results surface. The findings will highlight that giving the user such indications would seem entirely appropriate. They will do so by reflecting on the research findings in relation to concerns of social sorting, network structuring and the reproduction of dominant voices, as well as the mere strangeness of those personalized search results produced and dressed up as top entries in one’s personal index.
Esther Weltevrede (NL)
Google as a Globalizing Machine
Google’s mission statement is “to organize the world’s information and make it universally accessible and useful.” The question is: Can a globalizing machine ever present the local? Throughout the years Google has introduced over 150 national domain Googles, with Google.ps as its latest addition. The success of Google’s move to the local is based on the premise that the relevance of information sources is also dependent on location. But as Google’s PageRank algorithm privileges the sources that receive most links, does it end up giving global sources top positions in the local rankings too? How far along is Google’s customization on location? The distinctiveness of results for the same query in national domain Googles formed the starting point for two case studies: “The Nationality of Issues: Rights Types” and “Local and Global Information Sources.” Both research projects compare and reinterpret search engine results across national domain Googles to make claims about local information cultures. Operationalizing questions are: Do the results tell us more about a country’s information culture, or about Google’s means of delivering content nationally? What kind of ‘local’ is created in the national versions? Which of the local Googles have particularly distinctive results?
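The comparison step behind such cross-domain case studies can be sketched very simply: collect the results for the same query from two national domain Googles and measure how much they share. The result lists below are invented placeholders, not real Google output:

```python
# Sketch: Jaccard overlap between two national result lists for the same query.
# Low overlap suggests distinctly 'local' result sets; high overlap suggests
# global sources dominating the local rankings.

def overlap(results_a, results_b):
    """Jaccard overlap between two collections of result hosts."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical top results for one query on two national domains:
google_nl = ["hrw.org", "amnesty.nl", "rijksoverheid.nl"]
google_ps = ["hrw.org", "un.org", "alhaq.org"]

share = overlap(google_nl, google_ps)
# 1 shared host out of 5 distinct ones -> 0.2
```

Repeating this over many queries and domain pairs is one way to operationalize the question of which local Googles have particularly distinctive results.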
15.45 – 17.30 > SESSION 6 >
Alternative Search 2
In this second Alternative Search session, some of the latest technological developments in semantic search functionality, as well as their implementation by the W3C and the European cultural heritage project Europeana, will be presented and discussed. In addition to being understood as enrichments of existing knowledge structures, these developments need to be critically addressed on both the cultural and the software level. Which ideologies make up the foundations of the concept of ‘ontology’? And what role will human expertise play in the era of ‘machine understanding’?
Moderator: Richard Rogers
Florian Cramer (NL)
Why semantic search is flawed
The “Semantic Web” and “semantic search” are frequently misunderstood concepts because they are described with words like “ontology” whose meanings in computer science diverge from colloquial and humanities understandings. In reality, they simply boil down to structured keyword tagging of information, which for many reasons does not scale beyond very limited collections of information and application scenarios, and which reveals a sometimes astounding naiveté about issues of culture and ontology in the original sense of the word. Finally, the false hopes for semantic search result from frustrations with design flaws of the World Wide Web that prevent more diverse search methods and technologies.
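What "structured keyword tagging" amounts to in practice can be shown with a minimal sketch. The tiny collection and its tag vocabulary are invented for illustration:

```python
# Minimal sketch of tag-based 'semantic' retrieval: documents carry
# structured key/value tags, and queries match those tags exactly.

collection = {
    "doc1": {"tags": {"subject": "jaguar", "category": "animal"}},
    "doc2": {"tags": {"subject": "jaguar", "category": "car"}},
}

def tag_search(collection, **wanted):
    """Return ids of documents whose tags contain all wanted key/value pairs."""
    return [
        doc_id for doc_id, doc in collection.items()
        if all(doc["tags"].get(k) == v for k, v in wanted.items())
    ]

# Disambiguation works only as long as everyone has tagged consistently:
hits = tag_search(collection, subject="jaguar", category="animal")
# A document tagged "automobile" instead of "car" silently disappears from
# category="car" queries -- the scaling problem the talk describes.
```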
Semantic Search for Europeana
Europeana is a pan-European initiative to make Europe’s cultural heritage accessible. It aims to aggregate millions of digital items, as provided by museums, libraries, etc. Allowing users to search among such a wide and heterogeneous range of cultural resources raises huge challenges; it also brings a unique opportunity to exploit the large body of knowledge that relates to these resources. I will present some of the latest technological developments that are being tested to provide Europeana users with semantic search functionality, using examples from the Europeana Thought Lab. In particular, I will sketch how re-using and enriching existing knowledge structures provide new query and exploration possibilities, beyond simple document search.
Steven Pemberton (NL)
Disintermediation through Aggregation: Making your Data your Own
The Sapir-Whorf Hypothesis postulates a link between thought and language: if you haven’t got a word for a concept, you can’t think about it; if you don’t think about it, you won’t invent a word for it. The term “Web 2.0” is a case in point: it conceptualizes the idea of Web sites that gain value through their users adding data to them. There are inherent dangers in using Web 2.0: it partitions the Web into a number of topical sub-Webs and locks you in, thereby reducing the value of the network as a whole. It also puts your data, and its ownership, at risk. So does this mean that user-contributed content is a Bad Thing? Not at all: it is the method of delivery and storage that is wrong. The future lies in better aggregators.
20.30 – 23.30
host: Michael Stevenson (NL)
The Saturday evening program will feature a selection of artistic and activist projects engaging with different elements related to Web search, such as settings, cookies and search results. Many layers of Web search are often overlooked or neglected in favor of easy and fast results. The evening program will dive into the non-functional use, or re-attribution, of some popular functionalities, elements and ideas that many take for granted in everyday Web searching. The works featured range from browser extensions to alternative uses of the search engine, Web-based art projects and videos. Especially highlighted is the work of Dutch and Netherlands-based artists/developers such as Constant Dullaart, Govcom.org, Erik Borra, Linda Hilfling, De Geuzen, Lernert Engelbrechts & Sander Plug and Andrea Fiore, most of whom will be present to discuss and elaborate on their work with the audience. The evening is hosted by Michael Stevenson and will take place in the downstairs bar area of TrouwAmsterdam: De Verdieping.