Pascal Jürgens – Measuring Personalization: An Experimental Framework for Testing Technological Black Boxes

Posted: November 12, 2013 at 10:00 am  |  By: philip  |  Tags: , , , ,

Pascal Jürgens, in his presentation titled Measuring Personalization — An Experimental Framework for Testing Technological Black Boxes, discussed issues of control and responsibility in regard to search engine results. As search engines provide ever easier access to content, they also hold immense power over what information users actually receive. With the ever-increasing use of personalization and prediction, search engines act as black-boxed systems that control flows of knowledge.


Jürgens discusses the oscillating nature of control between positive and negative impacts. From the earliest collection of information by feudal kings on their subjects, there has always been a power-based aspect to knowledge and how it is found. It is this historical nature of knowledge that led Jürgens to say, “It’s all new and it’s all old.” It is the new that becomes the focus of the presentation.

Jürgens raises the question of Google and its responsibility to “not be evil.” How does the use of advanced personalization, with its potential to influence users, fit into this question? Jürgens says that “personalized search results further expand this potential because they explicitly aim at maximizing the relevance of delivered content with regard to selection decisions. Despite their relevance, these technologies have rarely been subject to social scientific scrutiny.” As a social scientist, Jürgens focuses his research on the existence of this ‘filter bubble,’ the idea that the results we get are based on the results we want.

Jürgens determined that while results did fluctuate from one person to another, no real filter bubble appeared to exist. He went about determining this by creating multiple fake Google accounts, each with its own themed search history (politically left, young, old, and so on). These accounts would then query Google, and Jürgens would compare the results that were returned. In the end he determined that the results were similar enough to disprove the existence of a more controlling filter bubble. During the Q&A session after the talk, Jürgens explained that the testing methods for his research need to expand, and he is planning to continue studying the filter bubble.
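The comparison step of such an experiment can be made concrete. The sketch below is not Jürgens’s actual code; it merely illustrates, with hypothetical account names and URLs, how one might quantify personalization by measuring the pairwise overlap of the top result lists that differently themed accounts receive for the same query. High overlap across accounts argues against a strong filter bubble.

```python
def jaccard(results_a, results_b):
    """Share of URLs two result lists have in common (0 = disjoint, 1 = identical)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

def mean_pairwise_overlap(result_lists):
    """Average Jaccard similarity over all pairs of accounts."""
    pairs = [(i, j) for i in range(len(result_lists))
             for j in range(i + 1, len(result_lists))]
    return sum(jaccard(result_lists[i], result_lists[j]) for i, j in pairs) / len(pairs)

# Hypothetical top-5 results for three themed test accounts:
left  = ["a.com", "b.com", "c.com", "d.com", "e.com"]
young = ["a.com", "b.com", "c.com", "d.com", "f.com"]
old   = ["a.com", "b.com", "c.com", "g.com", "e.com"]
print(mean_pairwise_overlap([left, young, old]))  # ≈ 0.59 for these toy lists
```

A score near 1 across accounts would suggest uniform results; a score near 0 would point towards strong personalization.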

Erik Borra and René König Show Google Search Perspectives on 9/11

Posted: November 11, 2013 at 4:54 pm  |  By: Catalina Iorga  |  Tags: , , , , , , ,

Erik Borra and René König were the second to last speakers of Society of the Query #2’s sixth and final session, The Filter Bubble Show, with a talk on why search engines are biased. As a case study, Borra and König chose the controversial topic of 9/11 and tried to answer how Google’s algorithm decides what is relevant for this particular query. The reason they chose 9/11 as an object of study is its status as a global phenomenon examined from diverse perspectives, including conspiracy theories of the 9/11 Truth Movement variety, which question the mainstream version of events featured in the media.

For the past six years, a script made at the Digital Methods Initiative queried Google daily with the term “9/11” and stored the top 10 search results for each day. The corpus of Borra and König’s study consisted of results chosen from four dates per year, one every few months. The top 10 URLs for the selected days were then coded using an emergent coding scheme: reading through all the pages that the URLs pointed to, noticing content commonalities and constructing the main categories of ‘mainstream’, ‘conspiracy’, ‘meta’, ‘history / facts’, ‘memorial’, ‘aftermath’, ‘popular culture’ and ‘other’.
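The analysis step described above can be sketched as follows. This is not the actual DMI script; it is a minimal illustration, with hypothetical URLs and codes, of how hand-coded top-10 results for one sampled date could be tallied per category under the scheme named in the talk.

```python
from collections import Counter

# Categories from the emergent coding scheme described in the talk.
CATEGORIES = {"mainstream", "conspiracy", "meta", "history / facts",
              "memorial", "aftermath", "popular culture", "other"}

def tally(coded_results):
    """coded_results: list of (url, category) pairs for one sampled date."""
    counts = Counter(cat for _, cat in coded_results)
    unknown = set(counts) - CATEGORIES
    if unknown:
        raise ValueError(f"unknown categories: {unknown}")
    return counts

# Hypothetical coded top results for a single day:
day = [("en.wikipedia.org/wiki/September_11_attacks", "mainstream"),
       ("truth-movement.example.org", "conspiracy"),
       ("memorial.example.org", "memorial")]
print(tally(day))  # one hit each for mainstream, conspiracy, memorial
```

Tallying the coded corpus date by date is what makes it possible to trace how the balance between, say, ‘mainstream’ and ‘conspiracy’ results shifts over the years.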

Read the rest of this entry »

Thomas Petzold: Search Industry’s Five-Percent Gamble

Posted: November 11, 2013 at 3:56 pm  |  By: Catalina Iorga  |  Tags: , , , , , , , ,

Thomas Petzold started the second session of Society of the Query #2, ‘Search Across the Border‘, on a more positive note as he gave kudos to the search engine. He commended it for still being a great tool, one that has had a huge impact not only on the collective memory of our species, but also on how we collaborate when trying to solve problems.

However, when it comes to languages and search, things look a bit grim. Of the world’s approximately 6000 living languages, 95 percent have fewer than 1 million speakers; only 5 percent have more than 1 million speakers, while 1 percent of languages are spoken by more than 10 million people. Google supports only 5 percent of the world’s languages and has a huge preference for the most spoken ones: 40 percent of the languages it does support have more than 10 million speakers, 90 percent more than 1 million, and only 10 percent fewer than 1 million speakers.

Read the rest of this entry »

Rebecca Lieberman on the Poetics of Search

Posted: November 11, 2013 at 1:51 pm  |  By: Maya Livio  |  Tags: , , , , , ,

“Demented Panda and Koki wandered through the small plot of land. Except it was no longer only a small plot of land, but also an enormous food court. Except it wasn’t just a food court, but also an outdoor rehearsal space lent to artists by a small nonprofit arts organization. Except it wasn’t a rehearsal space, but a soundstage for gigantic live entertainments. Except it wasn’t a soundstage, but a fake Baghdadi neighborhood staged for counterinsurgency training exercises…”

–Excerpt from An Army of Lovers by Juliana Spahr and David Buuck

Thus began Rebecca Lieberman’s presentation in the ‘Art of Search’ panel at the second Society of the Query conference, introducing and setting a foundation for how to think about her visually similar imgs project. Her piece borrows its name and concept from Google’s Search by Image tool, a feature introduced in 2012 to allow users to reverse-search images by querying Google using visual rather than textual data. When Google is unable to locate an exact match for the image, it utilizes that image’s “visual DNA” – color, composition, pixel density, and other factors – to serve up a proliferation of aesthetically similar images. According to Lieberman, Google places the image into a grouping of related images with a “shared formal vocabulary,” bringing together disparate contents and contexts into the same space.
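The kind of “visual DNA” matching described above can be illustrated with a toy example. Google’s actual Search by Image features are proprietary; the sketch below merely shows one simple stand-in, reducing a hypothetical image (a list of RGB pixels) to a coarse color histogram and calling two images similar when their histograms intersect strongly.

```python
def color_histogram(pixels, bins=4):
    """Coarse per-channel color histogram, normalized to sum to 1."""
    hist = [0.0] * (bins * 3)
    step = 256 // bins
    for r, g, b in pixels:
        hist[r // step] += 1
        hist[bins + g // step] += 1
        hist[2 * bins + b // step] += 1
    total = 3 * len(pixels)
    return [h / total for h in hist]

def similarity(pixels_a, pixels_b):
    """Histogram intersection: 1.0 = identical color profile, 0.0 = disjoint."""
    ha, hb = color_histogram(pixels_a), color_histogram(pixels_b)
    return sum(min(x, y) for x, y in zip(ha, hb))

# Hypothetical solid-color "images":
reddish  = [(250, 10, 10)] * 100
also_red = [(240, 20, 5)] * 100
bluish   = [(10, 10, 250)] * 100
print(similarity(reddish, also_red) > similarity(reddish, bluish))  # True
```

Real systems combine many such signals (composition, texture, and so on), but the principle is the same: formally similar images cluster together regardless of their content or context, which is exactly the “shared formal vocabulary” Lieberman plays with.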

Rebecca Lieberman, presenting on the 'Art of Search' (photo by Martin Risseeuw)


Lieberman’s project consists of several interrelated components in a variety of media, including a series of artist books, a browser-based work meant to situate the images in their native habitat, and a series of looped videos, all composed of images mined from Google. After feeding the Search by Image tool banal images such as cat photos and selfies, Lieberman takes the results she finds to be of interest and assembles them together in a sequence all her own. She describes the process as being like a game of telephone or a stream of consciousness, stitching images together in what she envisions to be a visual poem.

Lieberman uses both literal and metaphorical connections to influence her choices (which she exemplified by showing an image of a soap opera star in a bathtub followed by one of a rhinoceros bathing in mud), and so her selections are filtered through her own subjectivity rather than being what she calls a “straightforward quotation.” Thus, the project taps into the poetic potentialities of search, and she sees Google’s tool as a “gift,” allowing us “a new way of reading pictures.”

By selecting the images and ordering them, Lieberman can be said to reclaim authorship from or at least share authorship with Google. She relates this investigation of authorship and appropriation to the art historical lineage of works asking similar questions, such as the paintings of Ed Ruscha, and also to contemporary Internet practices such as re-blogging and pinning.

Lieberman’s interest lies particularly in the meaning generated by the interstices, how each viewer may make different interpretations based on what appears to be missing between the images in sequence, as in line breaks in poetry or cuts between the scenes of a film. Without linear narrative, meaning accumulates through the assemblage of images and the spaces between them. Lieberman’s intent is then to investigate how meaning may shift and transform as images travel across the Internet.

Astrid Mager – Is Small Beautiful? Big Search and its Alternatives

Posted: November 11, 2013 at 12:12 pm  |  By: martaburugorri  |  Tags: , , , , , ,

In the first session, Astrid Mager tells us about search engines, pointing out that Google is not the only search engine using personal data for commercial interests or, for instance, collaborating with the NSA. She holds that we shouldn’t blame only Google, as there are many other factors involved, and proposes several alternatives and explains their characteristics under the title Is Small Really Beautiful? Big Search and its Alternatives.

“Google dominance is not external from society, but internal; Google is something we create all together.” It is important to keep criticizing surveillance, not only blaming Google but also researching the power relations involved in the construction of search engines. Astrid holds that capitalism is profiting from our networks and algorithms. Users are interested in finding the most convenient information, and search engines are very good at this. However, what we shouldn’t forget is that there are economic relations in this flow of data between providers and users. Content providers and users collaborate to create Google’s business model. Indeed, Google is the great mirror of the capitalist society in which we live.


Astrid Mager presenting on “Google Domination”  (photo by Martin Risseeuw)

In this sense, there is a rise in critical debates about data protection, the renegotiation of technology, and so on. Astrid gives several examples of alternatives to Google, especially ones with a strong ideological stance. She shows the example of duckduckgo.com, a search engine whose slogan reads: “Google tracks you. We don’t.” DuckDuckGo is supposed to be a search engine that does not collect personal data, a search engine that respects privacy, a real alternative to data collection. The principle of privacy is the ideological basis of this alternative to Google, because privacy, as Astrid holds, is a civil right, something essential for the construction of democratic societies. DuckDuckGo uses more than 100 search engines and sources, both commercial and non-commercial, including Wikipedia and other crowd-sourced sites, but also Bing, Yahoo! (displaying Bing results) and Yandex. That means that duckduckgo.com is ultimately dependent on business parties; that is to say, it does not apply filters itself but relies on other search engines that certainly do. (It additionally runs its own web crawler, called DuckDuckBot, but its index is rather small.)
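The metasearch pattern attributed to DuckDuckGo here can be sketched in a few lines. This is an illustration of the general technique, not DuckDuckGo’s actual algorithm: results are fetched from several upstream sources (hypothetical names and URLs below), deduplicated by URL, and re-ranked by how many sources agree, with ties broken by the best upstream position.

```python
def merge(source_results):
    """source_results: dict mapping source name -> ranked list of URLs."""
    votes, best_rank = {}, {}
    for ranking in source_results.values():
        for rank, url in enumerate(ranking):
            votes[url] = votes.get(url, 0) + 1
            best_rank[url] = min(best_rank.get(url, rank), rank)
    # URLs backed by more sources come first; ties broken by best position given.
    return sorted(votes, key=lambda u: (-votes[u], best_rank[u]))

upstream = {
    "bing":    ["a.com", "b.com", "c.com"],
    "yandex":  ["b.com", "a.com", "d.com"],
    "crawler": ["b.com", "e.com"],
}
print(merge(upstream))  # b.com and a.com rise to the top
```

The sketch also makes Astrid’s point visible: whatever the aggregator’s own ideology, the merged ranking can only ever recombine what the upstream engines, filters and all, chose to return.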


Another alternative she explains is ECOSIA. ECOSIA displays results solely from Bing and supports ecological projects: it donates at least 80% of its income to a tree-planting program in Brazil. Their ideology and business are basically based on this, and moreover, it runs on green power. Another alternative she suggests is WolframAlpha.com, which she calls the “Knowledge Engine”. WolframAlpha.com is devoted to the scientific field and really tries to give users great answers. Nevertheless, WolframAlpha.com is a commercial tool: it shows ads to keep search free, or lets you pay monthly to have no ads. We may say it is more a piece of software than a search engine. But the great search engine she proposes is YaCy. YaCy tries to provide decentralized search; it is free software and, according to its website, aims to be totally transparent. In contrast to all other search engines, it really fits the idea of freedom of information and of ideology embedded in technology. YaCy is the best alternative on both the technical and the ideological level.

However, Astrid notes that big search engines such as Google have a lot of experience managing data, and they have a big infrastructure as well. Will alternatives then be possible and successful? As Astrid holds, maybe it’s time to attract engineers and get more people involved and concerned about alternatives that respect our privacy.

Kylie Jarrett – Search for the Google God: Metaphysics and the Social Imaginary of Search

Posted: November 11, 2013 at 12:12 pm  |  By: martaburugorri  |  Tags: , , ,

In this first session of Friday, Kylie Jarrett talks about the history of search, going back to the metaphysical desires behind historical information technologies but focusing as well on Google and contemporary search engines. Kylie highlights two sources of deep importance in the history of search: past practices that are still valuable for understanding the current culture of networked search.


Kylie Jarrett presenting on “Reflections on search”   (photo by Martin Risseeuw)

On the one hand, she gives the example of atomism, an ideology of ancient traditions which holds that the whole world can be reduced to atoms, that is to say, to void and materiality. That distinction between void and materiality anticipated the binary distinction characteristic of our digital age. Theories of atomism claim that our bodies are aggregated bits of information. Consequently, reality is constituted in abstraction and is reducible; that is to say, we can reduce all information, all knowledge, to a storage device, and the best example is Google. Atomism, by understanding knowledge as something divisible from which patterns are generated, creates the basis of Google’s personalized form of search.

On the other hand, she also points to the story of the Tower of Babel, which she compares to a universal library. The myth teaches us that when all power and knowledge is located in one single place (such as a language or a universal library), it is subject to corruption. In other words, the Tower of Babel is like fabricating a code that ends up being independent from its makers. She suggests that the attempt to organize all knowledge into a single place might produce an overload of information as well as a break between meaning and information. Once again, she parallels this myth with a search engine like Google, in which all knowledge is located in a single search engine. Indeed, it seems that what does not exist on Google does not exist in life. There is no truth; there are millions of truths, each personalized for each individual.


She also refers to the notion of a metaphysics of search and shows the example of Llull’s thinking machine, in which a machine is used to combine elements of thinking – for instance, elements of language. Llull’s machine made logical reductions in a mechanical way. He demonstrated that human thought can be described by a device, and he anticipated our current digital systems. As Kylie suggests, this is the idea that knowledge can be reduced to abstract principles and thereby create a universal index of the world. This is the universality of Google’s index and its domination. To conclude, Kylie holds that we have to understand why Google dominates the world and be aware of our complicity with it. Why do we enable Google to take up such a big space in our lives? “Only when we understand the origins of search will we understand something like Google.”

Dirk Lewandowski: Why We Need an Independent Index of the Web

Posted: November 11, 2013 at 10:26 am  |  By: Serena Westra  |  Tags: , , , ,

How can we create real alternative search engines? German professor Dirk Lewandowski was the third speaker in the session ‘Google Domination’. He argues that we need an independent index of the web. “We don’t need publicly funded search engines. Instead, we need a publicly funded search index.” Why? He argues that with an index we can do much more than just web search.

Society of the Query #2

A search engine index collects, parses, and stores data to facilitate fast and accurate information retrieval. It is a local copy of the web; search engines create direct replicas of documents. This representation includes more than just the text: information about the author, the length, title, keywords, decay, date, PageRank, etc. is also stored. The representation of a website in a search engine does not always match the original page, and Google’s copy often lacks newly added information. It is impossible to always be up to date, yet a local and up-to-date copy of the web is the ‘holy grail’ for creating alternative search engines. However, this is not easily established.
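What “collects, parses, and stores” means in practice can be sketched with the classic data structure behind search: the inverted index, which maps each term to the documents containing it, so a query is answered by intersecting posting lists instead of scanning every page. The documents below are hypothetical, and a real index would store the richer representation Lewandowski describes (author, date, rank signals, and so on).

```python
def build_index(docs):
    """docs: dict of url -> text. Returns an inverted index: term -> set of urls."""
    index = {}
    for url, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(url)
    return index

def search(index, query):
    """Return the urls containing every query term."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {
    "a.com": "independent index of the web",
    "b.com": "publicly funded search index",
    "c.com": "web search engines",
}
print(search(build_index(docs), "index web"))  # {'a.com'}
```

This separation is exactly why a public index would be useful: anyone could build a different ranking or application on top of the stored representation without crawling the web themselves.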

Read the rest of this entry »

Siva Vaidhyanathan on the intimate relationship between state surveillance and corporate dataveillance

Posted: November 10, 2013 at 10:36 pm  |  By: katia  |  Tags: , , , , , , , ,

Society of the Query #2 kicked off with a mind-boggling presentation by Siva Vaidhyanathan, author of The Googlization of Everything. With the steady revelations throughout the summer of 2013 about the United States government’s programs and powers to monitor digital communication, mine metadata, and circumvent encryption, it has become clear that corporate habits once devoted to maximizing market share and targeting consumers serve a much larger and more nefarious interest. According to Siva Vaidhyanathan, the relation between governments and companies has been a very interesting subject in the past few weeks. Let’s take a look at the company that decides what matters on the Web: Google.

Siva Vaidhyanathan at Society of the Query #2 (photo by Martin Risseeuw)


Google seems to read our minds. It knows a tremendous amount about us, Vaidhyanathan states. From the perspective of Google, we are not supposed to understand its algorithms. All we have is a rough idea of why one page is ranked higher than another.
Besides that, Google is constantly in a process of change: almost every year, substantial changes are made. So every time we try to get a sense of what Google is doing, it eludes us. A social science of search is therefore almost impossible. Every experiment counts for just one day; Google remains a black box for us.

Read the rest of this entry »

Book Launch: Ippolita – ”The Dark Side of Google”

Posted: November 10, 2013 at 8:00 pm  |  By: Ihab Khiri  |  Tags: , , , ,

During the second Society of the Query conference, the Ippolita collective presented their book entitled ”The Dark Side of Google” (2007; it-fr-es-en), first presented in 2006. The book originally appeared in Italian and has since been translated into French, Spanish and, subsequently, English. The distinct thing about writing is that it is a direct action; writing a book is therefore a good way to establish words, words that cannot be taken back. Translating the book into different languages has been a complex process, because every translation is subject to change.

Writing as direct action

The way we see Google has changed: where in the past we did not question the machine and used it without much criticism, nowadays everyone seems to be knowledgeable about algorithms and wonders how our results come to be what they are. The idea of algocracy suggests that the masters of the clouds are becoming gods, and from this, different questions about religion arise. We do not know where our data stays and get the idea that it is floating around in the sky; nevertheless, we should keep in mind that the cloud is not immaterial: it runs on physical machines.

Why Google?

Google has been taken as a case study because it is widely known, not, as one might expect, because of the high criticism it attracts. The book could therefore be seen as the only account that does not talk about the ”evilness” of Google, but rather sees Google as a form of domination in which we [average citizens] are interested and about which we want to know more. This interest comes from the tendency of technology to become a dominating drive in contemporary society.

Speaker from Ippolita collective

What do you want from Tech Tools?

Another issue that the book discusses is the notion that technology has to be improved all the time. The question one should ask is: what precisely do we want to make better? The Ippolita collective is not interested in the capitalistic idea of improving technology, i.e. creating something that is better than current technology and earning money with it. Rather, one should ask what is better for us and consider what we expect from technology. Current technologies already offer many of the things we are looking for in life: Facebook is a great tool for social encounters, and Google already offers a great variety of useful applications. We have to use the medium for what it is intended, and therefore we are the only ones who have to examine our desires before we start craving ”better”.

We should stop the crazy run for more; in order to answer our questions and satisfy our cravings, we have to distance ourselves from technology and have a dialogue with ourselves. There is no war, nor oppression, and the ideology of infinite growth will not satisfy our desires.

By Ihab Khiri

Read more about the Ippolita collective:

Institute of Network Cultures 

or download The Dark Side of Google

Russia challenges Google and Yandex by unveiling plans for a new search engine called “Sputnik”

Posted: October 15, 2013 at 2:47 pm  |  By: vince  |  Tags: , , , , ,

img - bisbos.com


The world is familiar with Sputnik as the first artificial Earth satellite, launched by the Soviet Union in 1957. But this time, the famous name is being used for another ambitious project that competes with the US industry: a search engine that would challenge Google and even the local Yandex.

The project comes from Rostelecom, Russia’s state-controlled telecommunications provider, which has been commissioned to come up with a search engine to compete with Google, but also with the local search engine Yandex, whose holding company is based in The Hague, The Netherlands.

This search engine would live at www.sputnik.ru and, although state-backed, it would face fierce competition, as Yandex already holds 62% of the market share and Google comes in second with 25% of the traffic. However, sources at Rostelecom claim that the project has already cost $20 million, indexing about half of the Russian internet, and that it will launch sometime in the first quarter of 2014.