Posted: November 12, 2013 at 10:00 am |
By: philip |
Tags: control, filter bubble, Google, Pascal Jürgens, social science
Pascal Jürgens, in his presentation titled “Measuring Personalization—An Experimental Framework for Testing Technological Black Boxes,” discussed issues surrounding control and responsibility in regard to search engine results. As search engines provide ever easier access to content, they also hold immense power over what information users actually receive. With the ever-increasing use of personalization and prediction, search engines act as black-boxed systems that control flows of knowledge.
Jürgens discusses the oscillating nature of control, with both positive and negative impacts. From the earliest collection of information by feudal kings on their subjects, there has always been a power-based aspect to knowledge and how it is found. It is this historical nature of knowledge that led Jürgens to say, “It’s all new and it’s all old.” It is the new that becomes the focus of the presentation.
Jürgens raises the question of Google and its responsibility to “not be evil.” How does the use of advanced personalization, and its potential to influence users, fit into this question? Jürgens says that “personalized search results further expand this potential because they explicitly aim at maximizing the relevance of delivered content with regard to selection decisions. Despite their relevance, these technologies have rarely been subject to social scientific scrutiny.” As a social scientist, Jürgens focuses his research on the existence of the ‘filter bubble,’ the idea that the results we get are based on the results we want.
Jürgens determined that while results did fluctuate from one person to another, no real filter bubble appeared to exist. He went about determining this by creating multiple fake Google accounts. Each account was given its own search history built around a theme (politically left, young, old, and so on). These accounts would then query Google, and Jürgens would compare the results returned. In the end he determined that the results were similar enough to disprove the existence of a more controlling filter bubble. During the Q&A session after the talk, Jürgens explained that the testing methods for his research need to expand, and he is planning to continue studying the filter bubble.
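The comparison step of such an experiment can be sketched in code. The snippet below is only an illustration (not Jürgens’s actual methodology): it quantifies how similar two accounts’ top result lists are using Jaccard overlap, and the persona names and URLs are invented for the example.

```python
def jaccard(results_a, results_b):
    """Overlap between two sets of result URLs: 1.0 = identical, 0.0 = disjoint."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b)

# Hypothetical top results returned to two differently primed accounts.
left_account = ["news.example/a", "blog.example/b", "wiki.example/c"]
young_account = ["news.example/a", "blog.example/b", "video.example/d"]

overlap = jaccard(left_account, young_account)
print(round(overlap, 2))  # 0.5: half of the combined URLs are shared
```

A consistently high overlap across differently themed accounts would argue, as Jürgens found, against a strongly controlling filter bubble.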
Posted: November 11, 2013 at 4:54 pm |
By: Catalina Iorga |
Tags: 9/11, erik borra, Google, google search, pagerank algorithm, panda, rene konig, search algorithm
Erik Borra and René König were the second-to-last speakers of Society of the Query #2’s sixth and final session, The Filter Bubble Show, with a talk on why search engines are biased. As a case study, Borra and König chose the controversial topic of 9/11 and tried to answer how Google’s algorithm decides what is relevant for this particular query. The reason they chose 9/11 as an object of study is its status as a global phenomenon examined from diverse perspectives, including conspiracy theories of the 9/11 Truth Movement variety, which questioned the mainstream version of events featured in the media.
For the past six years, a script made at the Digital Methods Initiative has queried Google daily with the term “9/11” and stored the top 10 search results for each day. The corpus of Borra and König’s study consisted of results chosen from four dates per year, one every few months. The top 10 URLs for the selected days were then coded using an emergent coding scheme: reading through all the pages that the URLs pointed to, noticing content commonalities and constructing the main categories of ‘mainstream’, ‘conspiracy’, ‘meta’, ‘history / facts’, ‘memorial’, ‘aftermath’, ‘popular culture’ and ‘other’.
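Once the URLs are manually coded, tallying the categories per sampled date is straightforward. A minimal sketch, where the category labels follow the coding scheme above but the URLs and their assigned codes are invented for the example:

```python
from collections import Counter

# Hypothetical manual coding of some top-10 URLs for one sampled date.
coded_results = {
    "en.wikipedia.org/wiki/September_11_attacks": "history / facts",
    "truth-movement.example.org": "conspiracy",
    "memorial.example.org": "memorial",
    "news.example.com/9-11-anniversary": "mainstream",
    "news.example.com/9-11-retrospective": "mainstream",
}

# Count how often each category appears among that day's results.
category_counts = Counter(coded_results.values())
print(category_counts.most_common(1))  # the dominant category for that day
```

Repeating this tally for every sampled date yields the kind of longitudinal category distribution that Borra and König analyzed.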
Posted: November 11, 2013 at 3:56 pm |
By: Catalina Iorga |
Tags: cost benefit, Google, google search, Google Translate, human knowledge, language, local knowledge, Search Engines, Thomas Petzold
Thomas Petzold started the second session of Society of the Query #2, ‘Search Across the Border’, on a more positive note as he gave kudos to the search engine. He commended it for still being a great tool, one that has had a huge impact not only on the collective memory of our species, but also on how we collaborate when trying to solve problems.
However, when it comes to languages and search, things are looking a bit grim. Of the world’s approximately 6000 living languages, 95 percent have fewer than 1 million speakers; only 5 percent have more than 1 million speakers, while 1 percent are spoken by more than 10 million people. Google only supports 5 percent of the world’s languages and has a huge preference for the most spoken ones: 40 percent of the languages it does support have more than 10 million speakers, 90 percent more than 1 million, and only 10 percent fewer than 1 million speakers.
Posted: November 11, 2013 at 10:26 am |
By: Serena Westra |
Tags: alternatives, Google, index, Lewandowski, Search Engines
How can we create real alternative search engines? German professor Dirk Lewandowski was the third speaker in the session ‘Google Domination’. He argues that we need an independent index of the web: “We don’t need publicly funded search engines. Instead, we need a publicly funded search index.” Why? He argues that with an index we can do much more than just web search.
A search engine index collects, parses, and stores data to facilitate fast and accurate information retrieval. It is a local copy of the web; search engines create direct replicas of documents. This representation includes more than just the text: information about the author, the length, title, keywords, decay, date, PageRank, etc. is also stored. The representation of a website in a search engine does not always match the original page, and Google’s copy often lacks newly added information. It is impossible to be always up to date, yet a local and up-to-date copy of the web is the ‘holy grail’ for creating alternative search engines. However, this is easier said than done.
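The core data structure behind the index Lewandowski describes is the inverted index, which maps each term to the documents containing it so queries can be answered without rescanning the whole corpus. A minimal sketch (a real index also stores the metadata mentioned above, such as author, title, date, and link information):

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of document IDs whose text contains it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

# Toy document collection standing in for crawled web pages.
docs = {
    1: "public web index",
    2: "alternative search engines",
    3: "web search engines",
}
index = build_index(docs)
print(sorted(index["engines"]))  # [2, 3]: documents 2 and 3 contain "engines"
```

An independent, publicly funded index of this kind is what would let third parties build their own ranking and search services on top of a shared copy of the web.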
Posted: November 10, 2013 at 10:36 pm |
By: katia |
Tags: alternatives, corporate dataveillance, data, Google, Google Domination, search, Siva, state surveillance, Vaidhyanathan
Society of the Query #2 kicks off with a mind-boggling presentation by Siva Vaidhyanathan, author of The Googlization of Everything. With the steady revelations throughout the summer of 2013 about the United States government’s programs and powers to monitor digital communication, mine metadata, and circumvent encryption, it has become clear that corporate habits once devoted to maximizing market share and targeting consumers serve a much larger and more nefarious interest. According to Vaidhyanathan, the relation between governments and companies has been a very interesting subject in the past few weeks. Let’s take a look at the company that decides what matters on the Web: Google.
Siva Vaidhyanathan at Society of the Query #2 (photo by Martin Risseeuw)
Google seems to read our minds. It knows a tremendous amount about us, Vaidhyanathan states. From Google’s perspective, we are not supposed to understand its algorithms. All we have is a rough idea of why one page is ranked higher than another.
Besides that, Google is constantly changing. Almost every year, substantial changes are made. So every time we try to get a sense of what Google is doing, it eludes us. A social science of search is therefore almost impossible: every experiment counts for only one day, and Google remains a black box to us.
Posted: November 10, 2013 at 8:00 pm |
By: Ihab Khiri |
Tags: Google, Ippolita Collective
During the second Society of the Query conference, the Ippolita collective presented their book ”The Dark Side of Google” (2007; it-fr-es-en), first presented in 2006. The book originally appeared in Italian, and has since been translated into French, Spanish and, subsequently, English. The distinct thing about writing is that it is a direct action; writing a book is therefore a good way to establish words, words that cannot be taken back. Translating the book into different languages has been a complex process, because every translation is subject to change.
The way we see Google has changed: where in the past we did not question the machine and used it without much criticism, nowadays everyone seems to be knowledgeable about algorithms and wonders how our results come to be what they are. The idea of algocracy suggests that the masters of the clouds are becoming gods, and from this different questions about religion arise. We do not know where our data resides and get the idea that it is floating around in the sky; nevertheless, we should keep in mind that the cloud is not immaterial: it runs on physical machines.
Google was taken as a case study because it is widely known, not, as one might expect, because it attracts so much criticism. The book could therefore be seen as the rare account that does not dwell on the ”evilness” of Google, but rather sees Google as a dominant force that we [average citizens] are interested in and want to know more about. This interest comes from the tendency of technology to become a dominating drive in contemporary society.
What do you want from Tech Tools?
Another issue that the book discusses is the notion that technology has to be improved all the time. The question one should ask is: what precisely do we want to make better? The Ippolita collective is not interested in the capitalist idea of improving technology, i.e. creating something better than current technology and earning money with it. Rather, one should ask what is better for us and wonder what we expect from technology. Current technologies already offer many of the things we are looking for in life: Facebook, for example, is a great tool for social encounters, and Google already offers a great variety of useful applications. We have to use the medium for what it is intended, and therefore we are the only ones who have to wonder what our desires are before we start craving ”better”.
We should stop the crazy run for more. In order to answer our questions and satisfy our cravings, we have to distance ourselves from technology and have a dialogue with ourselves. There is no war, nor oppression, and the ideology of infinite growth will not satisfy our desires.
By Ihab Khiri
Read more about the Ippolita collective:
Institute of Network Cultures
or download The Dark Side of Google
Posted: October 15, 2013 at 2:47 pm |
By: vince |
Tags: Google, russia, search, Search Engines, yandex
img – bisbos.com
The world is familiar with Sputnik as the first artificial Earth satellite, launched by the Soviet Union in 1957. But this time, the famous name is being used for another ambitious project that competes with the US industry: a search engine that would challenge Google and even the local Yandex.
The project comes from Rostelecom, Russia’s state-controlled telecommunications provider, which has been commissioned to come up with a search engine to compete with Google, but also with the local search engine Yandex, which, despite serving the Russian market, is formally based in The Hague, the Netherlands.
This search engine would live at www.sputnik.ru and, although state-backed, it would face fierce competition: Yandex already holds 62% of the market share, and Google comes in second with 25% of the traffic. However, sources at Rostelecom claim that the project has already cost $20 million, has indexed about half of the Russian internet, and will launch sometime in the first quarter of 2014.
Posted: October 10, 2013 at 3:51 pm |
By: vince |
Tags: facebook, Google, search, Search Engines
It is becoming more and more obvious that Facebook wants to have a piece of Google’s cake. Not long after Facebook’s newest updates to its Graph Search, the social query tool is getting yet another refresh. In a recent blog post, Facebook just announced that users are now able to search for “status updates, photo captions, check-ins and comments” posted on their timeline as well as others’.
Search examples given by the company include searching for “posts about Dancing with the Stars by my friends,” which returns results made up of what the user’s friends have posted or commented about the show. Other examples of the new feature are looking for “Pictures of me and my dog” or “My posts from last year”.
A by-now classic reaction among users is the privacy concern, which Facebook sought to address beforehand by pointing to its privacy page, which lets users revise and hide their past activity. Since the feature is still in beta testing, Facebook announced it will roll it out to a limited private group of users.
The “graph search”, the generic name given to Facebook’s search instruments, is regarded as more or less competing with Google’s search engine, but also with other services that offer search functions. And even though its current capabilities are basic, the semantic approach of Facebook’s search engine (typing “people who are friends with Mark Zuckerberg and who live in The Netherlands” rather than “friends, Mark Zuckerberg, The Netherlands”, as Google is queried) might appeal to more and more users in the future.
Posted: September 30, 2013 at 11:57 am |
By: vince |
Tags: Google, search algorithm, Search Engines
Coinciding with its 15th anniversary, Google recently revealed a few updates to its search engine, including a major refresh of its search algorithm, which has been given the name “Hummingbird”. Amit Singhal, senior VP at Google, mentioned that the new algorithm influences about 90% of all searches, and its purpose is to support the longer and more complex queries that users input through Google, such as the full questions people might ask their friends online.
Another feature of the “Hummingbird” algorithm deals with more advanced voice queries, allowing verbal interaction between users and their devices and allegedly making the search process easier and more natural.
Posted: August 30, 2013 at 1:14 pm |
By: vince |
Tags: Google, knowledge, Search Engine, society of the query
Albert Einstein is often quoted as saying, “Never memorize something that you can look up.” Of course, at that time, looking something up meant actually going through books to find information. Nowadays, with Google a few keystrokes away, not knowing something simply means you have to “Google it”!
But is the act of using a search engine the same as possessing knowledge? Come to the Society of the Query 2 conference on 7-8 November and let’s discuss it!