In September 2010 Google introduced autocomplete (also known as Google Suggest) to its search engine. Based on previous queries, it tries to predict what we want to search for while we are still typing. How does this affect search engine usage, and what do the suggested queries tell us about our societies? Already it is clear that the suggestions are often problematic: they may violate personal rights and can be politically loaded and controversial.
Lawsuits against Google
As I briefly pointed out in my last blog post, Bettina Wulff (Germany's former first lady) recently sued Google because its autocomplete feature supported the rumor that she worked as a prostitute. She was not the first to see her personal rights violated by Google's suggestions. Earlier this year, a Japanese man fought and won a lawsuit against the search engine provider because it associated his name with criminal acts. He was afraid that Google's suggestions could damage his reputation and might cost him his job. In 2011 Google lost another lawsuit, brought by an Italian businessman who did not want to be associated with suggestions like “truffa” (fraud), similar to another case in France. In Ireland a legal settlement resolved the conflict between Google and a hotel which objected to Google's suggestion “receivership” next to its name. Courts worldwide seem to side with the complainants rather than with Google. While most will agree that false allegations should not be supported by search engines, we may also ask: what if they are correct? Where do personal rights end and where does manipulation begin?
Censorship and manipulation
Google usually argues that its algorithms simply mirror the web and that autocomplete is just based on previous queries. However, the company also declares that it applies “a narrow set of removal policies for pornography, violence, hate speech, and terms that are frequently used to find content that infringes copyrights.” In at least some cases it is debatable what falls under these policies. For example, Emil Protalinski has pointed out that censoring “thepiratebay” was not justified because the platform provides legitimate torrent links, not only pirated material.
In any case, this relatively new feature is another powerful way of influencing our information practices. Although it comes in the subtle shape of mere suggestions, it may have a massive impact on users' search behavior. It disciplines us: we get rewarded for following the suggestions because they let us type less. Are we also going to feel bad if we search for something that does not appear among the suggestions, knowing it might be illegal or at least a query on the margins of society?
Search engine optimizers have already acknowledged the power of autocomplete. They try to polish the images of brands by highlighting “the positive aspects or activities associated with the brand and push negative values out” (Brian Patterson 2012).
[Image by Search Engine Land (2012)]
Cultural impact
Such manipulations call into question the allegedly democratic principle of autocomplete. Do the suggestions really represent what people search for? Even if they do, we may question the “wisdom of crowds” (Surowiecki), as masses have always had the potential to turn into mobs. Can autocomplete foster prejudices by reproducing them in its suggestions? Does it manipulate the public much as big tabloids do, as Krystian Woznicki wonders? Romanians, for example, were confronted with rather unflattering predictions for the query “Romanians are”: “scum”, “ugly”, and “rude” were among Google's associations. A campaign tried to change this by asking users to google positive attributes like “Romanians are smart”. Try it yourself to see whether it was successful (on my computer in the Netherlands it does not seem so); the sketch below shows one way to automate such comparisons.
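If you want to repeat this little experiment more systematically, suggestions can also be collected programmatically. The minimal Python sketch below queries Google's unofficial suggest endpoint (suggestqueries.google.com, the one used by browser search boxes). Since this endpoint is undocumented, the URL, parameters, and response format here are assumptions based on how it has behaved so far, and they may stop working at any time.

```python
import json
import urllib.parse
import urllib.request

# Unofficial Google Suggest endpoint (undocumented; may change or disappear).
SUGGEST_URL = "http://suggestqueries.google.com/complete/search"

def get_suggestions(query, lang="en"):
    """Fetch autocomplete suggestions for `query` with interface language `lang`."""
    params = urllib.parse.urlencode({"client": "firefox", "hl": lang, "q": query})
    with urllib.request.urlopen(SUGGEST_URL + "?" + params) as response:
        charset = response.headers.get_content_charset() or "utf-8"
        # With client=firefox the endpoint has returned plain JSON of the form:
        # ["original query", ["suggestion 1", "suggestion 2", ...]]
        payload = json.loads(response.read().decode(charset))
    return payload[1]

# Compare the completions for the same query stem across interface languages.
for lang in ("en", "nl", "ro"):
    print(lang, get_suggestions("romanians are", lang=lang))
```

Because the results also depend on your location, language settings, and Google's regional domains, such comparisons should be read as rough indications rather than clean data.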
I would love to hear more about your experiences (and maybe even research?) with autocomplete. What do the suggestions tell us about our societies, and how can we use them for social research? Soon I'm going to write more here about a cross-cultural comparison of autocomplete suggestions.
Stay tuned!