Alan’s aim today is to present a draft proposal for a non-reactionary policy for regulating and improving Web 2.0 knowledge. In doing so, he’s opening up the can of worms that is the Wikipedia debate.
We are taken through a couple of case studies drawn from Alan’s own teaching experience. First we see the wiki from his class Creativity and Collaboration. The students were given free rein, allowed to change any page (with the exception of the grading policy), and this led to a humorous reworking of the traditional ‘panoptic’ setting of the classroom. (The site is definitely worth a look, see the class page here.)
Alan’s experience led to him writing the piece, Developing a Wikipedia Research Policy, where he discusses the problem of incorporating Wikipedia research in the university.
A second case study comes from Duke, where, as reported, “10% of the Class of 2008 had been caught cheating on a take-home final exam”. This highlighted an important problem for the use of Web 2.0 tools at the university.
Thus the need for policy/policing. On the one hand, universities can opt for a reactive policy. For example, PAIRwise is software for universities that checks students’ digitally submitted papers for plagiarism. But Alan wonders: where are the FAQs about plagiarism, and why not have lectures connecting plagiarism with pastiche? That is, shouldn’t there be a positive learning experience, rather than a reactionary policy?
The reactionary response to Web 2.0 is also found in Larry Sanger’s Citizendium. Sanger wants more policing and authority for experts, but also wants the amateur-driven speed and scope that characterize Wikipedia. Time will tell whether he can have his cake and eat it too.
Another reaction to Wikipedia is that which polices along ‘moral’ lines. For example, the ‘commandments’ of Conservapedia are accompanied by threats of legal action against ‘vandals’.
How can we get past this, then, and think of policing network knowledge without these monopolies of force? Alan says we need to rethink the place of authority and authorship. With Web 1.0, authorship and authority were still clearly in the hands of the webmaster. The webmaster alone was responsible for the content. But even then there were the supposed ‘intermediaries’ of programmers, template-makers, and so on. Thus while the authority model still applied, it was already troubled.
With Web 2.0, Wikipedia, social networking sites and others opened the floodgates to collaborative authorship. In so many ways, authority is now usurped (from vandals on Wikipedia to comment spam on your blog). And this leads to the question, who will be (or should be) in control after Web 2.0?
Tim O’Reilly, for example, says we can rely on the wisdom of crowds, while others say this only applies in certain situations, and that an encyclopedia needs to revert to a print-like model of authority. Alan argues that the tension lies in the problem of not having a ‘whole’ author who can take on responsibility.
Concluding, Alan offers a draft proposal for a new model of authority (accompanied by a disclaimer: it seems certain conference organizers wanted speakers to be more daring and provocative than they would normally have liked to be).
First, he says, we must do better at tapping into the Wisdom of Crowds – for example, the majority of work on Wikipedia is carried out by just 5% of its users. Second, much more can be taken out of the data. With smarter datamining, we could move toward a Social Markup of collaboratively authored texts. This would be a markup that makes visible the social interaction behind production, revealing conflicts, (in)stability, volatility and so on. We would immediately see whether the text is highly policed by many users, or whether it is perhaps the product of a single author.
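To make the Social Markup idea a little more concrete, here is a minimal sketch of the kind of signal such a markup layer might surface. Everything here is hypothetical illustration, not Alan’s actual proposal: the `social_markup` function and the toy revision history are invented, standing in for data a real system would pull from a wiki’s revision log.

```python
from collections import Counter

def social_markup(revisions):
    """Summarize the social history behind a collaboratively authored page.

    `revisions` is a hypothetical chronological list of (editor, page_size)
    tuples -- a stand-in for a real revision history. Returns the kind of
    signals a 'social markup' layer could expose to readers.
    """
    editors = Counter(editor for editor, _ in revisions)
    sizes = [size for _, size in revisions]

    # Volatility: average absolute change in page size between revisions.
    # A hotly contested page (reverts, edit wars) would score high here.
    deltas = [abs(b - a) for a, b in zip(sizes, sizes[1:])]
    volatility = sum(deltas) / len(deltas) if deltas else 0.0

    # Concentration: how much of the work is done by the top editor.
    # Near 1.0 suggests a single-author text; low values suggest a crowd.
    _, top_count = editors.most_common(1)[0]

    return {
        "editors": len(editors),
        "revisions": len(revisions),
        "top_editor_share": top_count / len(revisions),
        "volatility": volatility,
    }

# A page written mostly by one author, with a small tweak by another:
history = [("ana", 100), ("ana", 140), ("bob", 138), ("ana", 150)]
print(social_markup(history))
```

A real implementation would of course mine actual revision metadata and render these figures visually alongside the text, so that the reader – the newly displaced locus of authority – can judge how policed or how singular a page really is.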
In this way, he says, authority gets displaced once again, now to the user.
For discussion’s sake, it might be worth asking if Alan’s proposal upsets Siva’s earlier call for a breakdown of technological myths. To what extent should we be looking for a technological fix to what is presumably a problem caused by technology?