Steve Pemberton – "Have your own personal website!"

Posted: November 14, 2009 at 7:21 pm  |  By: dennis deicke  |  4 Comments

Society of the Query

Steve Pemberton starts off by explaining the Sapir-Whorf hypothesis, which proposes a link between language and thought: if you do not have a word for something, you cannot think about it, and if you do not think about it, you will probably not invent a word for it. Pemberton applies this idea to the term "Web 2.0", which was coined by a publisher who wanted to organize conferences around the idea that websites gain value by users transferring their data to them. Yet the concept of Web 2.0 already existed in the Web 1.0 era, namely in the form of eBay. Now that the term is established, the phenomenon can actually be talked about and discussed.

In his talk he suggests that people should have their own machine-readable websites instead of giving their data to the middlemen and mediators that the Web 2.0 model requires. According to Pemberton the problem is that the network organization of Web 2.0 separates the web into several sub-webs. Referring to Metcalfe's Law, Pemberton states that this separation reduces the value of the web as a whole.
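
Metcalfe's Law says, roughly, that the value of a network grows with the square of the number of its connected users. As a purely illustrative calculation (the numbers are not Pemberton's): a single web of n users is worth on the order of n², whereas two disconnected sub-webs of n/2 users each are together worth only 2 × (n/2)² = n²/2, half the value of the unified network.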

Additionally he mentions further problems with Web 2.0. First of all, using Web 2.0 applications such as social networks and photo-sharing sites forces you to sign up with a particular organization and to adapt to its data format in order to publish and work on your content, which amounts to a real commitment. Then there is the question of what happens when your account is deleted or closed, or when the network itself dies (like mp3.com, Google Video, Jaiku, Magnolia). One has to rely on the provider to keep the account running, so that the data and the work put into it do not get lost. Facebook, for example, closed a woman's account because it decided that an uploaded picture showing her breastfeeding was inappropriate.

The crucial point of Pemberton's view is that we need personal webpages that are readable by machines, so that the sites can be scanned and used by any aggregator that comes along. According to him the way to make sites machine-readable is the RDFa format, which Pemberton calls the "CSS of meaning" and which he presents as the embodiment of Web 3.0. The advantage of RDFa is that data from different places can be linked and combined automatically, without anyone having to join it together by hand. Furthermore, Pemberton states that machine-readable sites bring several advantages for users: if the browser can identify addresses and dates on a page, for example, it can directly offer to show them on a map or add them to the calendar.
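
To give a sense of what this looks like in practice, here is a minimal sketch of RDFa markup for an event, in the style of the W3C RDFa Primer. The iCalendar vocabulary (cal:) and the event details are illustrative and not taken from Pemberton's talk:

    <div xmlns:cal="http://www.w3.org/2002/12/cal/ical#"
         xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
         typeof="cal:Vevent">
      <span property="cal:summary">Society of the Query</span> takes place on
      <span property="cal:dtstart" content="2009-11-13"
            datatype="xsd:date">13 November 2009</span>
      in <span property="cal:location">Amsterdam</span>.
    </div>

The page still reads as ordinary prose for a human visitor, but an RDFa-aware browser or aggregator can extract the summary, date, and location as structured data and, as Pemberton suggests, offer to add the event to a calendar or show the location on a map.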

Responses

  1. Florian Cramer on “Why Semantic Search is Flawed” :: Society of the Query says:

    November 15th, 2009 at 1:02 pm

    [...] for the web and search engine design in the near future: RDFa, which would make the shift to what Steven Pemberton named the web 3.0, and semantic search, as implemented in the Europeana [...]

  2. Victor says:

    November 15th, 2009 at 7:08 pm

    Dear Mr. Pemberton,

    Thank you for your outstanding presentation. Your view on the possibilities of RDFa and its impact on the web is exciting. It would, as you clearly explained, improve the quality of and access to data and information in a huge way. But afterwards two questions came to my mind (I don't know if it's possible to answer them):
    1. What role can a search engine like Wolfram Alpha play in implementing RDFa as a standard? Is it supportive, or might it block it?
    2. What is the influence of a Web 2.0 lobby (social networks like Facebook) that would be threatened in its current existence if RDFa becomes a standard? (They can't stop it, and adapting would seem to be the logical course.)

    Anyhow, I wish you good luck with your work.

  3. Florian Cramer on “Why Semantic Search is Flawed” « New Media. What Next? says:

    November 16th, 2009 at 12:41 pm

    [...] for the web and search engine design in the near future: RDFa, which would make the shift to what Steven Pemberton named the web 3.0, and semantic search, as implemented in the Europeana [...]

  4. Steven Pemberton says:

    November 30th, 2009 at 12:41 pm

    Hi Victor. Thanks for the comments.

    To answer your questions:

    1. Wolfram Alpha is really interested in combining data to extract new results, or interesting views. If it is able to extract data from web pages that are machine-readable as well as human-readable, it will be able to achieve its ends more easily and more reliably. I don't believe that a Wolfram Alpha that ignored machine-readable web pages would inhibit their further adoption.

    2. I remember giving a talk to a room full of Web 2.0-ers, and they were furious. One reviewer said "the crowd completely disagreed. In hindsight he could not have been more correct" (this remark came after a number of Web 2.0 sites started folding in the economic slowdown). But I see RDFa and similar technologies as disruptive technologies, in the sense of Christensen's book 'The Innovator's Dilemma'; the changes are inevitable, and any Web 2.0 operator would be better off working out how to migrate to a more open way of operating, lest someone else come along and eat their lunch.
