Announcement of the ‘Boek uit de Band’ symposium

Posted: January 16, 2012 at 3:44 pm  |  By: kimberley

Is the e-book a hype that will blow over?

Developments in America – where e-book sales at Amazon have overtaken sales of physical books – suggest that e-books are not a hype. In China the digital book has become a fixture, reaching millions of readers. The arrival of the tablet has added a new dimension to the publishing and distribution of books. The chain from writer to reader, and everything in between, is coming under pressure.

On 22 and 23 March 2012, Boek-uit-de-band will hold its second symposium on the development of the digital book at the Openbare Bibliotheek Amsterdam, covering all developments and aspects of the digital chain. The new practice will be surveyed, and working within that practice will be given shape in workshops. This symposium aims to prepare participants for the book's iPod moment. Mark these two important days in your diary; details of the programme will follow shortly.

For Dutch readers, writers, editors, designers, publishers and distributors, the e-book trend is a reality that can no longer be ignored. Unbound / Boek uit de band organised its first symposium on digital developments in publishing in May 2011. Unlike that first edition, this year's symposium will be held in Dutch.

Videos of the Unbound Book Conference

Posted: May 31, 2011 at 12:18 pm  |  By: gerlofdonga

For those who couldn't make it to the recent Unbound Book conference, all videos of the conference are now viewable on Vimeo!

Videos are available for each of our six sessions:
1- What is a Book?
2- The Unbound Book
3- Ascent of E-Readers
4- Future Publishing Industries
5- Books by Design
6- Horizons of Education and Authoring

Below is Miha Kovac's compelling talk during the "What is a Book?" session on May 20th.

For more videos, please visit our Vimeo page here.

Florian Cramer on sober genealogies of the (un)bound dialectic

Posted: May 24, 2011 at 3:01 pm  |  By: gerlofdonga

Geert Lovink introduced this title panel of the conference by framing its attempt at Nietzschean thinking around the binding and unbinding of the book - not in terms of ethics or morality, beyond the book as a sentimental object, and more in terms of the exploded situation of the present.

Researcher and theorist Florian Cramer, currently at the Centre for Creative Professions at Willem de Kooning Academy Hogeschool Rotterdam, threw up a series of very concrete genealogical provocations. Cramer came to new media as a classically trained philologist, precisely through an interest in the situation of electronic literature 20 years ago - the 1991 launch of electronic book applications such as Voyager's, and so on. The Unbound Book's title panel evokes for him a troublingly "strong sense of deja vu". Considering all the experimentation with multimedia writing in the 80s and early 90s that happened before net art and multimedia design, and that has now "completely stagnated" in the hands of the same early agents, Cramer asked provocatively about the elided techno-cultural links: what does the history of artistic experimentation (early electronic or otherwise) have to do with this apparently nostalgic, or ahistoric, present conversation around unboundedness?

Florian Cramer @ the unbound book conference - photo cc by-sa Sebastiaan ter Burg

David Stairs' Boundless (1983) provides an important theoretical reference point, being emblematic of the dialectic that Cramer emphasises is always at issue:

"Binding and unbinding exist in it in a fruitful paradox, a tension that nevertheless boils down to binding as the lowest common denominator of a book. A book, in other words, is almost anything bound together, or unbound in negative reference to the former. To be unbound, after all, does not mean to be boundless." Further, there are important spatial dimensions of being bound alongside the temporal: bound "so that it doesn't fall apart", and bound in the sense of enduring coherently. For Cramer, "the idea of the book is one that can be read in 1, 5, and 100 years' time." Exceptions presented by unstable books (citing here Dieter Roth and Jan Voss's work, available from Amsterdam's Boekie Woekie) only prove the rule. Yet this strong dialectical appreciation of bound/unbound "bookness" is absent from the panel description, which incorrigibly describes the web rather than the book. If it were really a book, "links would be broken, social tags spammed, geo-location programming interfaces would have changed, the codecs for the video and sound … obsolete, and it wouldn't work on your screen in 2021 anyway."

Cramer's point is that this is exactly what happened with electronic literature 20 years ago, carried on the "exact same slogans": "linking, multimedia, interactivity, networking." The Expanded Books series launched by Bob Stein's Voyager company - an Apple-specific project inspired by the PowerBook in 1991 - is the near-same event as the iPad inspiring "unbound" literary experiments and e-reading start-ups today. They are even 'unbinding' exactly the same texts! Noting the John Cage reference, Cramer sees that we're almost literally revisiting George Landow's hypertext media theory:

We must abandon conceptual systems founded upon ideas of centre, margin, hierarchy, and linearity and replace them with ones of multilinearity, nodes, links, and networks. Almost all parties to this paradigm shift, which marks a revolution in human thought, see electronic writing as a direct response to the strengths and weaknesses of the printed book. (Landow, Hypertext, 1992)

Similar enthusiasm surrounded the audiovisual media/theory of the early 90s, but film and games have stayed separate for the most part, and "it's the same with books and the web." Of course ebook culture has emerged, but it is embodied instead by two "commercial and anti-commercial extremes, Amazon's Kindle e-book store and aaaarg.org… the text-cultural equivalent of iTunes and mp3 file sharing respectively." The actual historical passage of digital music and audio is strikingly similar to the present situation of the book: "people simply shared and collected simple audio files", just as we today sample "plain vanilla PDFs, ascii and epub files." So in fact the book's trajectory is: "premedieval scroll, bound codex, computer file." Cramer predicts: "Hardly anyone will buy interactive multimedia books, just as they didn't in the 1990s." The contrary nature of the web only solidifies the book.

From a history of artistic experimentation around the book we can be sure of this, as Drucker's work shows.

Even in their most experimental and unstable forms, books do not leave behind their material unity or binding. They are persistently "thought of as a whole… an entity, to be reckoned with in (their) entirety" (Drucker, 122). This is not a conservative statement, Cramer emphasises. Even classical examples of "unbound" literary books, such as Marc Saporta's Composition no. 1 or Raymond Queneau's One Hundred Thousand Billion Poems, indeed "explode the corpus," but do so by evoking it "ex negativo." The binding here only becomes more accentuated.

It's interesting at this point to observe that Drucker's definition of "artist books," and the continuity of their experimentalism, coincides almost directly with present technical definitions of epublications. This is Drucker:

To remain artist's books, rather than book-like objects or sculptural works with a book reference to them, these works have to maintain a connection to the idea of the book, to its basic form and function as the presentation of material in relation to a fixed sequence which provides access to its contents (or ideas) through some stable arrangement. Such a definition stretches elastically to reach around books which are card stacks, books which are solid pieces of bound material, and other books whose nature defies easy characterisation.

Meanwhile Cramer adumbrates more recent epub specifications in the following way:

Epublications are not limited to linear content… but the basic assumption is there is an order that is not achievable through html alone. A key concept of epublication is as multiple resources that may be consumed in a specific order. They are in essence offline media, self-contained documents with downloading features.
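Cramer's characterisation matches how EPUB containers are actually built: a zip archive whose META-INF/container.xml points to a package document whose spine lists resources in a fixed reading order. A minimal sketch using only the Python standard library - the file names and two-chapter content here are invented examples, not from any real epublication:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

CONTAINER_NS = "{urn:oasis:names:tc:opendocument:xmlns:container}"
OPF_NS = "{http://www.idpf.org/2007/opf}"

def spine_order(epub_bytes):
    """Return the reading order (list of hrefs) declared in an EPUB's spine."""
    with zipfile.ZipFile(io.BytesIO(epub_bytes)) as z:
        # container.xml tells us where the package document lives
        container = ET.fromstring(z.read("META-INF/container.xml"))
        opf_path = container.find(f".//{CONTAINER_NS}rootfile").get("full-path")
        opf = ET.fromstring(z.read(opf_path))
        # manifest maps ids to files; spine orders them for consumption
        hrefs = {item.get("id"): item.get("href")
                 for item in opf.find(f"{OPF_NS}manifest")}
        return [hrefs[ref.get("idref")] for ref in opf.find(f"{OPF_NS}spine")]

# Build a toy two-chapter EPUB-like container in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("mimetype", "application/epub+zip")
    z.writestr("META-INF/container.xml",
        '<?xml version="1.0"?>'
        '<container xmlns="urn:oasis:names:tc:opendocument:xmlns:container">'
        '<rootfiles><rootfile full-path="OEBPS/content.opf"/></rootfiles>'
        '</container>')
    z.writestr("OEBPS/content.opf",
        '<?xml version="1.0"?>'
        '<package xmlns="http://www.idpf.org/2007/opf">'
        '<manifest>'
        '<item id="c1" href="ch1.xhtml"/><item id="c2" href="ch2.xhtml"/>'
        '</manifest>'
        '<spine><itemref idref="c2"/><itemref idref="c1"/></spine>'
        '</package>')

print(spine_order(buf.getvalue()))  # order is the spine's, not the archive's
```

The point of the sketch is Cramer's: the order is declared inside the self-contained container itself, something HTML alone does not give you.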

From this point of coincidence, though, the technical, political, and aesthetic possibilities of epub experimentation are much more constrained than the present discourses of unboundedness suggest. Cramer gives the example of the Boem Paukeslag project produced at the Piet Zwart Institute, an effort to publish a visual poem as an animation on an e-reader using entirely non-standardized code. This was only possible through extreme amounts of crude technical hacking, and the result was restricted to reading on this single hacked device. The gesture of the work is this exercise of difficult possibility in the era of e-reading.

Cramer ended by ruminating on the increased interest in and mainstreaming of artist books today as a "genre of graphic design." Print itself here seems to be becoming a "boutique niche of materiality." This is its entropy: "all print books strive to become coffee table books, often with warm, fuzzy and unbound characteristics". The artist book becomes a real or auratic object, and tech art schools become implicated in "producing boutique collectibles for rich people," not unlike vinyl collecting. The image of the young Nick Carraway in The Great Gatsby, enamoured of the great library at the house party of the Long Island bourgeoisie, picking up a book from a shelf only to realise that not one on the shelf had been read, seems to resonate even more strongly in the present. Electronic books, in contrast, are the cheap paperbacks of our time, for better and for worse.

PDF of presentation available here: Unbound Book.

John Haltiwanger: Generative Typesetting

Posted: May 22, 2011 at 3:55 pm  |  By: gerlofdonga

John Haltiwanger is a New Media MA graduate and an autodidactic programmer with a strong interest in typesetting and open source software. Haltiwanger collaborates with the Open Source Publishing platform and the Universiteit van Amsterdam. The main focus of his presentation is generative typesetting, with his MA thesis used as an illustration. Haltiwanger argues for liberating the humanities from the proprietary control of tools such as Microsoft Office or the Adobe suite by implementing open source tools within academia. Standing behind his beliefs, he delivers his presentation with Sozi, an open source alternative to Prezi (itself an alternative to PowerPoint).

John Haltiwanger @ The Unbound Book Conference photo cc by-sa Sebastiaan ter Burg

"It's not who or what you are, it's where you're at" (a reference to Rakim's "It's not where you're from, it's where you're at") opens the third presentation in the Open Publishing Tools panel on Day 1 of the Unbound Book Conference. Haltiwanger starts by mentioning LaTeX and LyX, common libre tools that can successfully be used for typesetting documents such as theses, and argues for their superior typography and reference management. However, he also mentions that extensive stylistic customization in these tools can pose major problems, a realization that led him to explore other options and discover ConTeXt.

Haltiwanger illustrates the possibilities enabled by tools such as ConTeXt with his own MA thesis, whose case study was its own typesetting. What follows is a discussion of the technicalities of producing the thesis through generative typesetting, such as the necessity of setting it in both HTML and PDF and the dependence on automation. He then explains how people began applying the visually semantic conventions of email communication (such as ALL CAPS to indicate shouting or underscores for _emphasis_) as precursor formats for generating HTML (Markdown being one example), and concludes that in terms of informational impact and widespread use, MediaWiki has been the most successful visually semantic format. However, he does not see wikis as particularly fruitful for producing essays, because of their fragility and their incomplete visual semanticization. On the other hand, the relative popularity of wikis within the humanities proves that it is not so difficult for people to comprehend and work with visually semantic textuality.
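The mapping from such visual conventions to HTML can be made concrete in a few lines. This is an illustrative toy, not Markdown's actual parser - just two regular-expression rules for the two email conventions mentioned above:

```python
import re

def visual_semantics_to_html(text):
    """Map email-era visual conventions to HTML tags (illustrative subset)."""
    # _underscores_ around a phrase signal emphasis
    text = re.sub(r"_(\w[\w ]*?)_", r"<em>\1</em>", text)
    # runs of ALL CAPS (3+ letters) read as shouting, i.e. strong emphasis
    text = re.sub(r"\b([A-Z]{3,})\b", r"<strong>\1</strong>", text)
    return text

print(visual_semantics_to_html("this is _really_ IMPORTANT"))
# → this is <em>really</em> <strong>IMPORTANT</strong>
```

Real formats like Markdown add many more rules, but the principle is the same: the visual form of the plain text carries the semantics, and a generator translates it.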

The core of Haltiwanger's discourse on generative typesetting is unraveled with the introduction of Subtext, a tool he is designing. Its most distinguishing characteristic is the transformability of both the semantics and the procedures for dealing with them. As a result, the same semantics can be interpreted in multiple ways: a file can easily be made into a PDF for screen or for print, and an HTML version or an ePub can also be generated. Thus, he believes, the Next Great Format poses no threat to Subtext. While Microsoft Word privileges the human and HTML privileges the computer, Haltiwanger envisions Subtext as introducing a productive balance of agency between the two, while at the same time bringing out the best in the text itself. One effect of this balance is that tools for distributed source code development could be applied to generative typesetting.
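Subtext was still being designed at the time, so the following is a hypothetical sketch of the idea it embodies - one semantic representation handed to interchangeable renderers - rather than Subtext's actual interface; the node names and output forms are invented for illustration:

```python
# One semantic representation of a document: a list of (kind, content) nodes.
doc = [("heading", "The Unbound Book"),
       ("para", "Typesetting as transformation.")]

def to_html(nodes):
    """Render the semantic tree as HTML."""
    tags = {"heading": "h1", "para": "p"}
    return "\n".join(f"<{tags[kind]}>{text}</{tags[kind]}>" for kind, text in nodes)

def to_context(nodes):
    """Render the same tree as ConTeXt-flavoured source for PDF typesetting."""
    forms = {"heading": "\\section{%s}", "para": "%s"}
    return "\n".join(forms[kind] % text for kind, text in nodes)

# The same doc, two targets; adding an ePub renderer would not touch doc.
print(to_html(doc))
print(to_context(doc))
```

The design point is that new output formats are added as new renderers, leaving the semantic source untouched - which is why a Next Great Format poses no threat to such a tool.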

Some controversy during the Q&A session is driven by Haltiwanger's suggestion that these developer tools could revolutionize the classroom in academic humanities workflows, collaborative homework and peer review situations. While the server knows who each individual contributor is, it does not need to give this information to others, and therefore enables fairer grading and collaborative work. While Haltiwanger imagines the tool allowing teachers new ways of having their stylistic wishes respected and new ways of grading and reviewing, some audience members voice the concern that he is suggesting machines (the server) grade human contributions by the quantity rather than the quality of input. Haltiwanger acknowledges those doubts, clarifying that this was not his suggestion and that, by keeping the interface of the tools flexible, anything can be imagined: live anonymous peer review, conversations occurring without the power dynamics of names, and granular grading of group writing are just the tip of the iceberg.

The conclusion of Haltiwanger's presentation is that while current generative typesetting workflows are still too complex for widespread implementation, Subtext as a F/LOSS tool is capable of reflecting the relative simplicity of humanities workflows. People need to care about open source tools in academia and give up the embodied comforts of the current proprietary workflow. Humanities writing can be successfully liberated from proprietary control by merging the toolsets of distributed programming and reconfiguring them for one's own specific needs. While rather technical, Haltiwanger's presentation is inspiring: although still a distant vision, a widespread implementation of open source tools within academia would no doubt enable many new possibilities.

View Presentation here: http://drippingdigital.com/conf/unbound-book/textual-liberation.svg
Text document of notes here


Femke Snelting: F/LOSS tools in graphic design

Posted: May 22, 2011 at 3:48 pm  |  By: gerlofdonga

Femke Snelting is an artist and designer who works with the interdisciplinary and international graphic design collective Open Source Publishing (OSP), based in Brussels. During her presentation Snelting addresses the possibilities and realities of design, illustration and typography using a range of F/LOSS (free/libre/open source software) tools. While modifying and expanding its toolbox, OSP has used solely open source software since 2006, investigating its potential in a professional design environment.

Femke Snelting @ The Unbound Book conference photo cc by-sa Sebastiaan ter Burg

Femke Snelting is the second speaker in the first panel (Open Publishing Tools) on Day 1 of the Unbound Book Conference. She explains how, having grown tired of being tied to Macs with Adobe software, the founding members of OSP decided to move away from the suite and explore the rich landscape of other software. Switching to Linux and F/LOSS tools freed them from proprietary software and changed their ways of thinking about their practice. Amongst other activities, they started throwing “print parties” (where participants designed a book) in order to spread awareness of other options among a wider public.

She mentions the possibility of a dialogue between OSP and libre software developers as one of the main advantages of switching from proprietary to open: “If we depend on the software, we need to be able to make it better”. She follows this stance with a story of technical problems with rendering PDF files in Scribus (an open source program for professional page layout). The problems were addressed in an e-mail correspondence between OSP and Scribus, and as a result OSP members became active members of the Scribus community. Snelting asserts that such involvement would never have been possible had OSP been using Adobe packages.

OSP actively develops fonts, and Snelting mentions Univers Else, which is notable for being reproduced from the original Univers font through custom software developed by OSP (which generates fonts from scanned sources). Linked to this, Snelting also mentions a project based on scanning a book, generating a font from its typeface and producing a PDF (the project is still unnamed but will make its debut at Verbindingen/Jonctions 13 this fall in Brussels).

Femke Snelting's presentation proves that open source tools can be used as a viable publishing model. Open Source Publishing's book Verbindingen/Jonctions 10: Tracks in electr(on)ic fields is a Fernand Baudin 2009 prize-winning publication which was designed and typeset using only F/LOSS software. Pierre Huyghebaert and Femke Snelting collaborated on it using ConTeXt, Gimp, Inkscape and Scribus.

Anne Mangen on the Technologies and Haptics of Reading

Posted: May 22, 2011 at 2:17 pm  |  By: gerlofdonga

'The Ascent of E-Readers', the third session of the day, kicked off with Anne Mangen, Ph.D., associate professor in literacy and reading research and a reading specialist at the Reading Centre at the University of Stavanger in Norway. Her research interests mainly lie in the impact of digital technology on reading, writing and pedagogical methods. She is particularly concerned with cross-disciplinary approaches to reading, writing and comprehension, focusing on their multisensory, embodied aspects.

Anne Mangen @ The Unbound Book Conference photo cc by-sa Sebastiaan ter Burg

Anne is primarily concerned with questioning the role of haptics in the reading experience and whether the use of the hands engages the brain in ways that play a constitutive role in the reading process: what DOES the clicking do or add to the reading experience? She is particularly interested in evaluating and theorizing the impact that physical and technological affordances have on the phenomenological experience of immersion in narrative storyworlds and longer linear texts, as compared with reading a narrative by leafing through the pages of a book. At the heart of her passionate talk are questions of what these physical/technological affordances do to the reading process cognitively, phenomenologically and perceptually, and how we experience a text differently when we handle it with an e-reader, mouse and screen rather than in the print medium. The talk reflects on these questions and related concerns using findings from a host of empirical studies she surveys, addressing different aspects of reading (though a large portion of the findings date from before the digital reading and writing landscape evolved into what it is today).

An Embodied Process
New media is changing the role of the hands: readers use them to interact, to point, and to direct and sustain attention. For Anne, the fascinating and relevant paradigm for studying reading (and how reading changes with the digitization of text) is embodied cognition - a cross-disciplinary paradigm drawing on psychology, evolutionary anthropology, neuroscience and a wide range of social sciences. It is important to recognise that reading is an embodied process and activity by observing how we use our hands differently with digital devices -- the way we click, read, handle or touch the screen, and write -- and what affordances and impacts this has on reading. Sensory processes thus play crucial roles, particularly for pedagogy and reading instruction.

Referring to a study on the use of hands in shaping the brain, language, and human culture, Anne discusses findings that show how the human hand and brain became an integrated system for perception, cognition and action through a process of co-evolution. Thus, what we think of as human intelligence becomes embedded in the hand just as it is in the brain.

Redefining Reading
With all the talk about redefining the book -- bound and unbound -- Anne wants to shift the conversation to redefining reading, and to highlight perspectives on reading as a skill and a process that, in her opinion, have not been duly dealt with but are becoming both apparent and important. She reminds us that reading is multisensory (not only visual) and embodied (not only cognitive).

The Ergonomics of Reading
'Reading digitally also changes the ergonomic affordances provided by the interface, since a book on the computer or an e-book "invites" us to do something different with it than a printed book does, and so reading by clicking with the computer mouse rather than turning the pages of a book changes our perception and impacts reading directly.' Various reading devices -- an e-reader, iPhone, iPad, Kindle, etc. -- by way of their affordances all invite us to do different things with our hands. Anne describes how this subsequently affects our perceptual processes and sensorimotor actions, and thus influences reading processes, comprehension processes, aesthetic experiences and, by implication, reading itself.

If reading is embodied cognition, the ergonomics of reading devices become crucial to understanding how reading is changing, for better or worse.

Print vs. Digital Reading Technologies
Anne then reflects on the fundamental differences between print and digital sensorimotor affordances. Whereas print is tangible, fixed and imprinted on a physical substrate, digital text is intangible, with content and storage medium separated and with a temporary, unstable visible display - elements that could play a crucial role for children who are beginning to learn how to read. Different relationships thus emerge between something printed and something digital, and it becomes necessary to ask how the intangibility of the text impacts reading at different levels, for different kinds of text and for different reading purposes.

The Multifunctionality of the Digital or the Physical Structure of Print?
The multifunctional character inherent to digital text on digital devices means it has no status as an external memory, Anne points out. You cannot point to the iPad or Kindle to prompt your memory of where you read something - it contains thousands of additional materials. Conversely, a printed book can be told apart by its spine or cover, which serves as an external aid to memory. This intangibility leads Anne to further stress the role of the body in perception and the phenomenology of the intangible. The emergent claim is that the nature of digital technology has implications for our sensorimotor, perceptual and cognitive processes and for the experience of reading and comprehension at certain lengths of text. This is in part because the reconstruction of a text is based not only on content, gist, meaning and story, but on its composition, layout and physical structure.

Anne then shifts to hypertext and presents findings from empirical research selected over the course of the last two decades. Some claims that emerge from these studies:

  • despite the ubiquity of hypertext, people who read linear text comprehend more, remember more, and learn more than those who read hypertext
  • writing in word processors interferes with the ability of the writer to form a sufficient mental representation (global perspective) of the text. (Eklundh 1992)
  • scrolling disrupts the user’s sense of physical structure and consequently disrupts their ability to form a global perspective of the text (Eklundh 1992; Piolat et al. 1997)
  • spatial mental representations of text are known to be useful for reading comprehension (Piolat et al. 1997)

Sense of Text
Moving on from digital hypertext, Anne argues that a physical sense of the text is important to the way we mentally reconstruct it as an entity, as something with a certain pattern. Spatial mental representation of a text based on its layout is known to be useful for reading comprehension, and this can be explained by the affordances of paper, which offers tactile clues - sensing the progress of a book with your fingers, or layering pages, for example.

To conclude, Anne reemphasizes the aspects of haptic affordances, insisting that the most lasting reading technology has been one we can comfortably hold in our hands, where the human hand-eye coordination is taken into consideration in optimal ways. Though people are increasingly willing to read periodicals in digital format, Anne points out that the experience of reading [intangible] text is different, less efficient and less focused. In the end, for her, materiality of reading matters, and is one of the key differences between reading print and digital – a distinctive aspect of new reading technologies she claims will have a huge impact on the way people learn how to read and comprehend.

For more, visit the Reading Centre of the University of Stavanger in Norway.

PDF of Anne Mangen's presentation available here: Mangen Presentation.

Gary Hall: New Notions of Individualism and Property in the Digital Age

Posted: May 22, 2011 at 2:14 pm  |  By: gerlofdonga

Gary Hall is a Professor of Media and Performing Arts at Coventry University, UK. He is author of Digitize This Book! The Politics of New Media, or Why We Need Open Access Now (2008) and Culture in Bits (2002), and co-editor of New Cultural Studies (2006) and Experimenting: Essays With Samuel Weber (2007). His work has appeared in numerous journals, including Angelaki, Cultural Politics, Cultural Studies, and The Oxford Literary Review.

Gary Hall Photographed by Sebastiaan ter Burg at the Unbound Book Conference.

In the ‘Digital Enclosures’ workshop, the panel presented their respective stances on the questions of ‘open access’, copyright laws and business models, in relation to e-books.

Gary Hall explained that the impetus for open access comes from the fact that the scholarly model of publishing is no longer working effectively for publishers. This is largely because academic publishing has been taken over by media conglomerates, which direct most of their energies towards music and other media that generate more profit. Academic writing must therefore sell, and be seen as a commodity, in order to ensure its success and backing by the conglomerates.

Hall mentions various business models for publishing. In the first example, for-profit publishers concentrate mostly on sales. In this case, they tend to sell textbooks, a hot commodity for students which the publishers know will sell because of course requirements. Scholarly-led open access publishing is when the scholar takes the means of production into their own hands. They need not be merely profit oriented. Finally, the third model is when various scholars come together and perform all tasks related to the text. External funding from various sources subsidizes business costs while still ensuring open access books. One of the benefits of this model is the high level of production and editorial standards in the process.

In regard to the question of copyright, Hall states that the main source of funding is institutions paying employee salaries; scholars are generally happy to give their work away open access. What this means, though, is that open access cannot simply be translated to other industries or areas of society, such as the culture industries: those producers and creators must be compensated for their work in order for the business to thrive. Ultimately, copyright is good for corporations. However, many new technologies require new and specific copyright laws (as becomes evident when looking at internet piracy).

Is there an economic model for a sustainable, long-term open access policy in the humanities? Hall concludes that we don't know, but that we must not address the question with an all-encompassing "one size fits all, magic bullet answer". Perhaps digital culture may provide us with an opportunity to think differently about these issues, away from our currently understood notions of individualism and property.

For more info on Gary Hall's work and research, please visit http://www.garyhall.info/

Bernhard Rieder: 81,498 Words: the Book as Data Object

Posted: May 21, 2011 at 4:42 pm  |  By: gerlofdonga

The second session of day 1 of the Unbound Book conference - also titled The Unbound Book - was moderated by Geert Lovink, and discussions of what a book becomes once it's online and connected to information and people dominated the talks. Bernhard Rieder, Assistant Professor of New Media at the University of Amsterdam and Assistant Professor in the Hypermedia department at Paris VIII University, compelled the audience to think about what it means for the contemporary book to be meshed in digital structures, from a point of view where information science meets media studies. A refreshing talk, not about the death of books but about the new relationships and representations that digitization affords.

Bernhard Rieder @ The Unbound Book Conference photo cc by-sa Sebastiaan ter Burg

Perhaps not at the top of discussions surrounding e-readers and digital publishing, but an equally important aspect, is the transformation of the book into a data object - the focus of Bernhard's talk. His interest lies in the book in the age of the database. Reflecting on the last fifteen years -- which have seen the emergence of digital book collections holding very large databases of titles -- two aspects stand out for him: 1) the arrangements for discovery and reading that these large-scale databases of books encourage, and 2) the "computational potential", or the value as data, of millions of scanned books.

With online and digital book culture coming face-to-face with data culture, it becomes worthwhile to look at e-books and digital publishing structurally. The power of digitization brings with it the power of the database, and with the database come powerful changes to our relationships with and treatment of books, as the digital book's function and form are "unbound".

What does this mean?

Books are being scanned at scale, and various statistical properties of them can be analyzed for other purposes. We see this reflected in online book sites, where a wealth of ratings, reviews and lists of the most popular, best and worst books permeate. Using the example of The Hunchback of Notre Dame, Bernhard shows us that Amazon's text stats allow for different indexings of the statistical properties of books -- readability, complexity, number of words and fun facts (The Hunchback of Notre Dame has 81,598 words). Thanks to the database, you know just how many words per ounce a book contains and can decide which printed book is right for you.
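Amazon's text stats pipeline is proprietary, but the kinds of statistical properties it surfaces are simple to sketch once a book is a data object. A toy example with the Python standard library - the chosen measures are illustrative, not Amazon's actual formulas:

```python
import re

def text_stats(text):
    """Compute simple statistical properties of a text, text-stats style."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "words": len(words),
        "sentences": len(sentences),
        # longer words and longer sentences are common readability proxies
        "avg_word_len": sum(map(len, words)) / len(words),
        "words_per_sentence": len(words) / len(sentences),
    }

sample = "The hunchback rang the bells. The crowd below listened."
print(text_stats(sample))
```

Run over a scanned corpus of millions of titles, such per-book numbers become exactly the comparable, sortable properties the database makes available alongside ratings and reviews.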

As Bernhard explains, institutions (ranging from family, school and library to bookstores, market forces and affordances) have historically always contributed to structuring the universe of books, shaping what we read and how we read it. 'The book in the age of the database adds a contemporary wave of new embedded practices and logistics of what we read and how we read it.' In his view, three new practices emerge:

1) Exploring full text and metadata. This refers to the statistical projections of the whole text that allow various explorations of the catalogue's content such as Google's "common terms and phrases" or Amazon's "key phrases" feature, both of which link to relevant passages of the book.

2) Connecting by means of data. Specific to the 'database condition' is the possibility of interconnecting books through data, and of connecting books to and from other data, like the Web and Google Scholar, to name just a few. In other words, using Google's database you can have a popular passage extracted, and then link to other citations that cover the same topic or provide a different perspective.

3) Capturing and inferring. Perhaps the most important new embedded practice to materialize out of the database is the actual use of the data – of capturing user gestures and practices (word positions, metadata, and user data such as tagging or clicking, number of citations, reads, sales, reviews, and where in a passage a user decided to stop reading), and then using that data to create individual navigational experiences and opportunities, aka the personalization of reading.

Systems that digitize books, like Amazon and Google, transform books into information, and then unbind and rebind it again as an interactive, social and semantic interface.

Bernhard proceeds to elaborate that such transformations allow the discovery of a book through all the different representations that the database affords (as mentioned above). He strongly believes that, more than anything else, these database technologies are increasingly steering our opportunities for navigation online, how the age of personalized reading is coming about, and how it will be shaped in the future. 'What we see online very much depends on what you may have already read and what you've clicked on.' So the experience a user will have, and the books they will stumble upon, becomes highly dependent on the competence of the user in the first place. The other important aspect to take into account when determining what a user will read is the actual role of the database technology and how it enables different forms of embedded, technology-mediated reading -- via suggestions, comments, reviews, statistics and links showing how different texts relate to one another.

So what kind of book institution are we moving towards?

How we read was always a complicated and contested affair, continues Bernhard. The difference now is that the database is altering and reconfiguring the structures that orient what we read and how we read it. The new tools allow database and algorithm companies like Amazon to give customers more of what they want (low prices, vast selection, and convenience), and allow Google to "organize the world's information and make it universally accessible and useful". From a commercial perspective, these initiatives can be seen as ways to sell books and ads, create a one-stop shop, and profit from network effects -- but their impact is yet to be assessed. According to Bernhard, it's too early to say how the database system is actually affecting the way people read books. The larger questions -- of what we should read, what we could read, and how we can read -- are yet to be answered, and first we need to truly understand how this hierarchical, incentive-driven system functions internally as a recommendation system.

Back to the original question of his talk: what does it mean to have a full database of all books ever published? What can you actually learn from so many books being scanned in a database?

Many applications are yet to be rendered feasible in the first place (much of it due to current legal constraints) but nonetheless, Bernhard points out quite a few useful applications that could emerge: the automatic translation of texts, knowledge engineering (knowing who has the best texts/concepts for a specific subject), and finally ‘culturomics’.

A great example is Google's Ngram Viewer, which uses this computational potential to show what you can actually learn from having just 4% (6 million) of the world's books scanned. What the tool essentially does is take sequences of terms (n-grams) and look through Google's entire collection of digitized texts to determine the frequency of those word combinations in the time period selected.

The n-gram chart Bernhard showed for 'television', 'internet', 'radio' and 'newspaper', 1800-2008

Looking at the results, one can begin to see a whole breadth of insights emerge from rapidly quantifying cultural trends. In this way, "culturomics extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities" (Michel et al., 2011).
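The counting behind such a viewer can be sketched in a few lines. A toy version, with a three-document corpus standing in for Google's scanned books (the real viewer normalizes per year over billions of tokens and applies smoothing, which this sketch omits):

```python
def ngram_counts(corpus, term, n=1):
    """Relative frequency of an n-gram per year, Ngram Viewer style.

    `corpus` is a list of (year, text) pairs; the result maps each
    year to hits / total n-grams for that year.
    """
    freq = {}
    target = tuple(term.lower().split())
    for year, text in corpus:
        tokens = text.lower().split()
        grams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        year_total, year_hits = freq.get(year, (0, 0))
        freq[year] = (year_total + len(grams), year_hits + grams.count(target))
    return {y: h / t for y, (t, h) in freq.items()}

# Toy corpus: (year, text) pairs.
corpus = [
    (1950, "the radio played and the radio crackled"),
    (1990, "the internet arrived and the internet grew"),
    (1990, "television and radio on the internet"),
]

print(ngram_counts(corpus, "internet"))
```

Plot these per-year frequencies over two centuries of scanned books and you get the television/internet/radio/newspaper curves Bernhard showed.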

Bernhard concludes his talk by reaffirming that even without changing form, and without becoming part of an e-reader or e-book, the book is nonetheless caught up in large-scale databases. From finding and reading a book to engaging with, sharing and discussing it, the shift towards e-readers makes the database aspect easier to put into place, as it becomes something of a standard in e-publications.

Just imagine finding a fascinating passage in a book and then being able to jump to all books that refer to that passage or similar concepts. It is time for the debate around e-books to take in the database and how it can serve us to think about and integrate things from a cultural perspective.

For more, visit Bernhard Rieder’s homepage and his excellent research blog, The Politics of Systems.

PDF of Rieder's presentation available here: Bernhard Rieder Presentation

Ray Siemens: Sturm und Drang, Sound and Fury? E-Reading Essentials in a Time of Change and unFixity

Posted: May 20, 2011 at 5:30 pm  |  By: gerlofdonga  |  Tags: , , , , , ,

Ray Siemens @ the unbound book conference

Ray Siemens @ The Unbound Book Conference - photo cc by-sa Sebastiaan ter Burg

Ray Siemens held a lecture during the theme 'The Ascent of E-readers'. His talk was titled 'Sturm und Drang? E-Reading Essentials in a Time of Change and unFixity'. Together with his colleagues in the INKE Research Initiative, Siemens maps the challenges of the digital reading landscape.

E-Reading, an uncertain and challenging future

Siemens reflected on the themes discussed during the morning session, when the lecturers laid out their perspectives on the future of reading. At the start of his lecture, Siemens voiced his opinion on an overarching question concerning the challenges that digital reading has brought about. He spoke both in favour of and against e-books, explaining that he was torn between the chances and threats that the future of publishing and reading holds in store. "Modelling the book in electronic form is not easy", Siemens remarked. The 'fuss' was about the lack of fixity of digital texts, their unstable form and the non zoned-off reading experience. Siemens is all for enhanced reading, augmenting what the e-book has started.

He also said it was important to understand exactly what we are doing as we move forward, as it is uncertain where e-reader technology is going. Siemens continued by giving examples of the devices we have before us when reading. These range from the traditional physical book to smartphone-like devices, tablets and laptop PCs - which are not solely dedicated to e-reading. Some very ingenious ones never quite caught on, like this one (image located through James Bridle's Twitter account). Siemens looks beyond what the mass market has for sale and researches the dedicated e-reader experience from an academic perspective.

He explained that his research field sits at the intersection of several disciplines, ranging from the humanities to computer science, integrating fields like usability design, robotics and philosophy. He went on to explain that our digital climate holds an exciting future for e-books in store; it is just the present that is inconvenient. Digital reading is not yet up to the standard of quality, content and functionality that half a millennium of print publication has brought us, to paraphrase Siemens. The disconnect between theoreticians and developers, he argued, has led to an approach that was not pragmatic enough. In this context, Siemens also noted that the reading device itself is just one part of the ecosystem in which reading and communication take place.

The reading experience

Siemens argued that more attention should be paid to the sensory experience of reading. Modelling the e-book after the printed book and the page is an approach doomed to fail. Taking away the uncertainty means researching the ways in which reading and writing have evolved technologically and socioculturally. It requires, as Siemens put it, an analysis of the mechanics and strategies of reading, as well as textual and reader studies, research into interface design, and information management. Siemens asks himself: "Has the way we read and experience information changed since the rise of the internet?" A change in how we engage with text and context leads us to formulate new practices in interface design, with perhaps more focus on the process of reading, making interfaces more dynamic.

One point that came out of the public discussion was that an important difference between the digital and print reading experience is the added social aspect. As Siemens said, we are able to respond quickly to the current book revolution, enabling us to model the social practice, evolve its features, change its direction and mash up rudimentary features prominent in the Gutenberg age. He said that the research team he is part of will be simulating computational, social reading, and then scaling that experience towards a greater whole.

Siemens also discussed, in reaction to a remark by Bob Stein, that we know very little about what we are doing. We have little experience so the value lies in augmenting our current practices. He sees an important start-off point in visualizing and viewing information more dynamically.

The core of Siemens' lecture was the way in which technological progress in our reading methods and platforms disrupts our traditional thinking about what constitutes the reading experience, and how this disruption may allow us to gain insight into the essentials of that experience. Siemens takes nothing for granted and questions every facet of the evolving reading experience that he encounters with his research team, without being sceptical. This critical approach seems the right one to uncover both the possibilities and the threats that e-reading holds in store for our society.

Ray Siemens

Ray Siemens is Canada Research Chair in Humanities Computing and Professor of English at the University of Victoria, with a cross appointment in Computer Science. He is associated with several projects connecting the humanities to digital culture. For a complete biography, visit his personal website.

One of the important projects Siemens has been involved with, the HCI-Book, can be found here

Follow Ray on Twitter: @RayS6

PDF of presentation available here: Ray Siemens: Unbound Book