Public Debate: Future of the Public Domain in Europe

Posted: November 14, 2010 at 11:12 pm  |  By: morgancurrie

Friday session, 20.30–22.30

Documents and sources on the Public Domain

Paul Keller from Kennisland opened the session with a bit of historical context: the 1990 Proposal for a Hypertext Project by Sir Tim Berners-Lee. From the very beginning the internet has been a place of debate about what should and what shouldn’t be in the public domain – an influential text was the discussion started by Eric Kluitenberg on the nettime mailing list, Frequently Asked Questions About the Public Domain.
James Boyle’s influential book from 2008, The Public Domain, has provided the groundwork for anyone talking, thinking and/or reflecting on the subject since.
In the framework of the Communia project, the Public Domain Manifesto was published, which led to an official charter for the European Library Project Europeana: the Public Domain Charter.

Creative Commons has introduced the Public Domain Mark, but meanwhile many questions remain, such as who should take care of this public domain and what infrastructures we can rely on.

James Boyle: Problems of the Public Domain

In his Skype session, James Boyle, Professor of Law at Duke Law School, laid out three main problems in the public domain debate – and a number of possible solutions to them:

On the conceptual level, an essential task is to make politicians, institutional bodies and citizens aware of the ecology of knowledge, in which a key driver of creativity is the interaction between the free and the controlled: we get creativity from the interplay of the realm of control and the realm of the free – in culture, science, politics, etc. More common, however, is an understanding that takes a universal stand only for the free. Yet such a conceptualization risks neglecting the balance between the two realms. Boyle illustrates this with the example of a lawyer who believed that every breach of copyright should be understood as a violation of human rights and who was shocked by the idea that some people might see this very differently.

The second problem is a cultural one. When the copyright terms were extended, we applied the most speech-restrictive set of laws to most of 20th-century culture. Since there is no speech-enhancing part of copyright law that could allow access and translation, we are denying ourselves access to most cultural expressions – including orphan works. Currently, 90% of creative and scientific materials are commercially unavailable while their copyright still runs – the benefit of royalties for authors applies only to a very small fraction of historically produced documents. More often, there is no benefit to anyone.

Meanwhile, with e-culture rapidly growing and researchers looking less and less at offline sources, the pyramid of knowledge seems to have been inverted: books have become the realm of the inaccessible. Where inaccessibility used to be a matter of spatial distance, actors such as Google have redefined access as immediate and disconnected from the spatial fixation of cultural expressions.

The choice of where to publish what rests persistently in the hands of the author – and without an author’s conscious choice, none of us will have access to a work produced by a contemporary in our lifetime. Free culture, public domain culture, will not contain any work made by our contemporaries unless they actively stipulate it – work is copyrighted by default. In this way, we have cut ourselves off from our collective heritage, even though generative production has always been made by remixing.

The last problem identified by Boyle concerns the realm of science. The public domain is an essential component of scientific undertakings. While it is often assumed that intellectual property functions better in this realm, given the relevance of technological progress and the resulting shorter patent term of 20 years (compared to copyright terms of the author’s lifetime plus 70 years), this does not seem to hold true.

Referring to Berners-Lee, Boyle points out that the web was envisioned for science: as a tool to link and share scientific material, forming sets of hypertext links – a web of connections that would enable human knowledge to flourish. What we are confronted with now, however, is that it works great for consumption and personal interests, yet for science the web hasn’t progressed very much: most literature is locked up behind firewalls or paywalls, which makes a dense set of connections to other online material impossible. Yet the power of the internet lies in these connections. Further, the current regime restricts items that are not even covered by copyright law in the first place, such as footnotes; they are regulated merely by a technological accident, made exclusive by walls of payment.

Next to this, we see an expansion of the range of scientific subject matter covered by intellectual property. In the EU, the Database Directive has had no empirical benefit for the database industry while imposing economically inefficient structures on scientists and citizens. At the same time, we see an expansion of patent rights to cover new developments such as gene sequencing and synthetic biology, raising fears that these expanded realms of intellectual property inhibit the growth of new scientific fields. Could foundational truths established in new areas be protected under patent law?

Now what can be done to counter these developments? In the political sphere, orphan works legislation could be feasible, since the expansion of copyright over material that is economically inaccessible is an embarrassment to the cultural industries. Other stimuli lie in private hacks and privately created solutions, such as general public licenses in software, Creative Commons licenses through which individual authors open their copyright up as a commons, or perhaps even Google Books as an example of a private initiative. Alongside these political and privately constructed commons, there is an enormous role for public education. Initiatives such as the Public Domain Manifesto and Communia are extremely valuable, and in more domains – from music sampling and software development to libraries and the sciences – people need to realise what the public domain means, and what it means if it is taken away from them.

Bas Savenije: Challenges for libraries in the digital era

Building on James Boyle’s talk, Keller notes that librarians may have become the keepers and custodians of material that is generally difficult to access, and opens the podium for Bas Savenije, Director General of the Dutch Royal Library, the Koninklijke Bibliotheek. In his talk, Savenije reflects on the changing role of libraries and the challenges they are confronted with in connection to current developments regarding the public domain.

Savenije observes that our current generation increasingly seems to perceive knowledge that is not digitally accessible as non-existent. Documents which are not yet digitalised therefore risk being forgotten. To counter this, libraries increasingly turn to digital content and the digitalisation of their stock. Currently, about 4 million items are preserved by the National Library of the Netherlands, which is aiming for their full digitalisation. However, Savenije points out that current calculations estimate that by 2013 only approximately 10% will have been digitalised. What is hindering progress?

The first obstacle is the lack of funding for such undertakings, as grants are often made available only for specific purposes, such as the digitalisation of Dutch parliamentary papers or of newspapers for research purposes. On the European level, there is money available to build infrastructure or improve access, but when it comes to the actual digitalisation of books, funding is lacking.

A way of dealing with these circumstances is to seek public-private partnerships, as recently happened with Google. This cooperation, however, was based on three conditions: 1) everything that is in the public domain in print should be in the public domain digitally, forever; 2) there should be no exclusivity of the documents to Google as a contractor; and 3) there would be no non-disclosure agreements. On the basis of this agreement, the digital material is now available for almost any educational or research purpose as long as it is not commercial. A dilemma remains: old manuscripts are not digitalised by Google because of insurance issues around these vulnerable items. And public-private partnerships with companies that do take care of such materials often run under different conditions that may create exclusivity.

A specialised company like ProQuest, which handles such projects for, for example, the National Library of Denmark, grants free access to the documents only within the country – access from anywhere else is locked behind a paywall for 10-15 years. Yet without such commercial partnerships, it is questionable to what degree the necessary progress towards digitalisation can be accomplished.

A second obstacle, of course, is copyright. Solutions to legal obstacles, e.g. around orphan works, are being developed in various EU countries in the form of extended collective licensing. A case which helped draw attention to this issue was the Google Books Settlement, as it put discussions about copyright and open access to scientific information on the European agenda.

Born-digital content presents another challenge to the workings of libraries, as it demands quite different approaches to collection and preservation. Is the traditional task of libraries to cover everything ‘published’ – operationally, any document published with an ISBN – still valid? With the Library of Congress’ move towards collecting tweets, should the National Library of the Netherlands collect tweets as well? Or would that rather be the task of the National Archive? How about scientific blogs? Common definitions of ‘publication’ seem to fall short given the current wealth of data creation. Connected to this are the implications of the organizational diversity of the heritage bodies facing these developments. Current publications sometimes work with annotated data models, integrating text with research-relevant data, audio and visual files in different media. How can organizations divided along media lines integrate such multimedia? Since partial, media-based collection would ruin the data, how does one arrange cooperation and build inclusive infrastructures?

Further, different types of libraries serving different parts of society are funded by different sources. As they consequently run different systems, how do the users of these libraries get access to data that is not available in ‘their’ specific library? An approach is needed that grants integrated access to data across territorial separation. The trend thus seems to go towards a National Digital Library with one common back office, through which every library provides access to its own community. While we have great examples such as Europeana, a big challenge is envisioning a ‘Nederlandeana’ that has a common infrastructure and responds to the changes the internet has induced across all domains. Another issue is securing the sustainability of such undertakings; due to time constraints, however, this was not elaborated upon further.

James Boyle responds to the apparent dilemma of increasing access to data being tied to the reintroduction of territoriality into the international public domain. How can one address these developments? According to Boyle, the first-best solution would be to shorten copyright terms to about 8 to 17 years, which seem to be the optimal terms. However, that does not essentially remove territoriality. The second-best solution would then be private or public-private initiatives, which would, however, also likely be territorial. An interesting case is that of Google, as the Google Book Search Settlement may open up a market for competitors and thereby introduce new challenges for the public domain aside from territoriality. To Boyle, adopting the second-best solution seems more realistic, given the potential of public licensing to achieve great things.

Bas Savenije adds that at the European level, issues such as territoriality are addressed several times per year in meetings of the different national and research libraries. Conditions for public-private partnerships have been translated into a draft paper that is still being worked on. Responding to a question from the audience about the libraries’ access to the interfaces and search functions developed by private partners, Savenije mentions that the libraries’ own data holdings are larger than the database of scans produced by Google and thus need to be developed independently: “I hope we can be as good as Google is in that”.

Lucie Guibault: The European Dimension

European directives on copyright and recent discussion on the public domain – including at the World Intellectual Property Organisation (WIPO).

The digital agenda plays a decisive role in stimulating discussion of the public domain. The piling up of rights can be counterintuitive and counterproductive, which is why the European Union plays an important role in a new wave of public domain discussions – centred on the thematic network COMMUNIA, which discusses what the public domain means for science, for the public and for the general interest.

A working group has been working on an adaptation of the Public Domain Manifesto, which is meant to take a bold and provocative stand against copyright law. When attempting to define the public domain, we notice that much of the writing on it is US-based (Duke University, et al.); Communia puts it on the map of the European discussions.

The manifesto proposes a split between the structural public domain (works whose protection has expired as well as all works that aren’t copyrightable) and voluntarily shared information (Creative Commons, …). It proposes the adoption and development of the Public Domain Mark and includes a number of general principles:

  • We should be able to freely build upon all information that is available out there: the public domain should be the rule and copyright the exception.
  • Most information is not original enough to be copyright-protectable and should therefore flow freely.
  • What is in the public domain should remain in the public domain.
  • Copyright is a right limited in time.

Simona Levi: Citizens’ and artists’ rights in the digital era

Simona Levi, Director of Conservas and involved in the annual oXcars, shares her point of view on public domain issues with a stronger focus on the position of contemporary producers of cultural goods, and reflects on the immediate challenges and contributions of the artist in relation to the public domain. Levi is connected to the FCForum, a platform and think-tank that understands itself as an international, action-oriented space to build and coordinate tools that enable civil society to respond to urgent political changes in the cultural sector. The FCForum brings together voices from liberal culture interest groups, yet explicitly also reaches out to the general audience to prevent absorption by institutional bodies. In 2009, the FCForum drew up the first Charter for Innovation, Creativity and Access to Knowledge, a legal companion supporting work in the cultural domain by addressing copyright legislation in the digital era.

In 2010, the main focus of the forum was how to generate and defend new economic models for the digital era. Issues of the public domain are thereby approached especially from the understanding that the artist’s work is situated in shared spaces. The current charter 2.0.1, ‘Citizens’ and Artists’ Rights in the Digital Age’, has a particularly practical focus, trying to challenge and influence political decision-making at the local and European level. While the points addressed in the charter are obvious and logical to those working in the artistic field, they sadly may not be to political bodies.

Some of the points mentioned by Levi are:

  • Copyright terms should not exceed the minimum term set by the Berne Convention (30 years); in the long term they should be shortened to about 8-17 years.
  • Legislation should allow every publication to enter directly into the public domain.
  • Results of work and development funded by public money should be made accessible to everyone.
  • Research funded by educational institutions should be made accessible to the public.
  • There should be no restriction on the freedom to access, link to or index any work that is already freely accessible to the public online, even if it is not published under a shareable licence – an issue touching on private/non-private copying legislation.

According to Levi, another problem is posed by the legal framework around quotation, which is not allowed in many parts of Europe unless it serves pedagogical or investigative purposes. Even if content creators support the quoting of their work, these limitations remain in force.

One major problem is connected to collecting societies. The problem here lies in the fact that there is little control over these bodies. They collect money in a public manner, yet the redistribution of this money to their members works in a problematic way, since only a fraction of the members can vote on these decisions, with voting power based on the royalties brought into the organization. This means that artists with a lower ability to bring financial assets into the group are essentially excluded from decision-making. As a last point, Levi notes that collecting societies restrict the application of free licensing in the cultural industries and thereby silence the artist’s potential interest in engaging with the public domain.

Respondents

Charlotte Hess: Protection of access to knowledge – in need of a movement

Charlotte Hess, Associate Dean for Research, Collections & Scholarly Communication at Syracuse University Library and internationally renowned commons theorist, briefly reacts to the different positions mapped out by the previous speakers.

While she recognizes that there is still much to do on issues such as open access, Europe seems to be on a good track concerning these developments. In 2001, James Boyle organised the first conference ever on the public domain, and Hess points out how important and influential his contribution has been, also through his work on the enclosure of the intellectual commons. What is needed now is a movement similar to the environmental movement, something that could draw together all sorts of different people to protect our access to knowledge.

While many of the issues we are facing in this context lie in the realm of law, there is certainly also a general lack of awareness, which leaves the legal restrictions unnegotiated and unchallenged. Yet in a world where the dominance of corporations is so strong, young people need to be encouraged to go into the political arena instead of being swallowed by corporate entities.

Marietje Schaake is a member of the European Parliament for D66, a member of the committee on culture, media and education, and co-founder of the European Parliament’s Intergroup on New Media.

In the closing part of the Public Debate, she discussed what the European Parliament can do for the public domain and what the sentiment in the Parliament is towards it. Overall, due to heavy lobbying, the suggestion is being raised that counterfeiting and breaches of copyright are to be the next war after the war on terrorism. Currently, the odds are against reform of copyright law – there is a strong lobby in favor of keeping and strengthening the status quo, and a severe lack of knowledge about public domain issues.

A lot can be done, though, to influence the existing wave:

  • present facts & studies about the impact of new technologies
  • have artists proclaim their support: the conservative lobby currently seems to be the one defending creativity
  • present data: appearing neutral helps counter the image of being “squatters of the internet who want to kill innovation”

If we want to secure an internet and knowledge culture that relies on principles of the public domain, we need to find a way to open up a polarized climate in which it is safer to side with the establishment.

Video on Wikipedia – Ben Moskowitz and Michael Dale

Posted: November 14, 2010 at 9:00 pm  |  By: morgancurrie

Thursday 11 November, Hilversum
by Serena Westra
After lunch, the pre-conference seminar continues with three parallel working groups. I joined the working group ‘Video on Wikipedia’, moderated by Ben Moskowitz and Michael Dale. This working group was held in a smaller room where all the attendees, about 14, sat around a table. Ben and Michael introduce themselves. Before starting the discussion on video on Wikipedia, they ask us to introduce ourselves and explain our interest in this workshop. There is a big variety of people in the room, from video journalists to hackers and from students to researchers.

Ben starts the discussion. He wants to get rid of the top-down structure of video and broadcasting, and spread video. But how can you do this? Open source software can play a significant role in the solution. ‘We don’t need the entire community to use open source software, as long as a part does.’ There needs to be a standard system, and browsers need to support it. The structures need to be collaborative. Video is already used in Wikipedia. It is working, but can we go beyond it? There are three questions Ben Moskowitz and Michael Dale want to address in the discussion about video on Wikipedia.

First, how do we get content and where does it come from?

Some people in the room try to give an answer to this question, but it is hard to find one that fits. For example, the content can come from the users, like on YouTube, but as Ben says: ‘Wikipedia will never be YouTube.’ How can we convince the masses to spend time on video for Wikipedia? This is incredibly difficult: the tools are immature, and there are technical complications and Wikimedia cultural implications. ‘The people [of the Wikimedia Foundation] are very consistent, could be good or bad.’ Another problem is that the best contributors to Wikipedia are somewhat resistant to video coming to Wikipedia. Some think it should be purely text-based. Geert Lovink disagrees with this point: ‘It was never purely text based, there has always been use of images and maps’.

There are some other solutions, as Geert Lovink suggests: ‘Maybe we can start with some experts as an example, like TED does, only in a slightly different way. It needs to be open.’ Someone else agrees that there are some good examples that work already, like Open Images and Beeld en Geluid. Maybe we can work with them?

Another problem is that if you want to build on this software, you need a really solid base. Wikipedia doesn’t really have this. Do you want to change this too? As Michael Dale points out, Wikipedia is experimenting with software to solve this problem. To him, that is more valuable than something perfectly planned. Video should be accessible for people all over the world.

The second question addressed in the workshop is: What should/will be the relationship between the encyclopaedia and video?

Wikipedia is a genre, and it is relatively fixed. Video is going to blow this away. It has to be verifiable, but how do you apply the Wikipedia policies to video? Is it original research? You filmed it yourself. How can you apply NPOV [neutral point of view] to video? Maybe the existing rules need to be set aside for video. For example, the users could decide if something is neutral. Or the video can be seen as an artefact: it has a specific point of view, but is part of a certain context.

What the role of video on Wikipedia will be is a difficult question. Could video be an illustration, supplant the article, or be something else? The people in the workshop can’t come to a definitive answer to this question. I guess we have to wait and see how it will turn out in a few years.

The last question addressed in the workshop was: Can the collaborative editing model work with video?

Michael wonders if the open, collaborative editing model of Wikipedia can really work for video. Ben answers this question: ‘No, I’m sorry Michael, but I don’t think so.’ But Michael is not so sure about this: ‘the tools can change as well.’ For example, the collaborative model can be realised through editing a basic timeline. Everybody can provide a timeline; maybe a user can choose the best one. Another example, suggested by Michael, is to create subsections. When you divide the video into smaller bits, which people can own, it is easier to use a collaborative model.
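To make the timeline idea a bit more concrete: a minimal sketch of what such a shared, editable timeline could look like as data, assuming a hypothetical structure in which each contributor proposes an edit decision list of clips. All names and files here are illustrative; this is not an existing Wikipedia or MediaWiki feature.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One clip in a proposed timeline: a source file plus in/out points (seconds)."""
    source: str   # e.g. a file hosted on Wikimedia Commons (hypothetical names below)
    start: float
    end: float

@dataclass
class Timeline:
    """A contributor's proposed cut: an ordered list of segments."""
    author: str
    segments: list

    def duration(self) -> float:
        return sum(s.end - s.start for s in self.segments)

# Two contributors each propose a timeline for the same article;
# the community (or a vote) could then pick the preferred cut.
cut_a = Timeline("UserA", [Segment("polar_bear_swim.ogv", 0.0, 12.5),
                           Segment("polar_bear_hunt.ogv", 3.0, 9.0)])
cut_b = Timeline("UserB", [Segment("polar_bear_hunt.ogv", 0.0, 15.0)])

print(cut_a.duration(), cut_b.duration())
```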
Besides that, according to Geert Lovink, TV, radio and film have always been collaborative. That is what the credits are all about: seeing who collaborated.
Another attendee of the workshop suggests the sandbox idea: person A has an idea and makes a raw version; person B has the right technical equipment and can make the movie thanks to the creativity of person A.

However, the problem is not a technical one, as Michael discovered, but a social one. Will the users come? And how will they use it? According to Ben, video will be based on conflict: the video with the most time and effort invested in it will win.

To find out how video on Wikipedia really works, the group is divided into two parts. The first group takes a look at the technical elements of Wikipedia; the second group wants to post a video on Wikipedia. By the end of the workshop, they have uploaded two videos. One of them replaced an existing article on the online encyclopaedia, as a small experiment to see how it works and how long it stays up. The second video addresses a new subject that no Wikipedia article existed about yet.

As Ben and Michael concluded in their workshop, the direction of video on Wikipedia is not clear yet and will only become apparent in one and a half to two years. I think we just have to wait and see!

Example of video on Wikipedia: Polar bear

Peter Kaufman on appreciating audiovisual value

Posted: November 14, 2010 at 5:57 pm  |  By: morgancurrie

In the “open content, tools and technology” panel, right after Ben Moskowitz and Michael Dale, it was Peter Kaufman’s turn to talk about appreciating audiovisual value and to do his bit to “achieve some positive social change within our lifetimes”, which is how he saw the ultimate goal of the Economies of the Commons conference.

The path to achieving that social improvement, and the challenge we face, is, in his own words, “to make more good audiovisual material and tools more freely available for everyone”, making it possible for everybody to speak in the language of video.

He went on to remind the audience that “Wikipedia is the largest public educational commons” (at least until the Smithsonian’s gigantic project finally takes off), as a starting point for an important point: the commons, the public domain and the market are wrongfully pitted against each other, and we stand to lose a lot if we isolate ourselves from business, from those building the infrastructure of the web. The counterpoint he proposes is a commitment from education, culture, justice… not to stand in isolation, but to seek collective arrangements for true openness.

Luckily, he didn’t stop at shoulds and general ideas, but dared to give five bullet points to consider in order to make them real:

1. Content needs to be smarter and more self-aware. After all, we are in a scenario where all video material is archival material, and online video will, no doubt, be a big part of the future of our video.
So in order to make this content easy to share, reuse and remix across borders and languages, and so that it can break the ultimate barrier – being readable not only by people but also by machines – we have to make it smart and self-aware.

Machines will be reading, sorting and ranking the video, and “if we pay attention to machine readability the assets discussed will become appropriately hyper-valuable”.
The good news at this point is that the big companies that control the market seem to be aware of it, and it looks like “intelligent” and “smart” are words we are going to see more and more on TV devices: you only need to have a look at the new products flooding the gadget blogs and stores, like Google TV (“Your TV just got smarter”), Intel’s Smart TV, YouView (the BBC’s Project Canvas) and Samsung Smart TVs.
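As a rough illustration of what machine-readable, “self-aware” video metadata can look like: the sketch below builds a small Dublin Core–style record as a Python dict and serializes it to JSON, so that software – crawlers, catalogues, rankers – and not only people can read, sort and rank the clip. The field values, URL and file name are invented for illustration.

```python
import json

# Hypothetical descriptive record for one archival clip, using Dublin Core-style
# element names so that machines can parse the title, rights and format.
clip_record = {
    "dc:title": "Prinsjesdag newsreel",            # illustrative title
    "dc:description": "Silent black-and-white newsreel footage, 1960.",
    "dc:date": "1960",
    "dc:language": "nl",
    "dc:format": "video/ogg",
    "dc:rights": "https://creativecommons.org/licenses/by-sa/3.0/",
    "dc:identifier": "https://example.org/media/prinsjesdag_1960.ogv",  # made-up URL
}

# Serialized, this record can travel with the video file or sit in an API response.
print(json.dumps(clip_record, indent=2, ensure_ascii=False))
```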

2. Search: Google and Google Images can search images by licence type (so does Flickr), and Wikimedia can pull in images from Flickr licensed under Creative Commons.
In order to achieve true openness and generate a good and complete open educational platform, video needs the same: searchability on Google and accessibility on Wikipedia – and by video I mean, of course, open video.
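For a concrete sense of what licence-aware search means in practice, here is a minimal sketch that queries Flickr’s public API for photos under particular licences. The API key is a placeholder, the licence IDs are whatever flickr.photos.licenses.getInfo reports for the Creative Commons licences you want, and the exact parameters and response fields should be treated as assumptions to verify against Flickr’s documentation.

```python
import json
import urllib.parse
import urllib.request

API_KEY = "YOUR_FLICKR_API_KEY"  # placeholder: obtain a key from Flickr

# flickr.photos.search supports a 'license' filter; 'extras=license' asks Flickr
# to return the licence of each hit so we can print it alongside the title.
params = {
    "method": "flickr.photos.search",
    "api_key": API_KEY,
    "text": "windmill",
    "license": "4,5",            # CC licence IDs, per flickr.photos.licenses.getInfo
    "extras": "license",
    "per_page": "10",
    "format": "json",
    "nojsoncallback": "1",
}
url = "https://api.flickr.com/services/rest/?" + urllib.parse.urlencode(params)

with urllib.request.urlopen(url) as response:
    data = json.load(response)

for photo in data["photos"]["photo"]:
    print(photo["id"], photo["title"], "licence:", photo["license"])
```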

3. Peter’s third point is really interesting. He described how the Pandora project in America succeeds with its model of a very powerful recommendation engine based on the “music genome”, which classifies each song with around 400 parameters/characteristics. He suggested creating a working group to start a similar system that can work with video.
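At its core, such a “video genome” would compare items by their annotated characteristics and recommend the nearest neighbours. A minimal sketch of that idea, with invented feature names and only four parameters instead of the hundreds a Pandora-style system would use:

```python
import math

def cosine_similarity(a, b):
    """Similarity between two equally long feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "video genome": a handful of hand-annotated characteristics per clip.
catalogue = {
    "silent_newsreel_1928": [0.9, 0.1, 0.0, 0.8],   # [archival, colour, dialogue, documentary]
    "nature_documentary":   [0.2, 0.9, 0.3, 0.9],
    "music_video":          [0.0, 1.0, 0.1, 0.1],
}

seed = catalogue["silent_newsreel_1928"]
recommendations = sorted(
    ((title, cosine_similarity(seed, feats)) for title, feats in catalogue.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for title, score in recommendations[1:]:  # skip the seed itself
    print(f"{title}: {score:.2f}")
```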

4. Number four is (and had to be) money, as, after all, the conference was about money. Today, money from government funds, fundraisers, the Mozilla Foundation, philanthropy and many other sources is injected into the development and expansion of open video tools, standards, promotion and distribution… but it is still too little compared to the huge budgets of mainstream media and Hollywood. We are at a critical moment where, if the open video community is unable to incentivise the corporate world to get involved in open video, we will fall short in the attempt to build an open web.

5. In relation to that, we have to consider something that might give us the key to that connection with the corporate world: the web is a commercial universe, filled with ads in every corner of our browser, and that doesn’t change because the video is open. The only problem is that all that revenue, produced out of our time and information, doesn’t go to our bank accounts, or to the content creators’, but to Google.
But is there a way to take control over this market that is, as Kaufman said, built on our stuff without us?
The answer, assuming there is one, is maybe – but for that we have to start appreciating the ecosystem of value that our material represents, and we have to be more aware of which rights we want to give away before we go broke giving our culture away for free.

Peter B. Kaufman is the president and founder of Intelligent Television.
You can listen to the whole talk here: Part 1, Part 2.

Materiality and Sustainability of Culture – Birte Christensen-Dalsgaard and the Cost Model for Digital Preservation project

Posted: November 14, 2010 at 10:28 am  |  By: morgancurrie

by Nicola Bozzi

Birte Christensen-Dalsgaard holds a Ph.D. in Theoretical Atomic Physics, but she has been working for media archiving institutions involved in digital preservation – like the Aarhus University Library and the Royal Library – for many years now. Even though digital archives don’t sound as complex as theoretical atomic physics, in her presentation Christensen-Dalsgaard showed us that running them involves some pretty complicated reasoning. Starting from the premise that an archive should provide the best possible version of an object, and an appropriate context to access it, Birte and her team have worked hard on algorithms and models to lay out cost-effective strategies.

First of all, archives need to provide a navigation structure, which has to be kept up to date. Christensen-Dalsgaard and her team have to make sure access and presentation are maintained, while the user experience has to abide by the latest web x.0 principles (currently they are implementing the semantic layer introduced by Web 3.0). To keep costs down, one of the strategies they have employed is ESP games, in which users are encouraged to add complex metadata – something a computer couldn’t produce on its own – while playing a relaxing online game. This way everybody wins: the institution doesn’t spend all its money on human labor, and users have a little fun while helping to make the service better.
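The mechanism behind ESP games is simple: two players independently tag the same object, and only the tags they agree on are stored as metadata, which is what makes the crowd-sourced labels reliable enough to keep. A minimal sketch of that matching rule; the tags and the taboo list are invented for illustration.

```python
def agreed_tags(tags_player_a, tags_player_b, taboo=()):
    """Return the tags both players entered, ignoring 'taboo' words already known
    for this object (the classic ESP-game rule that forces richer metadata)."""
    a = {t.strip().lower() for t in tags_player_a}
    b = {t.strip().lower() for t in tags_player_b}
    return (a & b) - set(taboo)

# Hypothetical round: two anonymous players tag the same archival photograph.
player_a = ["windmill", "black and white", "canal", "Holland"]
player_b = ["canal", "windmill", "boat"]

metadata = agreed_tags(player_a, player_b, taboo={"holland"})
print(metadata)   # {'windmill', 'canal'} -> stored as metadata for the image
```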

(Pro)-Active Archives: Celluloid Remix – Annelies Termeer

Posted: November 13, 2010 at 10:26 pm  |  By: morgancurrie

By Fenneke Mink

Annelies Termeer presents the Celluloid Remix online video contest organized by EYE Film Institute Netherlands and Images for the Future. In this seven-year project, four public archive institutes digitize, save, preserve and share Dutch audiovisual heritage for the future.

What comes after digitization is the question Termeer answers by presenting the practice of experimenting with the new possibilities of a digitized commons in the Celluloid Remix contest. For five months, contestants were asked to make a remix by reusing the available video content around the theme of modern times. The content made available for the contest is a large part of the 1917–1932 silent film collection of the EYE Film Institute. The fact that most of the material is silent and could be used without audio made the remixing challenge manageable. Celluloid Remix

Before starting the contest, the EYE Film Institute had some challenges of its own to overcome, mainly copyright and property issues around the material. After the kick-off by video artist Eboman as project ambassador, the quality standard was set, and the institute launched the various communication platforms: a website, a video upload page on Blip.tv, a Facebook page, and various workshops at institutes of higher art education and applied sciences. The results exceeded expectations: a shortlist was shown at the Dutch Film Festival and the winner was awarded at the festival’s award ceremony. Movement by Jata Haan

For Termeer, the lesson learned as an archive lies in the useful workshops held as part of the remix project. These were necessary to give the contestants the grip they needed on the project. The EYE Institute learned from this that contests involving user-generated content or user participation need guidance, provided by the institute to the participants. This is the first step towards an open and free environment of cultural practice and of sharing content and creativity as an archive of the commons. The focus should therefore be on the target audience and on communication, together with the right timing matched to that audience. For future projects this focus will be applied together with the infrastructure of other archives to create an even wider sharing of the cultural commons by (open) archives.

Hans Westerhof: Paying the Cost of Access

Posted: November 13, 2010 at 7:04 pm  |  By: morgancurrie

Hans Westerhof, deputy director at the Netherlands Institute for Sound and Vision and program manager of the Images for the Future project, spoke in the panel Materiality and Sustainability of Culture about the cost that access places on archives in a digital world.

The traditional archive of Sound & Vision consists of 21 vaults, spread over 5 floors of a building that opened in 2006. In the digital domain, the institute collects over 1 petabyte a year through both daily broadcast ingest and the results of the Images for the Future project. The physical archive is steadily starting to look very different: servers are replacing vaults (13-15 PB expected in 2014).

But what really weighs on the budget is not necessarily the storage costs (although we, as archives, are at a firm disadvantage when it comes to negotiating server costs, as this is new terrain for us), but the cost of access. Broadcast professionals and public users expect immediate digital hi-res downloads, which brings along:

  • robot tape-arms
  • proxies for all hi-res videos (see the sketch after this list)
  • software for creating proxies & restore
  • management system for data files
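Those proxies are low-resolution viewing copies derived from the hi-res masters, so that every request does not have to pull terabytes off tape. A sketch of how such proxies are commonly generated, here by calling ffmpeg from Python; the file names, folder layout and encoding settings are illustrative, not Sound and Vision’s actual pipeline.

```python
import subprocess
from pathlib import Path

def make_proxy(master: Path, proxy_dir: Path) -> Path:
    """Create a small H.264 proxy of a hi-res master file using ffmpeg."""
    proxy_dir.mkdir(parents=True, exist_ok=True)
    proxy = proxy_dir / (master.stem + "_proxy.mp4")
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(master),
            "-vf", "scale=640:-2",    # shrink to 640px wide, keep aspect ratio
            "-c:v", "libx264",
            "-crf", "28",             # lower quality = much smaller file
            "-c:a", "aac",
            "-b:a", "96k",
            str(proxy),
        ],
        check=True,
    )
    return proxy

# Example: generate proxies for every master file in a folder.
for master_file in Path("masters").glob("*.mxf"):
    print("proxy written:", make_proxy(master_file, Path("proxies")))
```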

Sound and Vision is also working hard on other modes of access through user-generated content and metadata (wikis, Open Images, Waisda, collaborations with Wikipedia) and through education programmes, which tend to be project-based (Academia, Ed-it).

We can control the cost of access in numerous ways, but the bottom line is that by going digital we create a lot more (re)use, which is a costly success.

We (the cultural heritage institutions) need to become better at:

  • going digital (get real, get digital, understand and own the subject matter, which is often new to our institutions)
  • collaborating (think and act beyond institutional boundaries, share platforms, create economies of scale)
  • negotiating (with service providers & private companies)
  • arguing the value & benefits of our case (we’re creating monetary value for others and should start thinking within the framework of the people who can help us out)


Materiality and Sustainability of Culture – Inge Angevaare and the costs of digital preservation

Posted: November 13, 2010 at 6:26 pm  |  By: morgancurrie

by Nicola Bozzi

With her 11 years of experience at the Koninklijke Bibliotheek, the National Library of the Netherlands, Inge Angevaare knows a good deal about archiving. Her presentation pointed out a very important and often underestimated aspect of digital information: its long-term preservation.
Echoing points made in the past by theorists like Geert Lovink (the internet, no matter what, needs and depends on an infrastructure) and Katherine Hayles (digital objects have their own materiality), Angevaare focused on the very real and tangible costs – in terms of both storage and human labor – that the prolonged maintenance of digital objects implies. Digital files are more fragile than we think, and even a missing bit can totally compromise the rendering of an image. For these reasons, and because formats and carriers are replaced over time, digital repositories need to keep up with technological evolution.
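One standard guard against exactly this kind of silent bit-level damage is fixity checking: recording a checksum for every file at ingest and periodically recomputing it, so corruption is caught before the last good copy is gone. A minimal, generic sketch follows; the folder layout and manifest format are invented for illustration, not the KB’s actual workflow.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(manifest_file: Path) -> list:
    """Compare current checksums against the stored manifest; return damaged files."""
    manifest = json.loads(manifest_file.read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(manifest_file.parent / name) != expected]

# Example: a manifest written at ingest time, checked again later.
repo = Path("repository")
repo.mkdir(exist_ok=True)
manifest = {p.name: sha256_of(p) for p in repo.glob("*.tiff")}
(repo / "manifest.json").write_text(json.dumps(manifest, indent=2))
print("damaged files:", verify(repo / "manifest.json"))
```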

Revenue Models – Jaromil and Marco Sachy tell us about Cyclos and their own dyndy.net

Posted: November 13, 2010 at 3:46 pm  |  By: morgancurrie

by Nicola Bozzi

As part of the Revenue Models panel at the Ecommons conference, the presentation by Jaromil and Marco Sachy focused on the decentralization of currencies and credit. The former began by introducing their own website, dyndy.net, an online lab providing “Tools, practices and experiences for the conceptualization, development and deployment of currency”; the latter analyzed in more detail the case of Cyclos, an open-source software package providing an alternative to traditional banking systems.

Economies of the Commons 2 – Video Trailer

Posted: November 11, 2010 at 3:27 am  |  By: admin

Economies of the Commons II HD Trailer on Vimeo.

Download or play Mpeg4 (Mp4) video

CREDITS:
Graphic design and leader: Jeroen Joosse
Sound and music: Hugo Verweij
Production assistance: Crookedline

Sound samples:
BBC: Breaking news of Lady Diana crash, 1997
NTS: Prinsjesdag, 1960
Vara: Eerste uitzending Lingo, 1989
ITV: A major fraud: Who wants to be a millionaire, 2003
BBC: Now the news intro, 2001