Participatory Livecasting

What can participatory livecasting look like? Is there a playfulness in its future, and a true on- and offline collaboration between the physical world and the virtual one?

“At the moment, as an online visitor of a hybrid event, you can just ask a question to the speaker, which may or may not be seen by the moderator. Of course, you can also chat with each other, but there are relatively few other ways to show what you think, or what knowledge you have to share. That return channel from the online audience to the event and to the on-site audience has not been developed at all. It is still very basic.”

This is a quote from Monique van Dusseldorp (programmer, moderator, and researcher on the future of events), from an interview that The Hmm, affect lab, and MU conducted for their research on hybrid events for the development of the Toolkit for the Inbetween. This research, together with The Hmm’s and H&D’s own experience organizing and experimenting with online and hybrid events, revealed that it’s important for an online audience to feel seen and to have a sense of agency during hybrid events. Online audiences are often treated as a nice addition, but they are never essential for an event to continue; in a way, they are more like spectators. What if the event were influenced by the presence of an online audience? How can we give online visitors more agency? These are the questions we want to explore in this work group, ‘Participatory Livecasting’.

During one of the first meetings we had in this new group formation, we agreed that a better hybrid live experience is not necessarily a more immersive one. We’ve experienced that more connection between audiences (on-site, online, and among each other) can also be achieved with something as low-barrier as a collaborative spreadsheet drawing session, for example. On The Hmm’s live stream website, developed by Karl Moubarak and designed by Toni Brell, each online visitor is visualized by a simple dot at the top of the page. When someone sends an emote, their dot briefly changes into the emote they chose to send. This is a very subtle way to let the online audience feel seen and acquire some agency: they literally claim a bit of space on the live stream page and can operate somewhat autonomously by changing the contents of this 45px × 45px area. But this agency remains tied to the online environment. In this work package, we want to research how the online audience’s agency can extend beyond this 45px × 45px area into the physical space of the event, and how to make a more direct connection between online and on-site audiences. We want to develop mechanisms and prototypes that translate input from the online audience into outputs in the physical space, and vice versa.
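The dot-and-emote behaviour described above can be sketched roughly like this. This is a hypothetical Python simulation, not the actual website code (which is JavaScript); the class name, the plain-dot glyph, and the three-second emote duration are all assumptions for illustration.

```python
# Sketch (assumed, not the real implementation) of a visitor dot that
# briefly turns into an emote after the visitor sends one.

EMOTE_DURATION = 3.0  # seconds the emote stays visible (assumed value)

class VisitorDot:
    def __init__(self, visitor_id):
        self.visitor_id = visitor_id
        self.emote = None
        self.emote_until = 0.0  # timestamp after which the dot reverts

    def send_emote(self, emote, now):
        """Visitor sends an emote; the dot shows it for EMOTE_DURATION."""
        self.emote = emote
        self.emote_until = now + EMOTE_DURATION

    def render(self, now):
        """Return what this visitor's 45px × 45px area currently shows."""
        if self.emote and now < self.emote_until:
            return self.emote
        return "●"  # the plain dot

dot = VisitorDot("visitor-1")
print(dot.render(now=0.0))   # ●
dot.send_emote("🔥", now=0.0)
print(dot.render(now=1.0))   # 🔥
print(dot.render(now=10.0))  # ● (the emote has expired)
```

The key design point is that the emote is transient state layered over a persistent presence indicator: the visitor always occupies their bit of space on the page, and the emote temporarily repurposes it.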

During our second meeting, we brainstormed about what an online audience could influence in the space, and decided not to fill this in yet. We’re first focusing on the development of the tool itself, which translates changes in the online environment (like chat inputs, a new online visitor entering the space, or a received emote) into something that happens in the physical space (like a light going on, a smoke machine turning on, a vote being cast, or the program shifting). Once a working prototype exists, it becomes easier to think and experiment with it. And it is nice that it can be used in different ways, for different purposes. We’re using sprint sessions to develop the tool. During a workshop we’ll organize at the end of November, we’ll invite an audience to join us in building a system encompassing several different hybrid networking experiments. We’ll exchange knowledge about assembling systems like these and think further about how to situate them in the context of hybrid events. And during future The Hmm events, we’ll test specific implementations of the tool and collect input that we’ll use to develop it further.
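One way to picture the tool’s core is as a routing table from online event types to physical actions. The sketch below is a minimal Python illustration under our own assumptions (the event names, action names, and the log standing in for real hardware are all hypothetical), not the actual tool:

```python
# Minimal sketch of the translation layer: online events are dispatched
# to physical-space actions via a routing table. All names are assumed.

actions_log = []  # stands in for real hardware side effects

def turn_on_light(event):
    actions_log.append("light on")

def start_smoke_machine(event):
    actions_log.append("smoke on")

# Which physical actions each type of online event should trigger.
ROUTES = {
    "chat_message": [turn_on_light],
    "visitor_joined": [start_smoke_machine],
}

def handle_online_event(event):
    """Dispatch an online event to every physical action routed to its type."""
    for action in ROUTES.get(event["type"], []):
        action(event)

handle_online_event({"type": "chat_message", "text": "hello"})
handle_online_event({"type": "visitor_joined"})
print(actions_log)  # ['light on', 'smoke on']
```

Keeping the mapping in a table rather than hard-coding it matches the goal stated above: the same tool can be reused in different ways, for different purposes, by swapping out the routes.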

At the beginning of October, we had a sprint session where we successfully got the system to work. First, we connected an ESP module to The Hmm’s live stream website and made an LED light up when certain chat inputs were given. Then we connected a red button to the ESP module: when it was pressed, an emoji appeared on The Hmm’s live stream page. We prepared two upcoming activities:

  • On 9 November, we’ll run a first test case of the system during The Hmm’s event on the Metaverse at Pakhuis de Zwijger. When the audience types something in the chat, something physical will happen (we’ll have to define what).
  • On 26 November, we’re organizing our workshop ‘Emoji Proxies & Ghost Messengers’, where we’ll explore the implementation of the tool with an audience and teach them to program their own ESP32 module. The workshop will be held at Page Not Found in The Hague and online. More info here or here.

Based on these experiences, we’ll continue developing the system. Keep an eye on this blog for more!