Inspiration – Even more David Claerbout

Studio David Claerbout just published the video of a lecture at The Cloud Collection in Nanjing, given on the occasion of the opening of the “Flow of Time. David Claerbout and Zhou Tao” exhibition.

At 17:50, he explains how he made The Wood Carver in the Forest: he prompted ChatGPT to create a script and images, filmed what ChatGPT suggested, and then gave the result back to ChatGPT and asked whether it recognises itself.

I took the rough auto-generated YouTube transcript of this talk and prompted ChatGPT as follows: “take this transcript, delete the time markers, delete the uhm’s, and put it together in readable sentences”. This is what it came up with (highlights by Petervan).
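For readers who want to automate this kind of cleanup, here is a minimal sketch of how that same prompt could be sent programmatically. It assumes the OpenAI Python SDK with a model such as gpt-4o and a hypothetical transcript.txt file; only the prompt text comes from the workflow above, the rest is illustration.

```python
# Minimal sketch: clean a rough YouTube auto-transcript with the OpenAI API.
# Assumptions: the openai Python SDK is installed, OPENAI_API_KEY is set in the
# environment, and the raw transcript lives in transcript.txt (hypothetical name).
from openai import OpenAI

client = OpenAI()

with open("transcript.txt", encoding="utf-8") as f:
    raw_transcript = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; any capable chat model would do
    messages=[{
        "role": "user",
        "content": (
            "Take this transcript, delete the time markers, delete the uhm's, "
            "and put it together in readable sentences:\n\n" + raw_transcript
        ),
    }],
)

print(response.choices[0].message.content)
```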

+++ by ChatGPT

Thank you for coming. When the invitation arrived to make a double exhibition with the work of Zhou Tao, we were of course very curious. It was clear that there was a real reason we were invited to exhibit together, but there was also a lot of mystery because I didn’t know his work very well, and I had not yet realized the intrinsic relationship to time in his work. It was only by coming here a few days ago that it started to sink in—what the subtle relationships between the two of us might be.

First of all, I would like to thank the Cloud Collection for bringing us together. We obviously have very different ways of working, which you can see in the works. For Zhou, being in a location, being physically in a place with his camera as his partner, is crucial. In my practice, we spend a long time and work with many people, sometimes for at least a year. If I look around at the pieces in this exhibition, I think the shortest production time is one full year. Other works took two or three years. Our record, if I remember well, is sixteen years—sixteen years of thinking back and forth about how to do something until we finally finished a production.

What we do have in common is that we like to use the duration of the film as the acting force—not so much the actors, not so much the motives, but the simple fact of being in front of a situation. This approach to film is relatively recent and has to do with the availability of digital time. I call it digital time because it is no longer expensive time; it’s virtual time of which we can gather a lot. For our generation, duration is no longer exclusive, expensive, or spectacular, but something broad and long.

When I came here two days ago and saw the combination with the work of Tao, it made me question myself: is it really necessary that I work so long on a single image? The airplane is a single motif. The birdcage is a single motif. These works revolve around very simple motifs. I have to admit that whenever I work on one film, I am actually thinking about two films. This is one of the reasons I keep my motifs simple: because I try to work with two identities.

For example, the film behind you, The Wood Carver, has the identity of a meditative work that calms you down, but also another identity that is almost the complete opposite. I’ve always been fascinated by what happens when you let go of narrative film—when you let go of talkies, psychological realism, and story, and instead go with time, with duration, with the flicker of the images. Could I make a very minimalist film where I use the least possible narrative and still generate narrative inside the heads of the visitors?

As you walk around, you’ll notice there are few sounds—no soundtracks, only what I call “witness sounds”: bird songs, nature, wind, footsteps. It wasn’t always like this. I made films with soundtracks, musical scores, conversations between actors. But my focus was always on the background, and more and more the birds became a symbol for that background—giving the film back to the witnesses rather than the actors. In cinema we often speak about foreground and background, like in painting. I realized I have a preference for what is behind—for what is far away, not in the foreground.

One of my very first films, made in 2003, is a 14-hour film where three actors perform a short 12-minute scene repeatedly for a full day, until they start making errors or falling apart. Only then do you slowly begin to see that the film is really about the light, the changes of light, and not about the narrative in the foreground. I am very much an advocate of the cinema of the witness, not the cinema of the actor.

A word also on ecology: I avoid entering specific subject matter, but I cannot help noticing that we spend a lot of time in front of screens and very little in nature. This makes me think about the relationship we have with technology. On one hand, I love technology—I’m a technology buff, and whenever something new appears, I try to catch up with it. But at the same time, my works are not about technology. They are about light and shadow, about composition, about the slow pace of time. Again, there are two tracks.

Any cinematographer knows that the moving image is a technological construction—25 frames per second. It is a prison of time; you cannot escape it. So why would artists choose to work in this prison in order to liberate time? To find alternatives for thinking about the flow of time, as Suzu beautifully mentioned in his text.

Let me elaborate on the black-and-white film behind the wall, titled Aircraft Final Assembly Line. Like many of my films, it is based on an image or an idea I found somewhere—an archive image, something with no particular message. I found a black-and-white photograph of this aircraft. It was originally painted in black matte aluminum. I was fascinated by the enormous wooden hall in Chicago where it was constructed—a space that no longer exists. This polished aluminum aircraft stood there, brand new in the past, yet I look at it now from the future, as a witness. I know the aircraft is probably destroyed by now. The work became about the dialectic between materials: polished aluminum, rough wood, concrete floor, improvised-looking scaffolding—yet airplanes themselves are not improvised. They must be perfect. Airplanes are like perfect arrows of time: they promise the future.

This is typical of how I work: I don’t invent; I let myself be inspired by archive images, almost orphaned images from the past.

When we move mentally to Bird Cage, the film with the explosion, this was a follow-up to a pandemic-era film, Wildfire. I continued with the motif of the explosion because it is the perfect index of a moment—after an explosion, nothing is ever the same. I was fascinated by the idea of remaining inside that moment of change for a long time, looking at all the pieces of the world flying apart, but in a peaceful way. Again, a paradox between destructive and meditative energies.

In the middle of this are two birds that appear twice—once inside the explosion, at the last fraction of their lives, and again later, alive and peaceful in a garden. I hesitated for a long time to use this imagery because it is almost ridiculous—two birds in an explosion, like a crucifix of birds. But I kept it because I was fascinated by the elliptical camera movement. If you look closely, there is no cut: the camera moves from the explosion back to a normal, sunlit garden, with the same two birds sitting together, almost as if they are trying to tell you something. Ten minutes later, the same ellipse brings you back to the explosion, frozen in time—an “immobile explosion,” a contradiction in itself.

I like to work within a single mental space, not jumping between spaces but taking the viewer on a minimalist journey through one situation.

The confetti piece around the corner, the double screen, was made from 2016 to 2018. It corresponds to a series I’ve done for about 15 years: dozens or hundreds of digitally constructed photographs combined into a slideshow. A conventional parade of images, except that it’s about the absence of sound. In the film, a little boy is screaming, but you hear nothing. His scream becomes internalized. The scene is exaggerated—a Black boy surrounded by white adults. I was warned against using it because it might appear divisive. But I said yes, that tension is exactly what we need in order to reach something more universal. The real fascination for me was the internalized scream, and the way spending time with the work might let the viewer internalize it too. It is as close as I can get to a painting with an action attached to it.

Finally, behind us is the latest work we made, just finished for this exhibition: The Wood Carver in the Forest. The subtitle is “A ruthless deforestation machine disguised as a meditative film.” Again, it has two identities. Most spectators will identify with the relaxation—the small sounds, the details of oil, wood, knives, carving, micro-movements outside any big narrative.

We designed this work using ChatGPT. ChatGPT is one big confabulation machine; it always gives you an answer, even when it has no clue. We wrote the storyboard with it, then requested illustrations, and used those as the brief for a cinematographic team. We worked for a year to reproduce what the AI had suggested.

It came up with scenes like the one by the window: a person staring at nature but actually staring at something artificial—as if looking at a screen. In the images, if you look carefully, the “trees” outside are actually vertically stacked television screens. We prompted it for an image of the garden, and it produced about ninety trees. Using its own calculation, it estimated it would take thirteen years for the protagonist to turn those trees into wooden objects. From those “thirteen years,” we selected three days—one early, one mid-process, one at the end—and produced only those.

In the end, it becomes a loop: AI proposes something, we produce it, then return it to AI like an alien creature and ask, “Do you recognize yourself?” About 5–7% of the imagery is completely fake—AI reprocessing its own outputs.

The idea of the work is that someone else—not me—could complete those thirteen years of destruction. My job was to create images with two simultaneous identities, properly schizophrenic: destructive and generative at the same time.

I think I will stop here. Thank you.

+++ end ChatGPT

PXL Immersive Music Day 24 Apr 2025 – Conference notes

As part of the research for my immersive projects and performances, I am trying to better understand the visual and audio aspects of XR experiences. In that context, I attended the Immersive Music Day at PXL in Hasselt, Belgium, organized by the PXL Music Research team. The full program, schedule, and lineup are here.

It is a relatively small-scale event (I guess about 100 PAX), which is great as it enables networking with the participants and the speakers. The event was held at a location with great immersive audio infrastructure (3 rooms with full 360 sound set-up). For the rest, it was a no-frills event with super-friendly staff and good food at breakfast and lunch.

Example of an immersive music room set-up

I was also pleasantly surprised by the mix of ages, ranging from fresh-faced high school students to seasoned audio veterans and legends, plus corporate fossils like myself. That kind of diversity usually signals that something truly interesting is about to unfold.

But the best part was the content and the speakers.

If there was an intended or unintended theme, it would be the subjective aspects of the immersive experience (how sound “feels”, or the experiential coherence of auditory, visual, and spatial input) vs. the technological aspects of immersive sound (like the precise localisation of sound in space). But I am sure that in some of the other sessions the content was quite nerdy, down to the detailed code and mathematics of encoders/decoders.

Here are a few notes and reflections from the sessions I attended.

Immersive Space – An Agent for Creating and Experiencing Music

Speaker: Wieslaw Woszczyk, Director of the McGill Recording Studios and the Laboratory of Virtual Acoustics Technology at the Schulich School of Music of McGill University.

Program synopsis: Humans have sensory capabilities for recognizing their presence and immersion in space. Music ideally matches these capabilities by presenting dynamic, tonal, harmonic, and rhythmic structures in sound. Musicians use space to generate and blend sounds of ensemble, to hide and reveal musical voices, to dramatize perspectives, and to harness emotion in music making and listening. The talk explores immersive space as a modern technological tool for augmenting people’s experience of music.

CIRMMT Dome with 32 speakers

Notes:

I had never considered immersive sound as a medium for live music performance—being physically present in one space while listening to live musicians through a 360° sound system that simulates the acoustics of an entirely different environment. Wieslaw talked about auditory “fingerprints” of spaces. This goes way beyond an effect that merely simulates the reverb of a cathedral. No, this fingerprint captures the full acoustic character of a space—every corner, every height, every nuance. And there are plug-ins available that let you import this detailed acoustic profile directly into consumer-level digital audio workstations like Logic Pro and others.

This allows performing artists to shape and test their artistic expression for a specific space, like the San Francisco Cathedral, or lets the audience experience the music as if they were actually there, immersed in that very acoustic environment.
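As I understand it, such an acoustic fingerprint is essentially a measured impulse response of the space, and “placing” a dry recording in that space boils down to a convolution. Here is a minimal sketch of that idea, my own simplification with hypothetical file names, not Wieslaw’s actual tool chain:

```python
# Minimal sketch: "placing" a dry recording inside another space by convolving
# it with that space's measured impulse response (its acoustic fingerprint).
# File names are hypothetical and both files are assumed to be mono WAVs at the
# same sample rate; real virtual-acoustics tools are multichannel and far richer.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr_dry, dry = wavfile.read("dry_violin.wav")   # close-miked, "dead" recording
sr_ir, ir = wavfile.read("cathedral_ir.wav")   # impulse response of the space
assert sr_dry == sr_ir, "sample rates must match"

wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))
wet /= np.max(np.abs(wet))                     # normalize to avoid clipping

wavfile.write("violin_in_cathedral.wav", sr_dry, (wet * 32767).astype(np.int16))
```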

Altering the Immersive Potential: The Case of the Heilung Concert at Roskilde Festival

Speakers: Birgitte Folmann, Head of Research, Sonic College, and Lars Tirsbæk, Consultant in Sound & Emerging Technologies, Educator 3D audio, Sonic College

Program synopsis: Immersive concert experiences are often described as specific, emotionally moving, dynamic, and complex – qualities that require experimental and interdisciplinary methods to be meaningfully understood. In this talk, we explore the immersive and engaging potential of live concerts through the lens of the Heilung performance at Roskilde Festival. Drawing on anthropological fieldwork and insights into the technical systems that supported the experience, we discuss how a deeper understanding of immersion can inform both artistic and technological development to enhance future audience experiences.

Notes:

The talk was about the Heilung Concert at Roskilde Festival in 2024, in a festival tent holding about 17,000 people. Details about the technical set-up by Meyer Sound here.

What struck me was that the concert wasn’t branded as an “immersive” experience—there was no expectation set in advance. Yet, the immersion began the moment people entered the tent: birdsong filled the air, subtly blurring the line between environment and performance. It reminded me of my Innotribe days, where we also paid close attention to how people entered a space. After all, arrival and departure are integral parts of both the performance and the scenography.

The first part of the talk, by Lars, was about the technical challenges of delivering a 360 immersive sound experience in such a huge space. The second part, by Birgitte, was about the anthropological and subjective aesthetic experience of immersive music by the audience. Her slogan, “Aesthetics is a Verb”, is great t-shirt material. They also talked about the “attunement” of the audience to the experience, and about the fact that you can’t fight the visuals: for example, when the drums play on the front stage, having the 360 sound coming from behind you does not work for the human brain.

Their team is now starting to document the findings of their field research. More to come.

Designing the Live Immersive Music Experience

Speaker: Paul Geluso, Assistant Professor of Music, Director of the Music Technology Program – NYU Steinhardt

Program synopsis: Paul Geluso’s work simultaneously encompasses theoretical, practical, and artistic dimensions of immersive sound recording and reproduction. His first book, “Immersive Sound: The Art and Science of Binaural and Multi-channel Audio,” published by Focal Press-Routledge, has become a standard textbook in its field. Geluso will share his research experience while providing exclusive previews of interviews and insights with featured immersive audio masters from his forthcoming book, “Immersive Sound II: The Design and Practice of Binaural and Multi-Channel Experiences”, set to be published in fall 2025. This presentation will also include discussions of his 3DCC microphone technique, a 3D Sound Object speaker design capable of holophonic sound playback, and his work on in-air sound synthesis and other site-specific immersive sound experience building techniques.

Notes:

Paul Geluso is God. Some years ago, he published “Immersive Sound: The Art and Science of Binaural and Multi-channel Audio,” considered by audiophiles as “The Bible”. He is also good friends with Flanders’ best artist, Piet Goddaer aka Ozark Henry, who specializes in immersive sound and music.

Ozark Henry in his studio

Paul took us on a journey through his research on immersive recording (making custom-made 3D microphones and codecs) and playback (building his own “Ambi-Speaker Objects”).

Paul Geluso’s immersive 3D Sound Object (Ambi-Speaker)

This was more of a backdrop for his upcoming book. While his first book was more about the how – the technology to record and play back immersive music – his new book will focus on the why – in essence, leading with the story and the artistic intent. He hopes the new book will be out in 2025.

I had the chance to have a short 1-1 conversation with Paul, who seemed interested in our immersive performance ideas, which was exciting to hear.
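A side note on the “Ambi” in Ambi-Speaker: in ambisonics, a mono source is encoded into a spherical sound field rather than into discrete speaker channels. The sketch below shows textbook first-order encoding only; it is my own illustration and not Paul’s 3DCC technique or his speaker design.

```python
# Minimal sketch: encode a mono signal into first-order ambisonics (AmbiX
# convention: ACN channel order W, Y, Z, X with SN3D normalization).
# Illustration only; not a specific microphone or speaker design.
import numpy as np

def encode_foa(mono: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Return a (4, n_samples) array of B-format channels for a point source."""
    az = np.deg2rad(azimuth_deg)    # 0 deg = front, positive = counter-clockwise
    el = np.deg2rad(elevation_deg)  # 0 deg = horizon, positive = up
    w = mono                        # omnidirectional component
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    x = mono * np.cos(az) * np.cos(el)
    return np.stack([w, y, z, x])

# Example: a 1 kHz tone placed 90 degrees to the left, slightly above the horizon.
sr = 48_000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
bformat = encode_foa(tone, azimuth_deg=90, elevation_deg=20)
print(bformat.shape)  # (4, 48000)
```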

Subjective Evaluation of Immersive Microphone Techniques for Drums

Speaker: Arthur Moelants, Researcher PXL-Music

Program synopsis: When presenting a group of listeners with four immersive microphone techniques in two songs, will they always choose the most objectively correct one? An experiment with drum recordings in different acoustics and musical contexts challenges the assumption that objective parameters like ICTD and ICLD should always determine the best choice. While non-coincident techniques often score better in these metrics, listener preferences can shift depending on the musical context, as other techniques offer different sonic and practical qualities that might benefit the production more.

A microphone set-up for drums

Notes:

Arthur is part of my team for our immersive performances, like The New New Babylon, where he acts as both a cinematographer and immersive music expert. He is a member of the PXL-Music Research team. I was curious to see how he’d handle public speaking and delivery, and he did not disappoint. I’m always impressed by how some young professionals manage to blend deep, almost nerd-level technical expertise with polished communication and presentation skills.

His talk was about his research into the subjective experience of drums, and how that experience differs depending on the recording technique and on the context of the drums as part of a song. I really liked the simple graphics in his slides, which explained some quite technical aspects of immersive music. Not an easy talk to deliver, as he was also giving live demos on a 360 system to let us hear the subtle differences.
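The synopsis above mentions ICTD and ICLD (inter-channel time and level differences) as objective parameters. For the curious, here is a rough sketch of how one might estimate them for a pair of channels; it is my own simplification for illustration, not the analysis method used in Arthur’s study.

```python
# Minimal sketch: estimate inter-channel time difference (ICTD) and
# inter-channel level difference (ICLD) for a pair of microphone channels.
# A simplification for illustration, not the study's actual analysis.
import numpy as np

def ictd_icld(left: np.ndarray, right: np.ndarray, sr: int) -> tuple[float, float]:
    """Return (ICTD in milliseconds, ICLD in dB) for two equal-length channels."""
    # ICTD: lag of the cross-correlation peak, in samples (negative = left leads).
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    ictd_ms = 1000.0 * lag / sr

    # ICLD: RMS level ratio between the channels, expressed in decibels.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    icld_db = 20.0 * np.log10(rms(left) / rms(right))
    return ictd_ms, icld_db

# Synthetic check: right channel delayed by 12 samples and 3 dB quieter.
sr = 48_000
left = np.random.default_rng(0).standard_normal(sr)
right = np.roll(left, 12) * 10 ** (-3 / 20)
print(ictd_icld(left, right, sr))  # roughly (-0.25 ms, 3.0 dB)
```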

That’s it. Hope you enjoyed these notes.

Warmest,

Petervan Studios – April 2025 Update

Head measuring device – seen in GUM Science Museum – Wunderkammer of Truth

General Status

Green Green Grass of Hope – Bicycle ride 26 Oct 2024

The Art Studio

Example of Ableton Live with Envelop for Live 3D Source Panner

An example of a simple VCV Rack set-up

Static example from Wave Unstable rule in CAPOW software by Rudy Rucker

Petervan Studios © 2025 – Seaside – Acryl on Canvas – 80x100cm

Crop from a Curves work by Anni Albers – Gouache on paper – ca. 1955

Petervan Studios © 2025 – Braid Amulets – Chinese Ink on A4 printer paper

Petervan Studios © 2025 – Gnarly Curves – Chinese Ink on Steinbach A1 paper

Petervan Studios © 2025 – Gnarly Curves – Digital in Procreate iPad

Back to School

Summer of Protocols 2024 (SoP24)

Toolmaking for Spatial Intelligence

Masterclass XR in Industry

1) Unity Essentials

2) Enhanced reality use cases

3) Enabling remote expertise (remote assistance)

4) Augmented inspection (BIM/3D concept visualization)

5) Visualizing the unseen (IOT data visualization/digital twin)

6) Virtual control (interaction with machines/robots via XR)

Performances

Performance: Claim Your Cybernetic Word – 17 June 2024

Resulting word cloud

Performance: What Makes Us Human? – 28 August 2024

Trailer: 

Performance: New New Babylon

Performance Dream My Dream

Trailer

17 Minute Video simulation of the performance

Artistic Research Project: New New Babylon

  • A beta version of an Urbanistic Artistic Rendering VR Environment, inspired by an existing or planned City or Real Estate project
  • Artistic Performance (minimum Online, ideally IRL), see above
  • Art Book

Stealth

A theory of space/time dimensions

Concept drawing by Petervan Studios © 2025

Image generated by Gemini 2.0

Delicacies

Books

  • Geometry, Relativity and the Fourth Dimension – by Rudy Rucker (1977)
  • Love & Math – by Edward Frenkel (2013)
  • Behave – by Robert Sapolsky (2017)
  • Mind in Motion – by Barbara Tversky (2019)
  • Soft City – by David Sim (2019)
  • and re-reading The Lifebox, the Seashell, and the Soul – by Rudy Rucker (second edition 2016)

For other books I am reading, see my GoodReads.

Exhibitions

SMAK – Ghent – Tarek Atoui

Sarah De Vos – detail – Galerie Sofie Van de Velde

What’s next?

  • The “Dream My Dream” performance 
  • The New New Babylon project
  • The Stealth Project
  • XR in the Industry Masterclass

I have joined The Futures Agency

I am super excited to let you know I have joined Gerd Leonhard‘s The Futures Agency as a speaker and as a part-time researcher and curator.

Gerd is a futurist and humanist, keynote speaker, author, and film maker. His latest book “Technology vs. Humanity” – published in 2016 – is a best-seller and has been translated into ten languages.

I first met Gerd when curating the program for Innotribe@Sibos 2016 in Geneva, where we experimented with a new format for his keynotes, labelled “The Future Show Live”. After lots of rehearsals, this resulted in a stunning presentation using the full real estate of a huge HD wall.

Gerd Leonhard speaking at Innotribe@Sibos 2016 in Geneva


Future of Money and/or Value

If you’re interested in discounted tickets for one of the coolest Future of Money conferences of Q1 2011, bear with me and read till the end of this post.


As most of you will remember, SWIFT’s innovation initiative “Innotribe” was one of the Executive Sponsors of the Future of Money video production.


The Future of Money from KS12 on Vimeo.

The video is, in my opinion, a milestone. Not only for its content and the way it was produced (co-funding), but also for the whole movement that followed.

It is only now that it has become clear to me that the event – in this case Sibos – is not the end-point of a process, but the middle. The post-event discussions and dynamics are at least as important, if not more so. Just check out, for example, the animated discussion on Chris Skinner’s blog in November 2010 on “Why banks and socials agree to disagree”.

“Social” – as in Social tools, Social Currencies and Social Capitalism – is in my opinion a very strong force to take into account in our long-term thinking about financial services. It is one of the suggested topics we have in mind for Innotribe at Sibos 2011.

  • I am preparing another blog post, “The Long Direction”, on this subject and some other deep undercurrents that are going to fundamentally change how we think about corporations, banks, the economy, and corporate culture in general.
  • With Innotribe, we will sponsor new research on Social Cognition by Stowe Boyd, the most important Social Philosopher and Webthropologist of the moment.

After the event, the Future of Money crew produced the following interesting infographic. I love the sharpness and detail of their analysis. In one view, you see how Creation, Storage, and Access of VALUE intersect, and how each of these intersections is an interesting opportunity to be taken up by start-ups or modern capitalists. Some have indeed taken their chances already: see the bottom of the chart, with a number of start-ups in this space.

I would like to emphasize that the intersections in the infographic do NOT talk about the Future of MONEY, but about the Future of VALUE. More about this as well in the upcoming “The Long Direction” post. At this stage it’s enough to point you to Umair Haque’s latest book “The New Capitalist Manifesto: Building a Disruptively Better Business” (Amazon Associates link).

But I digress… Here is the Future of Money infographic (you can also download a nice PDF version by clicking on the graph below).

Future of Money infographic

What an impressive list of start-ups and new contacts! Indeed, one of the biggest wins of our Innotribe initiative is the network of people we connect with.

This network is a very powerful force. Here is another example of this network-effect:

Given our work on the Future of Money at Sibos, Mike Sigal – Founder and CEO of Guidewire Group and part of our start-up judging panel at Sibos – recently introduced me to Brian Zisk, founder and organizer of the Future of Money and Technology conference in San Francisco on 28 Feb 2011.


When looking at the conference program and the list of confirmed speakers, I thought “Wow” and two days later I was on a confcall with Brian. Besides being the Executive Producer of this Future of Money & Technology conference, Brian Zisk is a serial entrepreneur and technology industry consultant specializing in digital media, web broadcasting and distribution technologies.


Brian wanted us to speak about Innotribe and the Future of Money at Sibos 2010, but unfortunately, given some other commitments and plans, none of our team could make it to San Francisco on that day.

But we both quickly spotted the possible synergies – wouldn’t it be great, for example, to get a subset of these speakers into the Innotribe stream at Sibos – and we came to the following pragmatic agreement.

  • I was going to write a blog about his event, and in return my readers could get some discounted tickets for his show. And he would promote our Mumbai and Toronto events later that year. Yes, it can be that dead-easy. No strings attached, pragmatic. Piece of cake if you share the same passion. If you want such a discounted ticket, see the end of this post.
  • But we kept on talking… It suddenly crossed my mind that only 2 weeks later, SWIFT was organizing its SWIFT Operational Forum Americas (SOFA) on 8-9 March 2011. As we had an Innovation slot in the Special Session on day 2 of that event, why not ask Brian to come over and give a wrap-up of his conference? By the way, watch this space for the Innotribe activities at SOFA: we are working on an impressive list of speakers for this Innovation slot on 9 March 2011. It will be the subject of another post.
  • And why not continue in this direction and see what we can do together for the first stand-alone Innotribe event in Mumbai, later this year on 1-2 June 2011? This event – hopefully the first in a long series – will be titled “Unpacked”, and this Mumbai edition will focus on Mobile Payments. More on that later as well.
  • And then let the whole movement culminate in a climax at Sibos Toronto from 19-23 Sep 2011? I have a first meeting with the Sibos 2011 organizing committee in 2 weeks. Yes, we start early °-)

So how do you get a discounted ticket for the Future of Money & Technology conference on 28 Feb 2011 in San Francisco?
