Inspiration – Even more David Claerbout

Studio David Claerbout just published the video of a lecture at The Cloud Collection, Nanjing, given on the occasion of the opening of the exhibition “Flow of Time. David Claerbout and Zhou Tao”.

At 17:50, he explains how he made The Woodcarver and the Forest: prompting ChatGPT to create a script and images, then filming what ChatGPT suggested, and finally giving the result back to ChatGPT and asking whether it recognises itself.

I took the rough auto-generated YouTube transcript of this talk, and prompted ChatGPT as follows: “take this transcript, delete the time markers, delete the uhm’s, and put it together in readable sentences”, and this is what it came up with (highlights by Petervan)

+++ by ChatGPT

Thank you for coming. When the invitation arrived to make a double exhibition with the work of Zhou Tao, we were of course very curious. It was clear that there was a real reason we were invited to exhibit together, but there was also a lot of mystery because I didn’t know his work very well, and I had not yet realized the intrinsic relationship to time in his work. It was only by coming here a few days ago that it started to sink in—what the subtle relationships between the two of us might be.

First of all, I would like to thank the Cloud Collection for bringing us together. We obviously have very different ways of working, which you can see in the works. For Zhou, being in a location, being physically in a place with his camera as his partner, is crucial. In my practice, we spend a long time and work with many people, sometimes for at least a year. If I look around at the pieces in this exhibition, I think the shortest production time is one full year. Other works took two or three years. Our record, if I remember well, is sixteen years—sixteen years of thinking back and forth about how to do something until we finally finished a production.

What we do have in common is that we like to use the duration of the film as the acting force—not so much the actors, not so much the motives, but the simple fact of being in front of a situation. This approach to film is relatively recent and has to do with the availability of digital time. I call it digital time because it is no longer expensive time; it’s virtual time of which we can gather a lot. For our generation, duration is no longer exclusive, expensive, or spectacular, but something broad and long.

When I came here two days ago and saw the combination with the work of Tao, it made me question myself: is it really necessary that I work so long on a single image? The airplane is a single motive. The birdcage is a single motive. These works revolve around very simple motives. I have to admit that whenever I work on one film, I am actually thinking about two films. This is one of the reasons I keep my motives simple: because I try to work with two identities.

For example, the film behind you, The Wood Carver, has the identity of a meditative work that calms you down, but also another identity that is almost the complete opposite. I’ve always been fascinated by what happens when you let go of narrative film—when you let go of talkies, psychological realism, and story, and instead go with time, with duration, with the flicker of the images. Could I make a very minimalist film where I use the least possible narrative and still generate narrative inside the heads of the visitors?

As you walk around, you’ll notice there are few sounds—no soundtracks, only what I call “witness sounds”: bird songs, nature, wind, footsteps. It wasn’t always like this. I made films with soundtracks, musical scores, conversations between actors. But my focus was always on the background, and more and more the birds became a symbol for that background—giving the film back to the witnesses rather than the actors. In cinema we often speak about foreground and background, like in painting. I realized I have a preference for what is behind—for what is far away, not in the foreground.

One of my very first films, made in 2003, is a 14-hour film where three actors perform a short 12-minute scene repeatedly for a full day, until they start making errors or falling apart. Only then do you slowly begin to see that the film is really about the light, the changes of light, and not about the narrative in the foreground. I am very much an advocate of the cinema of the witness, not the cinema of the actor.

A word also on ecology: I avoid entering specific subject matter, but I cannot help noticing that we spend a lot of time in front of screens and very little in nature. This makes me think about the relationship we have with technology. On one hand, I love technology—I’m a technological buff, and whenever something new appears, I try to catch up with it. But at the same time, my works are not about technology. They are about light and shadow, about composition, about the slow pace of time. Again, there are two tracks.

Any cinematographer knows that the moving image is a technological construction—25 frames per second. It is a prison of time; you cannot escape it. So why would artists choose to work in this prison in order to liberate time? To find alternatives for thinking about the flow of time, as Suzu beautifully mentioned in his text.

Let me elaborate on the black-and-white film behind the wall, titled Aircraft Final Assembly Line. Like many of my films, it is based on an image or an idea I found somewhere—an archive image, something with no particular message. I found a black-and-white photograph of this aircraft. It was originally painted in black matte aluminum. I was fascinated by the enormous wooden hall in Chicago where it was constructed—a space that no longer exists. This polished aluminum aircraft stood there, brand new in the past, yet I look at it now from the future, as a witness. I know the aircraft is probably destroyed by now. The work became about the dialectic between materials: polished aluminum, rough wood, concrete floor, improvised-looking scaffolding—yet airplanes themselves are not improvised. They must be perfect. Airplanes are like perfect arrows of time: they promise the future.

This is typical of how I work: I don’t invent; I let myself be inspired by archive images, almost orphaned images from the past.

When we move mentally to Bird Cage, the film with the explosion, this was a follow-up to a pandemic-era film, Wildfire. I continued with the motive of the explosion because it is the perfect index of a moment—after an explosion, nothing is ever the same. I was fascinated by the idea of remaining inside that moment of change for a long time, looking at all the pieces of the world flying apart, but in a peaceful way. Again, a paradox between destructive and meditative energies.

In the middle of this are two birds that appear twice—once inside the explosion, at the last fraction of their lives, and again later, alive and peaceful in a garden. I hesitated for a long time to use this imagery because it is almost ridiculous—two birds in an explosion, like a crucifix of birds. But I kept it because I was fascinated by the elliptical camera movement. If you look closely, there is no cut: the camera moves from the explosion back to a normal, sunlit garden, with the same two birds sitting together, almost as if they are trying to tell you something. Ten minutes later, the same ellipse brings you back to the explosion, frozen in time—an “immobile explosion,” a contradiction in itself.

I like to work within a single mental space, not jumping between spaces but taking the viewer on a minimalist journey through one situation.

The confetti piece around the corner, the double screen, was made from 2016 to 2018. It corresponds to a series I’ve done for about 15 years: dozens or hundreds of digitally constructed photographs combined into a slideshow. A conventional parade of images, except that it’s about the absence of sound. In the film, a little boy is screaming, but you hear nothing. His scream becomes internalized. The scene is exaggerated—a Black boy surrounded by white adults. I was warned against using it because it might appear divisive. But I said yes, that tension is exactly what we need in order to reach something more universal. The real fascination for me was the internalized scream, and the way spending time with the work might let the viewer internalize it too. It is as close as I can get to a painting with an action attached to it.

Finally, behind us is the latest work we made, just finished for this exhibition: The Woodcarver and the Forest. The subtitle is “A ruthless deforestation machine disguised as a meditative film.” Again, it has two identities. Most spectators will identify with the relaxation—the small sounds, the details of oil, wood, knives, carving, micro-movements outside any big narrative.

We designed this work using ChatGPT. ChatGPT is one big confabulation machine; it always gives you an answer, even when it has no clue. We wrote the storyboard with it, then requested illustrations, and used those as the basis for a cinematographic team. We worked for a year to reproduce what AI had suggested.

It came up with scenes like the one by the window: a person staring at nature but actually staring at something artificial—as if looking at a screen. In the images, if you look carefully, the “trees” outside are actually vertically stacked television screens. We prompted an image of the garden and it produced about ninety trees. Using its own calculation, it estimated it would take thirteen years for the protagonist to turn those trees into wooden objects. From those “thirteen years,” we selected three days—one early, one mid-process, one at the end—and produced only those.

In the end, it becomes a loop: AI proposes something, we produce it, then return it to AI like an alien creature and ask, “Do you recognize yourself?” About 5–7% of the imagery is completely fake—AI reprocessing its own outputs.

The idea of the work is that someone else—not me—could complete those thirteen years of destruction. My job was to create images with two simultaneous identities, properly schizophrenic: destructive and generative at the same time.

I think I will stop here. Thank you.

+++ end ChatGPT

The Weak Image Speaks

The camera—
that profoundly liberal invention—
whispers:

I’m ready for anything.
Give me chemicals, give me a little light,
give me time and no shaking,
and I will be done.

A pocket-sized Enlightenment,
believing every world is reachable,
every surface printable,
every body open to possibility.

And now generative machines produce punctum—
yes, Barthes’ punctum—
the involuntary meaning that slips through
the cracks of intention.
Not planned by the author,
nor by the algorithm,
but arriving later,
after you’ve slept on it,
after the dopamine subsides.
Fast food for intellectual minds,
rewarding at first bite,
quickly stale.
You return in the morning and mutter:
It was not that great after all.

Style appears.
Style overload.
Those who lack craft run toward it—
high, abstract, fast—
while you work the old way,
learning the hand,
the long path,
refusing to choose sides.

Spend time with it—
real duration—
and you’ll see how expensive time has become.
Only unproductive duration is free.
Yet we abandoned that when we entered the cinema,
trading mobility for the promise of instant return.
No one waits for tomorrow in a theater.
In a museum, though—
time’s ticking clock can’t be heard.
There we look forward to looking back.

And somewhere in this,
the black box—
practical, yes,
but also a symptom
of our incapacity to coexist.
The dark room becomes a social problem,
a refusal of interference,
a denial of shared space.

Everything becomes a question of time,
of how little we have left,
of how duration is mined
like ore.

Growing old treated as disease,
dementia as enemy,
while software dreams
of pure disembodiment—
young, innocent, clean.

And yet—
beneath all this—
you remind us:
we are bifocal,
split,
never individuals.
We are believers,
especially visually.

The camera says:
I’m ready for anything.

But the eye says:
I am not a camera.

And the brain says:
I choose no side.

And the forest says:
Take your time.

And the weak image
whispers from the periphery:
Here is the non-event—
stay long enough, and you may hear it breathe.

+++

The text above is an artistic experiment inspired by the insights David Claerbout shared in his presentation Reclaiming Our Agency and in BIRDSONG, the publication accompanying the premiere of The Woodcarver and the Forest at the Castle of Gaasbeek in August 2025.

I first edited the full transcript of the presentation, and then OCR-scanned the text “The Time Spent” from the BIRDSONG book. Then I made a personal selection of the sentences that resonated with me. I gave that to ChatGPT and asked it to condense all of it into a 1000-word poem, then 500 words, then 100 words. Again, I made a personal selection of the best GPT snippets, and further edited them to my personal (un)taste.

PXL Immersive Music Day 24 Apr 2025 – Conference notes

As part of the research for my immersive projects and performances, I am trying to better understand the visual and audio aspects of XR experiences. In that context, I attended the Immersive Music Day at PXL in Hasselt, Belgium, organized by the PXL Music Research team. The full program, schedule, and lineup are here.

It is a relatively small-scale event (I guess about 100 PAX), which is great as it enables networking with the participants and the speakers. The event was held at a location with great immersive audio infrastructure (three rooms with a full 360° sound set-up). Otherwise, it was a no-frills event with super-friendly staff and good food at breakfast and lunch.

Example of an immersive music room set-up

I was also pleasantly surprised by the mix of ages, ranging from fresh-faced high school students to seasoned audio veterans and legends, plus corporate fossils like myself. That kind of diversity usually signals that something truly interesting is about to unfold.

But the best part was the content and the speakers.

If there was an intended or unintended theme, it would be the subjective aspects of the immersive experience (how sound “feels”, or the experiential coherence of auditory, visual, and spatial input) vs. the technological aspects of immersive sound (like the precise localisation of sound in space). But I am sure that in some of the other sessions, the content was quite nerdy, down to the detailed coding and mathematical aspects of encoders/decoders.

Here are a few notes and reflections from the sessions I attended.

Immersive Space – An Agent for Creating and Experiencing Music

Speaker: Wieslaw Woszczyk, Director of the McGill Recording Studios and the Laboratory of Virtual Acoustics Technology at the Schulich School of Music of McGill University.

Program synopsis: Humans have sensory capabilities for recognizing their presence and immersion in space. Music ideally matches these capabilities by presenting dynamic, tonal, harmonic, and rhythmic structures in sound. Musicians use space to generate and blend sounds of ensemble, to hide and reveal musical voices, to dramatize perspectives, and to harness emotion in music making and listening. The talk explores immersive space as a modern technological tool for augmenting people’s experience of music.

CIRMMT Dome with 32 speakers

Notes:

I had never considered immersive sound as a medium for live music performance—being physically present in one space while listening to live musicians through a 360° sound system that simulates the acoustics of an entirely different environment. Wieslaw talked about auditory “fingerprints” of spaces. This goes way beyond an effect like a reverb plug-in that mimics the reverb of a cathedral. No, this fingerprint captures the full acoustic character of a space—every corner, every height, every nuance. And there are plug-ins available that let you import this detailed acoustic profile directly into consumer-level digital audio workstations like Logic Pro and others.

This allows performing artists to shape and test their artistic expression for a specific space, like the San Francisco Cathedral, or lets the audience experience the music as if they were actually there, immersed in that very acoustic environment.
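As I understand it, such an acoustic “fingerprint” is close to what audio engineers call an impulse response: you record how the space responds to a short test signal, and then apply that response to any dry recording by convolution. A minimal Python sketch of that last step, assuming the soundfile and scipy packages and hypothetical file names:

```python
# Rough sketch: applying a measured acoustic "fingerprint" (impulse response)
# of a space to a dry recording via convolution. File names are hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

dry, sr = sf.read("dry_recording.wav")        # recording made in a neutral room
ir, sr_ir = sf.read("cathedral_ir.wav")       # measured impulse response of the space
assert sr == sr_ir, "resample first so both files share the same sample rate"

# Use only the first channel of each file to keep the sketch simple.
dry = dry[:, 0] if dry.ndim > 1 else dry
ir = ir[:, 0] if ir.ndim > 1 else ir

wet = fftconvolve(dry, ir)                    # the dry signal "played back" in the space
wet /= np.max(np.abs(wet)) + 1e-12            # normalize to avoid clipping
sf.write("recording_in_cathedral.wav", wet, sr)
```

The dedicated plug-ins Wieslaw mentioned presumably do something along these lines in real time, with far more care for levels, early reflections, and multichannel playback.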

Altering the Immersive Potential: The Case of the Heilung Concert at Roskilde Festival

Speakers: Birgitte Folmann, Head of Research, Sonic College, and Lars Tirsbæk, Consultant in Sound & Emerging Technologies, Educator 3D audio, Sonic College

Program synopsis: Immersive concert experiences are often described as specific, emotionally moving, dynamic, and complex – qualities that require experimental and interdisciplinary methods to be meaningfully understood. In this talk, we explore the immersive and engaging potential of live concerts through the lens of the Heilung performance at Roskilde Festival. Drawing on anthropological fieldwork and insights into the technical systems that supported the experience, we discuss how a deeper understanding of immersion can inform both artistic and technological development to enhance future audience experiences.

Notes:

The talk was about the Heilung Concert at Roskilde Festival in 2024, in a festival tent holding about 17,000 people. Details about the technical set-up by Meyer Sound here.

What struck me was that the concert wasn’t branded as an “immersive” experience—there was no expectation set in advance. Yet, the immersion began the moment people entered the tent: birdsong filled the air, subtly blurring the line between environment and performance. It reminded me of my Innotribe days, where we also paid close attention to how people entered a space. After all, arrival and departure are integral parts of both the performance and the scenography.

The first part of the talk, by Lars, was about the technical challenges of delivering a 360° immersive sound experience in such a huge space. The second part, by Birgitte, was about the anthropological and subjective aesthetic experience of immersive music by the audience. Her slogan, “Aesthetics is a Verb”, is great t-shirt material. They also talked about the “attunement” of the audience to the experience, and about the fact that you can’t fight the visuals: for example, when the drums are playing on the front stage, having their 360° sound come from behind you does not work for the human brain.

Their team is now starting to document the findings of their field research. More to come.

Designing the Live Immersive Music Experience

Speaker: Paul Geluso, Assistant Professor of Music and Director of the Music Technology Program, NYU Steinhardt

Program synopsis: Paul Geluso’s work simultaneously encompasses theoretical, practical, and artistic dimensions of immersive sound recording and reproduction. His first book, “Immersive Sound: The Art and Science of Binaural and Multi-channel Audio,” published by Focal Press-Routledge, has become a standard textbook in its field. Geluso will share his research experience while providing exclusive previews of interviews and insights with featured immersive audio masters from his forthcoming book, “Immersive Sound II: The Design and Practice of Binaural and Multi-Channel Experiences,” set to be published in the fall of 2025. This presentation will also include discussions of his 3DCC microphone technique, a 3D Sound Object speaker design capable of holophonic sound playback, and his work on in-air sound synthesis and other site-specific immersive sound experience building techniques.

Notes:

Paul Geluso is God. Some years ago, he published “Immersive Sound: The Art and Science of Binaural and Multi-channel Audio,” considered by audiophiles to be “The Bible”. He is also good friends with Flanders’ best artist, Piet Goddaer aka Ozark Henry, who specializes in immersive sound and music.

Ozark Henry in his studio

Paul took us on a journey through his research on immersive recording (building custom 3D microphones and codes) and playback (building his own “Ambi-Speaker Objects”).

Paul Geluso’s immersive 3D Sound Object (Ambi-Speaker)

This was more of a backdrop for his upcoming book. While his first book was more about the how – the technology to record and play back immersive music – his new book will focus on the why – in essence, leading with the story and the artistic intent. He hopes the new book will be out in 2025.

I had the chance to have a short one-on-one conversation with Paul, who seemed interested in our immersive performance ideas, which was exciting to hear.

Subjective Evaluation of Immersive Microphone Techniques for Drums

Speaker: Arthur Moelants, Researcher, PXL-Music

Program synopsis: When presenting a group of listeners with four immersive microphone techniques in two songs, will they always choose the most objectively correct one? An experiment with drum recordings in different acoustics and musical contexts challenges the assumption that objective parameters like ICTD and ICLD should always determine the best choice. While non-coincident techniques often score better in these metrics, listener preferences can shift depending on the musical context, as other techniques offer different sonic and practical qualities that might benefit the production more.

A microphone set-up for drums

Notes:

Arthur is part of my team for our immersive performances, like The New New Babylon, where he acts as both a cinematographer and immersive music expert. He is a member of the PXL-Music Research team. I was curious to see how he’d handle public speaking and delivery, and he did not disappoint. I’m always impressed by how some young professionals manage to blend deep, almost nerd-level technical expertise with polished communication and presentation skills.

His talk was about his research on the subjective experience of drums, and how that experience differs depending on the recording technique and on the context of the drums as part of a song. I really liked the simple graphics in his slides, which explained some quite technical aspects of immersive music. It was not an easy talk to deliver, as he was also giving live demos on a 360° system to let us hear the subtle differences.
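For readers who, like me, had to look up the metrics in the synopsis: ICTD (inter-channel time difference) and ICLD (inter-channel level difference) describe how much earlier and how much louder a sound arrives in one microphone channel than in another. A rough Python sketch of estimating both from a two-channel recording, assuming numpy, scipy, and soundfile, with a hypothetical file name:

```python
# Rough sketch: estimating ICTD (inter-channel time difference) and
# ICLD (inter-channel level difference) from a two-channel drum recording.
# The file name is hypothetical.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

audio, sr = sf.read("drum_mic_pair.wav")      # expects a stereo (two-channel) file
left, right = audio[:, 0], audio[:, 1]

# ICTD: the lag (in milliseconds) that maximizes the cross-correlation
# between the two channels, i.e. how much later the sound reaches one mic.
corr = correlate(left, right, mode="full")
lag_samples = np.argmax(corr) - (len(right) - 1)
ictd_ms = 1000.0 * lag_samples / sr

# ICLD: the RMS level difference between the two channels, in dB.
def rms(signal):
    return np.sqrt(np.mean(signal ** 2))

icld_db = 20.0 * np.log10(rms(left) / rms(right))

print(f"ICTD ~ {ictd_ms:.2f} ms, ICLD ~ {icld_db:.2f} dB")
```

Arthur’s point, if I understood it correctly, is exactly that these numbers do not settle the question: listeners may still prefer a technique that scores worse on them, depending on the musical context.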

That’s it. Hope you enjoyed these notes.

Warmest,