As promised in that post, the most compelling part of the book appears in its opening chapters, where author Bahadırhan Koçer introduces “The Orphic Experience.”
The short summary is in the video below, from 0:59 to 2:23. The latter part of the video is about the three key elements of dub-techno: spontaneous repetition, atmosphere, and embracing noise.
TL;DR: The orphic experience uses music to alter perception, evoke deep emotions, and influence the listener’s state of mind. It creates a unique space and time for introspection and reflection.
Let me unpack this in stages: first the “orphic” aspect, then the “experience” element, and finally a synthesis.
Orphic
The “orphic” part originates from Orpheus, a character in the ancient Greek poem Argonautica, dating back to the 3rd century BC. The Argonauts are travellers on the boat Argo, on a quest for the Golden Fleece. Somewhere along the route, the Sirens try to seduce the boatsmen, but Orpheus – a talented singer and musician on board – shields them from the Sirens’ temptations with his celestial, beautiful songs and voice. In other words, he was a noise canceller avant la lettre.
Some salient quotes from Bahadırhan Koçer:
The orphic experience, therefore, refers to the transformative way sound and media technologies can be used to control one’s sonic environment, creating a personalized auditory space that shields individuals from the overwhelming stimuli of modern life.
It is conceivable to argue that the nature of this transformation lies fundamentally in a shift from communal to individual listening.
The protected space needed for “sensory and emotional self-care”
In this sense, orphic experience can be seen as a way of escaping from the demands of the real world and constructing a self-contained, artificial reality.
By carefully curating their auditory environment and creating a personalized soundtrack to their lives, the individual can signal their taste and distinction to others, and distinguish themselves from those who do not possess the same level of cultural capital.
The “orphic” concerns the creation of a protected, isolated space in which the rules constraining clear thought can be suspended.
Experience
The second part is about “experience”. The words “Narrative” and “Experience” have become catch-alls. Washed-out. Weak. And both suggest a passive audience.
Here, too, a David Claerbout quote is appropriate:
“I think the recent proliferation of black boxes for film and video-art is not just a practical solution to a problem of sound and light interference, but also reflects an incapability to coexist. This can become apparent in large group exhibitions, where media installations appear strong when they are shown by themselves in a small or large dark space, but they easily collapse when shown in a social space where people move about and interact. The black box is a social phenomenon, for me it is a problem.” (Ulrichs, David, “David Claerbout: Q/A”, in: Modern Painters, May 2011, pp. 64–66)
“Designed Conspiracy” would better describe what I have in mind. With an active audience. Or even better, where there is no stage hosting the expert speaker and no passive audience just leaning back in chairs, incapable of truly internalising knowledge.
I imagine us inside a 360° immersive room: a six-metre-high LED screen, full 360 Dolby Atmos sound, LiDAR tracking, and high-definition cameras—paired with exceptional content and facilitation. A complete experience in a box, ready to tour and deploy anywhere in the world. Am I exaggerating? Maybe not. I’ve just met someone who is building exactly this.
Synthesis
Obviously, I am using all of the above as a metaphor to explain what I do with my artistic interventions, provocations, and interruptions. These qualities inform my work/play, whether that is soundscapes, installations, performances, or group expeditions.
Now that we have our protected, isolated space and a designed conspiracy, it is time to play the music. Music is the content. Content is the music.
Experiencing our music – individually or as part of a group – can feel like a trip, a trance, like digital psychedelics.
The music/content is presented in the right space, with the appropriate emotional and psychological atmosphere—the backdrop, if you will—inviting and sustaining safety, interest, curiosity, awe, and growth.
The rhythm is softer, slower, quieter rather than harder, faster, louder.
We embrace – and even design – flaws and imperfections, spontaneous repetition, and noise, inviting the participants to connect with being human, and to internalise the content at an embodied level of sensory experience.
We design with fifty shades of sophistication: avant-garde activism shaped by counterculture, driven by intention and direction. We build a relational infrastructure capable of holding shared ambitions, carrying a map as a symbol of movement, of becoming. These are maps that make meaning—shifting the question from the adolescent “Where are we going?” to the more deliberate “What direction do we want?”
We are all Argonauts again. We are experiens-explorers. We want to create the right spaces and conditions for debating the new rules and the associated structures of reality, then acting them out as if those rules were in place. As explorers, we want to play with new rules to dream, new rules to hope, but also – not to sound too cheesy or utopian – new rules to suffer and to cope with evil and sin. In that sense, we all become part of a shared conspiracy.
We are not in the business of homo sapiens, ludens, or faber, but in the business of homo experiens.
As already mentioned in my September 2025 Delicacies, I got a crush on the latest album by “Mister Dub” Adrian Sherwood, and went down the Dub Techno rabbit hole.
From the review in De Standaard newspaper (translated with Google Translate, highlights mine):
“With his label On-U Sound, Adrian Sherwood has created a unique musical universe over the past half century, rooted in Jamaican dub but with tentacles reaching out to punk, funk, and psychedelia, peppered with samples, echoes, and sound effects. His new album features only one track with a recognizable reggae rhythm; the others are driven by slow bass lines and stimulating drum patterns. Many of these tracks are played by real musicians, just like the cinematic fragments of flute, saxophone, organ, cello, trumpet, percussion, piano, Roland 60, and harmonica (“Spaghetti Best Western” exudes Ennio Morricone). Sherwood can call upon a host of loyal musicians (including Brian Eno and hip-hop legends Doug Wimbish and Keith LeBlanc) who add color and human warmth to his boundless imagination as a studio wizard. In an interview, Sherwood did admit that this was the first time he’d used AI to create a record. It seems like a logical evolution for a man who has spent his life innovating and experimenting with new equipment.” (km in De Standaard)
Here is some older material from Adrian Sherwood. Watch his body language while performing 😉
And the song “Trapped Here” from his previous album, Survival & Resistance
The album comes with a beautiful cover (designed by Peter Harris). The cover and the album’s atmosphere remind me of Rustin Man’s 2020 album Clockdust (I wrote a post about that one in 2020). It’s no surprise: after playing bass in a local reggae band in Southend, Rustin Man (Paul Webb) and his schoolmate, drummer Lee Harris, went on to form the rhythm section and become founding members of Talk Talk, alongside the exceptionally talented Mark Hollis and Simon Brenner.
The covers of Adrian Sherwood and Rustin Man respectively
So the starting point is dub reggae, which these days has evolved into a genre called “Dub Techno”. There is something melancholic about both albums, in sound, lyrics, artwork, and, at times, kinky living.
I don’t have real musicians available in my studio, and I’m hesitant to rely on AI. I’ve experimented with AI-generated music before, but it doesn’t bring me the same joy or sense of satisfaction as creating it myself. So I started studying and exploring the Dub Techno style, and found this book, “Dub Techno – The Orphic Experience of Sound” by Bahadırhan Koçer.
On page 56, Koçer begins discussing the concept of the riddim—Jamaican patois for “rhythm”—first examining drum patterns, and later turning to bass lines and melodic structures.
I started implementing them in Ableton Live. Here is an example of the “stepper” variant on a 64 Pads Dub Techno Kit.
Ableton Live 12.1 implementation “Stepper” by the author
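For those who want to try this at home, a steppers pattern is easy to sketch programmatically. Below is a minimal Python example – my own illustration, not Koçer’s notation – that writes a one-bar steppers clip as a MIDI file with the mido library, assuming General MIDI drum notes (36 kick, 38 snare, 42 closed hat). The resulting file can be dragged onto a drum rack in Live.

```python
import mido

TPB = 480   # ticks per quarter note
GATE = 60   # fixed note length in ticks

# (beat, MIDI note, velocity): kick on every beat is what defines "steppers";
# offbeat closed hats add the skank, a snare on beat 3 anchors the bar.
hits = (
    [(b, 36, 110) for b in (0, 1, 2, 3)]
    + [(2, 38, 95)]
    + [(b + 0.5, 42, 70) for b in (0, 1, 2, 3)]
)

# Build (absolute tick, message) pairs, then sort before computing deltas.
events = []
for beat, note, vel in hits:
    tick = int(beat * TPB)
    events.append((tick, mido.Message('note_on', note=note, velocity=vel)))
    events.append((tick + GATE, mido.Message('note_off', note=note, velocity=0)))
events.sort(key=lambda e: e[0])

mid = mido.MidiFile(ticks_per_beat=TPB)
track = mido.MidiTrack()
mid.tracks.append(track)

now = 0
for tick, msg in events:
    track.append(msg.copy(time=tick - now))  # MIDI files use delta times
    now = tick

mid.save('stepper.mid')
```

The four-to-the-floor kick is what makes it “steppers”; drop the kick back to beat 3 only and you drift toward one-drop territory.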
That was easy. Then I tried to build a song using other out-of-the-box and/or free devices, clips, and samples in Ableton Live 12.1 and Logic Pro 11.2.2 (btw, the new bass and keyboard session-players, and the new studio piano and studio bass in Logic are amazing).
The new Studio Bass in Logic Pro 11.2
Creating a song was more of a challenge. What Adrian Sherwood and his real musicians were doing was not so simple after all. Although the individual clips sounded simple, the art lies in the subtle, sophisticated launching of clips and echo/delay effects.
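The echo trick itself is conceptually simple, even if using it tastefully is not. Here is a toy sketch in Python – my illustration of the general dub-delay idea, not Sherwood’s actual chain: a dotted-eighth multi-tap echo where each repeat comes back quieter. Real dub units also filter and saturate every repeat, which this deliberately omits.

```python
import numpy as np

def dub_echo(x, sr=44100, delay_s=0.375, feedback=0.6, repeats=8, mix=0.5):
    """Toy dub echo: each repeat lands delay_s later, scaled by feedback."""
    d = int(delay_s * sr)                             # delay in samples (dotted 8th at 120 BPM)
    out = np.concatenate([x, np.zeros(d * repeats)])  # leave room for the echo tail
    gain = mix
    for k in range(1, repeats + 1):                   # stack progressively quieter taps
        out[k * d : k * d + len(x)] += gain * x
        gain *= feedback
    return out

# A half-second decaying sine "stab" to feed through the echo.
sr = 44100
t = np.arange(sr // 2) / sr
stab = np.sin(2 * np.pi * 220 * t) * np.exp(-6 * t)
echoed = dub_echo(stab, sr=sr)
```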
As with writing, the real effort lay in removing the superfluous rather than adding more to the mix. Still, to make it a bit more my own, I included a few AI voice clips from the New New Babylon performance.
Short experiment by the author
But I am an amateur/bricoleur after all. No way I will ever get close to Adrian Sherwood and his musicians, at least not as a musician. But maybe in real life? Adrian and the band are touring North America and Europe in 1Q 2026. They will perform in Wintercircus Ghent on 6 Feb 2026. See/hear you there?
As part of the research for my immersive projects and performances, I am trying to better understand the visual and audio aspects of XR experiences. In that context, I attended the Immersive Music Day at PXL in Hasselt, Belgium, organized by the PXL Music Research team. The full program, schedule, and lineup are here.
It is a relatively small-scale event (I guess about 100 PAX), which is great as it enables networking with the participants and the speakers. The event was held at a location with great immersive audio infrastructure (three rooms with a full 360 sound set-up). Otherwise, it was a no-frills event with super-friendly staff and good food at breakfast and lunch.
Example of an immersive music room set-up
I was also pleasantly surprised by the mix of ages, ranging from fresh-faced high school students to seasoned audio veterans and legends, plus corporate fossils like myself. That kind of diversity usually signals that something truly interesting is about to unfold.
But the best part was the content and the speakers.
If there was an intended or unintended theme, it would be the subjective aspects of the immersive experience (how sound “feels”, or the experiential coherence of auditory, visual, and spatial input) vs. the technological aspects of immersive sound (like the precise localisation of sound in space). But I am sure that in some other sessions the content was quite nerdy, down to the code-level and mathematical details of encoders/decoders.
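For a taste of that nerdy end: the core of first-order ambisonic encoding fits in a few lines. This sketch is my own illustration of the standard textbook formulation (traditional B-format, with the omnidirectional W channel attenuated by 1/√2), placing a mono signal at a given azimuth and elevation:

```python
import numpy as np

def encode_foa(mono, azimuth_deg, elevation_deg):
    """First-order ambisonic (B-format) encoding of a mono signal."""
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    W = mono / np.sqrt(2.0)             # omnidirectional pressure component
    X = mono * np.cos(az) * np.cos(el)  # front/back figure-of-eight
    Y = mono * np.sin(az) * np.cos(el)  # left/right figure-of-eight
    Z = mono * np.sin(el)               # up/down figure-of-eight
    return np.stack([W, X, Y, Z])
```

A decoder then mixes W, X, Y, Z down to whatever speaker layout is in the room – and that mapping is where the real mathematics (and the arguments) live.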
Here are a few notes and reflections from the sessions I attended.
Immersive Space – An Agent for Creating and Experiencing Music
Program synopsis: Humans have sensory capabilities for recognizing their presence and immersion in space. Music ideally matches these capabilities by presenting dynamic, tonal, harmonic, and rhythmic structures in sound. Musicians use space to generate and blend sounds of ensemble, to hide and reveal musical voices, to dramatize perspectives, and to harness emotion in music making and listening. The talk explores immersive space as a modern technological tool for augmenting people’s experience of music.
CIRMMT Dome with 32 speakers
Notes:
I had never considered immersive sound as a medium for live music performance—being physically present in one space while listening to live musicians through a 360° sound system that simulates the acoustics of an entirely different environment. Wieslaw talked about auditory “fingerprints” of spaces. This goes way beyond reverb effects that merely simulate, say, a cathedral’s decay. No, this fingerprint captures the full acoustic character of a space—every corner, every height, every nuance. And there are plug-ins available that let you import this detailed acoustic profile directly into consumer-level digital audio workstations like Logic Pro and others.
This allows performing artists to shape and test their artistic expression for a specific space, like the San Francisco Cathedral, or lets the audience experience the music as if they were actually there, immersed in that very acoustic environment.
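The core mechanism behind such plug-ins is convolution with a measured impulse response. Here is a minimal mono sketch of the idea (the file names are hypothetical, and real spatial fingerprints are multichannel, e.g. ambisonic IRs, but the principle is the same):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical files: a dry mono take and a mono IR of the target space.
sr, dry = wavfile.read('dry_take.wav')
sr_ir, ir = wavfile.read('cathedral_ir.wav')
assert sr == sr_ir, "resample first so the sample rates match"

dry = dry.astype(np.float64)
ir = ir.astype(np.float64)
ir /= max(np.abs(ir).sum(), 1e-12)      # crude loudness normalisation

wet = fftconvolve(dry, ir)              # the take, "placed" in the space
wet /= max(np.abs(wet).max(), 1e-12)    # normalise to avoid clipping
wavfile.write('in_the_space.wav', sr, (wet * 32767).astype(np.int16))
```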
Altering the Immersive Potential: The Case of the Heilung Concert at Roskilde Festival
Speakers: Birgitte Folmann, Head of Research, Sonic College, and Lars Tirsbæk, Consultant in Sound & Emerging Technologies, Educator 3D audio, Sonic College
Program synopsis: Immersive concert experiences are often described as specific, emotionally moving, dynamic, and complex – qualities that require experimental and interdisciplinary methods to be meaningfully understood. In this talk, we explore the immersive and engaging potential of live concerts through the lens of the Heilung performance at Roskilde Festival. Drawing on anthropological fieldwork and insights into the technical systems that supported the experience, we discuss how a deeper understanding of immersion can inform both artistic and technological development to enhance future audience experiences.
Notes:
The talk was about the Heilung Concert at Roskilde Festival in 2024, in a festival tent holding about 17,000 people. Details about the technical set-up by Meyer Sound here.
What struck me was that the concert wasn’t branded as an “immersive” experience—there was no expectation set in advance. Yet, the immersion began the moment people entered the tent: birdsong filled the air, subtly blurring the line between environment and performance. It reminded me of my Innotribe days, where we also paid close attention to how people entered a space. After all, arrival and departure are integral parts of both the performance and the scenography.
The first part of the talk, by Lars, was about the technical challenges of delivering a 360 immersive sound experience in such a huge space. The second part, by Birgitte, was about the anthropological and subjective aesthetic experience of immersive music by the audience. Her slogan, “Aesthetics is a Verb”, is great t-shirt material. They also talked about the “attunement” of the audience to the experience, and about how you can’t fight the visuals: when the drums play on the front stage, having the 360 sound come from behind you does not work for the human brain.
Their team is now starting to document the findings of their field research. More to come.
Designing the Live Immersive Music Experience
Speaker: Paul Geluso, Assistant Professor of Music, Director of the Music Technology Program – NYU Steinhardt
Program synopsis: Paul Geluso’s work simultaneously encompasses theoretical, practical, and artistic dimensions of immersive sound recording and reproduction. His first book, “Immersive Sound: The Art and Science of Binaural and Multi-channel Audio,” published by Focal Press-Routledge, has become a standard textbook in its field. Geluso will share his research experience while providing exclusive previews of interviews and insights with featured immersive audio masters from his forthcoming book, “Immersive Sound II: The Design and Practice of Binaural and Multi-Channel Experiences”, set to be published in fall 2025. This presentation will also include discussions of his 3DCC microphone technique, a 3D Sound Object speaker design capable of holophonic sound playback, and his work on in-air sound synthesis and other site-specific immersive sound experience building techniques.
Notes:
Paul Geluso is God. Some years ago, he published “Immersive Sound: The Art and Science of Binaural and Multi-channel Audio,” considered by audiophiles to be “The Bible”. He is also good friends with Flanders’ best artist, Piet Goddaer aka Ozark Henry, who specializes in immersive sound and music.
Ozark Henry in his studio
Paul took us on a journey through his research on immersive recording (making custom-made 3D microphones and codes) and playback (making his own “Ambi-Speaker Objects”).
Paul Geluso’s immersive 3D Sound Object (Ambi-Speaker)
This was more of a backdrop for his upcoming book. While his first book was more about the how – the technology to record and play back immersive music – his new book will focus on the why – in essence, leading with the story and the artistic intent. He hopes the new book will be out in 2025.
I had the chance to have a short one-on-one conversation with Paul, who seemed interested in our immersive performance ideas, which was exciting to hear.
Subjective Evaluation of Immersive Microphone Techniques for Drums
Speaker: Arthur Moelants, Researcher PXL-Music
Program synopsis: When presenting a group of listeners with four immersive microphone techniques in two songs, will they always choose the most objectively correct one? An experiment with drum recordings in different acoustics and musical contexts challenges the assumption that objective parameters like ICTD and ICLD should always determine the best choice. While non-coincident techniques often score better in these metrics, listener preferences can shift depending on the musical context, as other techniques offer different sonic and practical qualities that might benefit the production more.
A microphone set-up for drums
Notes:
Arthur is part of my team for our immersive performances, like The New New Babylon, where he acts as both a cinematographer and immersive music expert. He is a member of the PXL-Music Research team. I was curious to see how he’d handle public speaking and delivery, and he did not disappoint. I’m always impressed by how some young professionals manage to blend deep, almost nerd-level technical expertise with polished communication and presentation skills.
His talk was about his research on the subjective experience of drums, and how that experience differs depending on the recording technique and on the context of the drums as part of a song. I really liked the simple graphics in his slides, which explained some quite technical aspects of immersive music. Not an easy talk to deliver, as he was also giving live demos on a 360 system to let us hear the subtle differences.
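For reference, the two objective metrics named in the synopsis can be computed in a few lines. A sketch for one stereo microphone pair – my illustration, not Arthur’s actual analysis code: ICTD estimated as the lag that maximises the cross-correlation, ICLD as the RMS level ratio in dB.

```python
import numpy as np

def ictd_icld(left, right, sr=48000):
    """ICTD in ms (positive = left arrives later) and ICLD in dB."""
    # ICTD: lag of the cross-correlation peak, converted to milliseconds
    corr = np.correlate(left, right, mode='full')
    lag = int(np.argmax(corr)) - (len(right) - 1)
    ictd_ms = 1000.0 * lag / sr

    # ICLD: RMS level ratio between the two channels, in decibels
    rms = lambda x: np.sqrt(np.mean(np.square(x))) + 1e-12
    icld_db = 20.0 * np.log10(rms(left) / rms(right))
    return ictd_ms, icld_db
```

As the synopsis notes, non-coincident techniques tend to score “better” on these numbers, yet listeners do not always prefer them once the drums sit inside a song.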
Spring update on Petervan Studios. The previous update was one year ago! It is not that nothing happened – a lot has happened since then. Let’s have a look at what’s on/in my head.
Head measuring device – seen in GUM Science Museum – Wunderkammer of Truth
General Status
Since February 2024, I have disconnected from all social media, including FB, Twitter, and LinkedIn: you won’t find me there anymore.
There have been fewer conversations, but the remaining contacts have become true friends, project & sparring partners.
It is challenging to find budgets for anything that even smells artistic.
On the family front, there was both joy and grief. Joy: Astrid passed the entry exam and started her bachelor’s at the Faculty of Veterinary Medicine of the University of Ghent. Grief: My mother-in-law passed away on 1 June 2024. She was a saint. The mourning set some of the tone for the rest of the year. Join me in wishing my father-in-law, my wife Mieke, and my daughter Astrid strength in dealing with this loss.
Green Green Grass of Hope – Bicycle ride 26 Oct 2024
The Art Studio
The main focus of the art studio was digital. I did a deep dive into what I would call “immersive software”. A deep dive means spending a lot of time in the Unity Editor, following numerous online courses, and doing a lot of little experiments.
Example of Ableton Live with Envelop for Live 3D Source Panner
An example of a simple VCV Rack set-up
Static example from Wave Unstable rule in CAPOW software by Rudy Rucker
Although not intended this way, most of the knowledge and skills acquired culminated in the first and subsequent versions of the New New Babylon performance, and opened the way for other projects. Some of these projects are detailed below.
I did make some analog work, mainly pencil and Chinese ink on paper, and very little with paint on canvas.
Together with some friends, we submitted a SoP24 Protocol Improvement Grant proposal for Conversation Protocols for Humans and Machines. Unfortunately, our team did not make it to the 2024 season of the Summer of Protocols. There were 130 candidates and only 5 residencies available. We learned a lot in preparing the submission material.
Toolmaking for Spatial Intelligence
I followed the DigitalFutures workshop “Toolmaking for Spatial Intelligence” and got my certificate.
Masterclass XR in the Industry
I am following the Masterclass XR in the Industry (an online course with some on-site assignments) at the Howest Academy in Kortrijk (home of Digital Arts & Entertainment, one of the best game schools in the world) and HITLab (Human Interface Technology Lab) until June 2025.
Two of the course modules:
5) Visualizing the unseen (IoT data visualization / digital twin)
6) Virtual control (interaction with machines/robots via XR)
Performances
Performance: Claim Your Cybernetic Word – 17 June 2024
I was invited to do an online performance for the 60th-anniversary conference of the American Society for Cybernetics.
Attendees were encouraged to participate actively by offering cybernetic terminology. We discussed, for example, the Paskian Knobs required to steer the randomness and style of the outcome. The session resulted in more than 300 generated words, consolidated into a word cloud generated on the spot. We also created an AI-generated cybernetic song.
Resulting word cloud
Performance: What Makes Us Human? – 28 August 2024
The cybernetics performance used a new format for creating and delivering content in a dream-state flow. I showed it to Josie Gibson from the Catalyst Network, who invited me to create a similar online workshop, “What Makes Us Human?”
This performance is an engaging, immersive, and poetic screen-foray crafted to elicit compelling language embodying The Catalyst Network’s human dimensions. Attendees are encouraged to offer catalyst and humanistic terminology, visually depicted in an immersive cloud-like interactive video installation and a bespoke soundscape. The session opens with an artistic cinematic dream sequence. Through facilitated brainstorming sessions with the audience, participants can fine-tune word generation. The session closes with a cinematic catalyst song outro “A Woven World of Humans”.
Trailer:
Performance: New New Babylon
This has been my main focus over the last months, and it paid off: I made good progress, and the performance is now available for virtual and physical on-stage delivery.
The New New Babylon performance is a 45-60 minute immersive experience, divided into six chapters—Awakening, Stepping, Flying, Gliding, Folding, and Vertigo. Each chapter explores different facets of the New New Babylon concept, blending art and interaction. Audience members are invited to actively participate, engaging with interactive elements that promote a sense of community and shared creativity. The performance integrates rich soundscapes, video projections, visual art, poetry, masks, VR, and stage props to create a multisensory journey.
To get here, I spent loads of time in the Unity Editor (a software tool to create 2D, 3D, and VR environments, well known in the game development industry), did some in-depth reading on modern urbanism, registered for the masterclass “XR in the Industry”, and partnered with NUMENA, a renowned interdisciplinary creative studio from Germany specializing in award-winning spatial design and programming.
For the next iteration, we plan an API-supported LLM infrastructure to enable live interactions with AI Agents. We aim to facilitate real-time exploration of historical New Babylon research resources during on-stage and online sessions. This infrastructure is currently being developed by Thomas McLeish, Adjunct Lecturer at the Berkeley Master of Design and master creator of the 2018 replica of the Colloquy of Mobiles. Arthur Moelants – a talented young cinematographer and immersive audio expert from Flanders – has also joined our project.
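To give a feel for what such an API-supported setup could look like – a sketch only; the actual infrastructure is Thomas’s work, and the model name, prompt, and the search_archive() helper below are illustrative assumptions – here is a minimal retrieve-then-ask loop in Python:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def search_archive(query: str) -> str:
    """Hypothetical stand-in for retrieval over New Babylon research notes."""
    return "Excerpt: Constant's New Babylon sector maps and writings, 1959-1974."

def ask_agent(question: str) -> str:
    excerpts = search_archive(question)  # ground the agent in archive material
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You answer audience questions during a live performance, "
                        "using only the archive excerpts provided."},
            {"role": "user",
             "content": f"Excerpts:\n{excerpts}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_agent("How did Constant imagine mobility in New Babylon?"))
```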
We submitted the performance as a candidate for the Venice Biennale College 2025, but we did not make it.
Performance: Dream My Dream
The team decided to re-work the New New Babylon performance into a live experience that does not require any hardware (headsets) for the audience. We have renamed the performance “Dream My Dream”.
Trailer
17-minute video simulation of the performance
Dream My Dream is an immersive performance experience in six dream states: Awakening, Stepping, Flying, Gliding, Folding, and Vertigo. A live VR performer embodies an architect-researcher and dreams about the New New Babylon, a speculative future society transformed and eroded by automation, artificial intelligence, and digital technology. The dream explores profound and universal questions about the essence of existence. The audience is invited to interpret the dream in their own unique way.
This artistic performance has a poetic, gentle, and profoundly human touch, evoking a dreamy, Magritte-like surrealism. The atmosphere is calm and harmonious, steering away from Sci-Fi or dystopian themes toward a non-aggressive, understated, and subtly utopian vision.
We submitted “Dream My Dream” to the Cannes Festival Immersive 2025 competition but did not make it to the final ten.
We are now revisiting the synopsis and tagline of this performance and making some adaptations to the treatment with the ultimate goal of premiering at one of the major film/immersive festivals.
Artistic Research Project: New New Babylon
The performance is one of the deliverables of the main artistic research project. There has been renewed interest from several parties in partnering on the main New New Babylon project.
We are looking for a team/consortium to overlay an existing city (district) with a VR environment for A/B Testing of the urbanistic, economic, and governance aspects.
The deliverables of Phase-1, the Vision-Phase, are:
A beta version of an Urbanistic Artistic Rendering VR Environment, inspired by an existing or planned City or Real Estate project
Artistic Performance (minimum Online, ideally IRL), see above
Art Expo (minimum Online, ideally IRL)
Art Book
Stealth
A new project, very embryonic, written and directed together with Andreea Ion Cojocaru, complemented by my cousin (a 17th-century art expert) and a world-renowned artist as the MC.
The project is an experimental alternate reality experience about the nature of flesh, human suffering, and technological advances. It seeks to find an answer to the question: “Who are the new Gods that can deliver us from suffering?”
It is a surreal experience about seeing the truth, unfolding in real time over a three-year timeline. The project involves participants in the preparation, execution, and aftermath of a fictitious latest ecumenical council.
The multi-year project entails a prelude phase as a mockumentary in 360 video, audio, and VR, followed by an in-person council, concluding in new canons and a summarising mockumentary.
At the moment of writing, we are finalising the pitch.
A theory of space/time dimensions
The analog gnarly curved art projects, the 3D software learnings, the few conversations, a couple of computation and math books about flat and 3D dimensions (see the books section below), and especially the Stealth project led me to fantasize about a multi-dimensional (not multiverse) world that is suddenly revealed and changes everything we know about science.
One of the fantasies envisions a world encircled by a Saturn-like ring of knowledge-infused water, unveiling stereoscopic windows into higher and lower dimensions of time and space—if such concepts even exist.
Don’t take anything here too seriously. To quote Rudy Rucker: “I am a science fiction writer and the secret of science fiction is pile on the bullshit and keep a straight face.”
Delicacies
“Delicacies” is my incoherent, irregular, unpredictable collection of interesting sparks I came across online. Handpicked by a human, no robots, no AI. A form of tripping, wandering, dérive, with some loosely undefined theme holding them together. Delicacies have no fixed frequency: I hit the publish button when there is enough material. That can be after a week or after three months. No pressure, literally.
Check out the May, June, September 2024, and February 2025 editions here.
Books
Highlights:
Geometry, Relativity and the 4th Dimension – by Rudy Rucker (1977)
Art, Technology, Consciousness: Mind@Large – by Roy Ascott (2000)
Love & Math – by Edward Frenkel (2013)
Behave – by Robert Sapolsky (2017)
Mind in Motion – by Barbara Tversky (2019)
Soft City – by David Sim (2019)
and re-reading The Lifebox, the Seashell, and the Soul – by Rudy Rucker (second edition 2016)