Monster Voice II - Split Tongue
live audiovisual performance / 4-channel audio


AUG 25 - 26, 2023
⟪Sounds On Showcase⟫
Box theater, Seoul Art Space Mullae,
Seoul, South Korea
(Project grant from the Seoul Foundation for Arts and Culture)

Program Note
⟪Monster Voice: Split Tongue⟫ is a live recitation composition of a spoken language dedicated to female pain, one that may become a new method of communication for women who hurt. A new sound language based on phonetic symbols, designed using SuperCollider, is read aloud through custom-built, body-worn Arduino instruments.

The pain of a hurting woman is difficult to convey to others. Instead of language, the content of pain has been vocalized through whimpers, cries, screams, or silence. Following in the footsteps of generations of women who have attempted to verbalize their pain, the language dismantles the existing [consonant+vowel] language system. Instead, it adopts a [sound+vowel] structure to interpret the writings of women who came before us. At the same time, the language is an attempt to vocally communicate the chronic pain of the composer/performer herself, who suffers from fibromyalgia. The process of physically translating the new language, Monster Voice, through the custom-built body-worn instruments revives and gives voice to the pain of the composer/performer and of the women of previous generations.

Preface: A Translative Performance That Readily Fails to Translate by Donghwi Kim

Techniques Employed
SuperCollider
Praat (linguistic program for formant analysis)
Python
TouchDesigner
   - Interactive projection
   - Interactive LED lighting
Arduino
Fabrication
   - 3D printing
   - Acrylic laser cutting
   - Soldering

Credit
Director・Sound: Youngjoo Jennifer Ryu
Performance: Youngjoo Jennifer Ryu
Text Editing・Preface: Donghwi Kim
Visual・Technical Support: Loksu
Physical Computing: Jinsu Eun
Graphic Design・Translation: Soyoung Chong
Sound Engineer: Yeabon Jo
Stage Director: Sinsil Lee
Costume: Stromovka / Saesam Hwang
Video Documentation: Changgu Kim
PD: Chaeyoung Lee
Special Thanks to: Min Oh, Isak Han, Hoonida Kim, Jeho Yun, Joosun Hwang
Sponsor: Seoul Foundation for Arts and Culture

Production Details
Goals
  1. Transforming the initial audiovisual work: convert Monster Voice I, from the research 'Monster Voice: Reinventing a Lost Language by Building a New Language,' into an interactive audiovisual live performance.
  2. Building digital musical instruments: create digital musical instruments to play “Monster Voice,” a new sound language for monsters with otherness.
  3. Including visual interactivity: add visual elements to the performance that interact with the instruments.
  4. Projecting the performer’s image: use sharkstooth fabric in front of the performer to project and transform her image in real time and make the image interact with the instruments.
  5. Creating an immersive experience: apply 4-channel sound and sound-visual-lighting interaction so that the audience experiences the time and space of the piece.

Improving Monster Voice I
Building instruments
I aimed to create an instrument that could take the next step in real-time audiovisual performance, overcoming Monster Voice I’s limitation of not showing the real-time translation of English text into ‘Monster Voice.’

Communicating with the audience
I emphasized the narrative and used the text as a live performance score, projecting it as part of the visuals to help the audience understand ‘Monster Voice’, which was difficult in the previous video due to the lack of on-screen text.

Emphasizing interactivity
In Monster Voice I, although the tape in Ableton Live and the visuals in TouchDesigner were integrated in real time, the fixed recording made it difficult to convey the actual interaction, and its stereo channels prevented the use of 4-channel audio. To solve these problems, I developed an instrument that directly interacts with sound and visuals, enabling the audience to intuitively understand how the performer's gestures affect them.

For visuals, I collaborated with a TouchDesigner specialist and built the performance to incorporate feedback from test performances. The performance consisted of lip-syncing a set text sequence. During the beta test performance, I received feedback that it was difficult to see the shape of the mouth because of the relatively large distance between the audience and the stage and because the mask instrument covered the face. To resolve this, we used visual assets to highlight facial gestures and the relationship between mouth opening and sound.

I utilized 4-channel sound and, via TouchDesigner, mapped sound amplitude to RGB values (0-255) on LED lights. LED panels placed next to the four speakers moved between off and fully lit depending on the volume and position of the sound, so the interaction was reflected throughout the venue.
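The amplitude-to-LED mapping can be sketched roughly in Python (the normalization and the function names are my illustration; in the piece this happens inside TouchDesigner):

```python
def amp_to_rgb(amplitude):
    # Clamp a normalized amplitude (0.0-1.0) and scale to an 8-bit value.
    level = max(0.0, min(1.0, amplitude))
    v = round(level * 255)
    # White light whose brightness follows the channel's loudness.
    return (v, v, v)

def frame(channel_amps):
    # One RGB triple per speaker/LED-panel pair (four channels).
    return [amp_to_rgb(a) for a in channel_amps]

print(frame([0.0, 0.25, 0.5, 1.0]))
```

Silent channels leave their panel dark while a loud channel lights its panel fully, which is what lets the audience localize the sound spatially.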

Text as Score
Previously, I used excerpts from Maja Lee Langvad's poetry, but to broaden my research, I collaborated with a text editor to create a script more connected to my personal experiences with fibromyalgia, making the narrative more personal.

The text is divided into two parts, resembling a child learning language from her mother. In Part 1, the performer learns ‘Monster Voice’ using texts from women writers (Hak Kyung Cha, Soo-young Hong, Adrienne Rich, Clarice Lispector, Nella Larsen) who attempted to verbalize their suffering. In Part 2, the performer becomes proficient and tells her own story through a poem written by the text editor after interviewing me.

Sound Composition
The sound of this piece went through a process of translating the English text into phonetic symbols and building a sequence of corresponding sound functions in SuperCollider, establishing a close relationship with the text.

I started by creating a sound function (SynthDef) for every IPA phonetic symbol and a list of Synths with varied parameters of each sound function. The edited English text was converted into phonetic symbols, and the corresponding sound sequences were created in SuperCollider using a Routine and the .yield method, ensuring that the sequence only advances once a trigger signal is received.
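The trigger-gated sequence behaves much like a Python generator: each .yield hands back one sound and pauses until the next trigger arrives. A rough sketch of the idea (the SynthDef names and parameters below are invented for illustration):

```python
def monster_routine(events):
    # Mirrors a SuperCollider Routine with .yield: each call to next()
    # corresponds to one incoming trigger and advances the sequence
    # by exactly one phoneme sound.
    for synthdef_name, params in events:
        yield synthdef_name, params

seq = monster_routine([
    ("vowel_a", {"freq": 220, "amp": 0.8}),   # invented parameters
    ("cons_s",  {"freq": 6000, "amp": 0.4}),
])
print(next(seq))  # first trigger -> first sound
print(next(seq))  # second trigger -> next sound
```

The key property is that time is controlled by the performer's triggers, not by a clock: without a trigger, the sequence simply waits.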

In Part 1, the sequence was manually written in SuperCollider. Writing Part 1 let me establish the rules for generating Part 2, which was complex enough that I collaborated with a Python developer. I organized the text into a matrix in Excel, created a superset of the vowels and consonants used in Part 2, filled in the sound parameters, and described the generation rules of the 'Monster Voice' language. The developer then generated the SuperCollider sequence, which I fine-tuned to achieve the desired musicality.
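As a miniature of that pipeline (the column names, phonemes, and parameter values here are invented, not the actual matrix):

```python
# Each row of the Excel matrix pairs a word with its phonetic symbols
# and sound parameters; expansion flattens it into an ordered event
# list from which the SuperCollider sequence can be generated.
matrix = [
    {"word": "pain",  "phonemes": ["p", "eɪ", "n"],      "amp": 0.8},
    {"word": "hurts", "phonemes": ["h", "ɜː", "t", "s"], "amp": 0.6},
]

def expand(matrix):
    events = []
    for row in matrix:
        for ph in row["phonemes"]:
            # One event per phonetic symbol, inheriting the row's params.
            events.append({"synth": ph, "amp": row["amp"]})
    return events

print(len(expand(matrix)))  # 7 events for the two words
```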

The text matrix was essential for the necklace instrument to read sequences in order, synchronizing speech gestures with sound. The instrument triggered a sound when the mouth opening exceeded a threshold and stopped it when the mouth closed, simulating speech.
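That open/close gating can be sketched as a small state machine (the threshold value is illustrative; the real instrument reads a slide potentiometer on the chin):

```python
class MouthTrigger:
    # Emits "start" when the mouth opening first exceeds the threshold
    # and "stop" when it first drops back below, mirroring how the
    # necklace instrument gates each sound in the sequence.
    def __init__(self, threshold=0.3):
        self.threshold = threshold
        self.open = False

    def update(self, opening):
        if not self.open and opening > self.threshold:
            self.open = True
            return "start"
        if self.open and opening <= self.threshold:
            self.open = False
            return "stop"
        return None  # no state change; the current sound continues

trig = MouthTrigger(threshold=0.3)
print([trig.update(v) for v in (0.1, 0.5, 0.6, 0.2)])
# -> [None, 'start', None, 'stop']
```

Keeping the trigger stateful means small jitter around the threshold while the mouth is held open does not re-fire the sound.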

Effects
The conductive rubber cord on the mask instrument is connected to a filter bank in SuperCollider that affects only the vowel SynthDefs. The filter bank is built by combining several DynKlanks on a bus applied only to the vowel SynthDefs. Changing the cord's value alters the filter frequency, letting the performer manipulate the pitch of the vowel being played. The effect gives the impression of a grunt or a scream in pain. Part 2 adds an effect, triggered by a pressure sensor, that reduces the consonants' amplitude and replaces them with two high-pitched sine waves around 9500 Hz.
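The cord-to-frequency mapping might look like this in Python (the frequency range, the exponential curve, and the 10-bit sensor range are my assumptions; the actual mapping lives in the SuperCollider filter bank):

```python
def cord_to_freq(raw, lo=200.0, hi=2000.0, raw_max=1023):
    # Map a conductive-cord reading (0..raw_max, as from an Arduino
    # analog input) onto a filter frequency. The mapping is
    # exponential so equal stretches feel like equal pitch steps.
    x = max(0, min(raw_max, raw)) / raw_max
    return lo * (hi / lo) ** x

print(cord_to_freq(0))      # unstretched cord -> lowest frequency
print(cord_to_freq(1023))   # fully stretched -> highest frequency
```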

Communication Methods
Because SuperCollider couldn’t easily accept Arduino serial inputs, the TouchDesigner expert and I set up TouchDesigner on the performer’s laptop to accept the Arduino inputs and relay them to SuperCollider via OSC. A separate computer, belonging to the TouchDesigner specialist, handled visual projection and LED interaction, with OSC communication between all devices over LAN ensuring real-time interaction. Within SuperCollider, I structured communication between components using Buses and Groups, and coded OSC messages to send amplitude data to the visual desktop.
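On the wire, each OSC message is just an address string, a type-tag string, and big-endian arguments, all padded to 4-byte boundaries. A minimal stdlib encoder for the kind of amplitude message described above (the address is hypothetical):

```python
import struct

def osc_pad(b):
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address, *floats):
    # Encode an OSC message with float32 arguments (big-endian),
    # e.g. per-channel amplitude values sent to the visual machine.
    msg = osc_pad(address.encode())
    msg += osc_pad(("," + "f" * len(floats)).encode())
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

packet = osc_message("/amp", 0.5)  # hypothetical address
print(packet)
```

In the performance this encoding is handled by SuperCollider's and TouchDesigner's built-in OSC support; the sketch only shows why any OSC-speaking device on the LAN can join the network.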

Building Gesture-based Instruments
I began the prototype design by deciding which gestures I wanted to use, and after three prototypes the current version was born. The gestures were designed to be everyday gestures rather than musical ones:

  1. Speech Gesture: opening/closing the mouth to start/stop a sound as if the performer is speaking.
  2. Frustration Gesture: hand movements that pull the mouth area wider open, expressing the frustration of failed communication, and rubber bands that simulate bouncing or stretching an instrument.
  3. Pressing Gesture: pressing/stretching a sore spot, representing physical pain.

Using a slide potentiometer, tension spring, conductive rubber band, and pressure sensor sheet, I tested simple prototypes for each part (necklace, mask, and waist sensor) and refined them into a functional version adjusted for the desired gestures and the performance’s overall aesthetic. I drew up the prototype blueprint, ordered the acrylic laser-cut parts, and 3D-printed the chin component for the final necklace instrument.

For the masks with four conductive rubber cords, my fabric collaborator modified hat linings and fishnet stockings and attached the cords. To prevent abrupt resistance changes when the cords make contact, two of the rubber cords were wrapped in cloth.

To improve practicality, I modularized the instrument into three parts, easing connection and disconnection of wires from the board on the back of the mask and reducing the risk of damage when putting it on and taking it off.

Creating Interactive Visual Projection
I set up a system for the visual expert and myself to work independently. I shared the text sequence matrix with the visual collaborator so he could focus on the necessary parts of the 40-minute performance. From mid-July, we met weekly to record data, enabling interactive experimentation with visuals synced to sound via TouchDesigner and SuperCollider. We also built a network of instruments, laptop (sound), desktop (visual), and monitor (used as a prompter when lip-syncing; effectively the score).

During a test live audio performance of Part 1, I received feedback that it was hard for the audience to see that the performer was triggering the instrument by opening her mouth or lip-syncing, due to the distance between the stage and the seats. Some mistook the instrument for a vocoder, and the stretching of the conductive cords was hard to see. Initially, I had planned to project a 3D-modeled monster onto a translucent screen overlapping the performer, but realized it would worsen the issue. We decided to use the projection differently to better serve the performance and address the audience's feedback.

Since speech, mouth movements, and gestures are crucial, we mounted an infrared webcam in front of the performer and fed its image live into TouchDesigner as the video source, projecting a larger image of the performer manipulating the instrument. The visual collaborator applied effects synchronized with gestures and sounds. For example, triggering the necklace instrument would change a visual effect, and pulling the rubber cord would maximize it: an out-of-focus silhouette becomes sharper, or an inverted black-and-white face gains layers. Pressing the pressure sensor fragmented the visual assets in the projection, reflecting the effect of moving away.

To achieve this, I provided the visual collaborator with multiple references and effects as a guide. Rather than using much color, we focused on creating a black-and-white CCTV footage aesthetic. In a dark and heavy atmosphere, LED lights in front of each speaker respond to the amplitude of the four channels.

I tested the materials for the semi-transparent screen at the Dongdaemun fabric market, evaluating them with the projector. After selecting the fabric, I ordered the screen for the performance based on our theater visits and simulations.

Text Score
Part 1: The Dream of a Common Language
It hurts. It hurts like hell.
But it doesn't matter, if no one knows.

There’s something inside of me that hurts.
A twofold restraint with no exit.

When the pain comes, I became a different body every day.

What am I in this instant?
Am I a monster or is this what it means to be a person?

When the pain comes, I flow everywhere.
Saying nothing against the pain to speak.

It festers, it festers inside.

Let me be an object that screams.

I am the living mind you fail to describe in your dead language,
the lost noun, the verb surviving only in the infinitive.

I am an object that screams.
I am an urgent object.

No one lives in this room.
No one sleeps in this room
without the dream of a common language.

Part 2: What Kind of Beast Would Turn Its Life into Words?
When I opened my mouth, pieces fell out of my body
Shards of glass out of my side
Slivers of steel out of the chest

The doctor threw them in the big trash can
More and more hands on my back weighed me down
As if they didn’t want to fall
As if they didn’t want to die

Why am I sick
How long will I be sick
I ask and the doctor doesn’t answer

Where does it hurt
How much does it hurt
The doctor asks and I can’t answer

Is there a ruler to measure pain?
Can you also draw the marks on your body?
Do screams similar to mine come from your marks?

The doctor threw my questions in the trash can
along with my diagnosis
ㅡPlease take with you what is yours

Holding the lidless huge trash can
labeled <Indescribable>
I walked out of the doctor’s office

The can was filled with pieces of the dead
Sick bodies are made of pieces, not words
As I made a tongue out of the borrowed pieces
it made a beautiful sound

The sound of a blade piercing a full balloon
The sound of a rice cooker bursting and white flesh spurting

The pain living in the right body loved that sound
The pieces danced and clapped, until one day
the left side of the body, where the words lived, was erased from the map

In the body I no longer inhabited
The screams of the dead moved in, lived happily ever after

A map drawn out of the pain
became a score I never wrote

Pieces gush out of my split tongue
Hands on my tendons strum me
Who is playing my body?
Who is beating my body?

Hands burrowed in my joints grab me

As if it doesn’t want to fall
As if it doesn’t want to die



Compiled Excerpts of Part 1 from
- Hak Kyung Cha, Dictee
- Soo-young Hong, Body and Words
- Adrienne Rich, Planetarium
- Adrienne Rich, POWER
- Adrienne Rich, ORIGINS AND HISTORY OF CONSCIOUSNESS
- Clarice Lispector, The Stream of Life
- Clarice Lispector, A Hora Da Estrela
- Nella Larsen, Passing




2025 Youngjoo Jennifer Ryu