
Conference abstracts - Day 2

Session 4A: History

Laughter in Japanese sound performances as a shared conceptual point

Mikako Mizuno, Nagoya College of Music

Some remarkable Japanese sound artists of the 21st century share conceptual links with earlier artists like Group Ongaku, Taj Mahal Travelers, and Akio Suzuki, who began their activities in the 1960s. Focusing on Methodicism (Hoho-shugi), a group of Japanese media artists that includes Masahiro Miwa, this paper discusses a sense of laughter as a key concept.

Miwa defines Reverse-simulation music in terms of three aspects: rule-based generation, interpretation, and naming. For Miwa, Reverse-simulation music is the only feasible and practical way to restore “discipline” in contemporary music. Restoring “discipline” is a serious artistic topic as well as an absurd rule that regulates human bodies. Performance based on such non-human rules leads to a sense of laughter.

Motoharu Kawashima asserts that laughter is the foundation of human culture, citing Aristotle. This concept forms the basis of Kawashima’s action music, which focuses on what actions are involved in producing a sound rather than on what kind of sound will ultimately be produced.

Both the reversed situation in Miwa and the deviation of meaning in Kawashima led to laughter from the audience, and the composers searched for laughter as the foundation of their art.

A sense of laughter has been accepted by the next generation of Japanese sound artists. Taro Yasuno, a former student of Miwa, has developed three series of works: Search Engine, MISUCINEMA, and Zombie Music. All three evoke laughter due to the simplicity and absurdity of the performers' actions. Yasuno’s Zombie system is a conceptual instrument that critiques the relationship between technology and the human body in the present day. Zombie Music is a highly developed and effective system for visualizing the physical labor necessary for sound production.

Electronic Studio Basel and 50 years of vivid, fragmentary history

Tatiana Eichenberger, University of Applied Sciences and Arts Northwestern Switzerland, Basel University of Music, Research Department

The first electronic studio of a Swiss educational institution was founded in 1975 at the Basel University of Music. The Electronic Studio Basel (ESB) has since written history not only with its recognised directors David Johnson, Thomas Kessler, Hanspeter Kyburz, Erik Oña, Volker Böhm and Svetlana Maraš, but also with guest lecturers such as Guy Reibel, Françoise Barrière, Herbert Brün, Henri Pousseur, Michel Waisvisz, Mesias Maiguashca, Donald Buchla, Mauricio Kagel and Vinko Globokar. The studio has continuously built its international presence by organising its own festivals with renowned guests, as well as through guest appearances at concerts and festivals worldwide.

With more than 50 years of history, the Electronic Studio Basel has witnessed and shaped a significant part of the history of electroacoustic music. Emerging in the analogue era of electroacoustic music, it participated in the transition to the digital age, which is why technologies and composition methods from both domains are practiced and taught there today. The studio therefore looks back on a long tradition of training composers of electroacoustic music, audio designers and sound artists. Always focused on its primary goal of educating the next generation of audio professionals, while also pursuing lively production and concert activities, it has never taken on the function of an archive. Historical awareness has grown over the last few years in light of the studio’s long history.

On the occasion of the 50th anniversary of the Electronic Studio Basel, the Research Centre of the Basel University of Music, in collaboration with the Electronic Studio, launched a project at the end of 2024 with the aim of researching, compiling and writing the studio's history and creating a studio archive that encompasses the past but will also serve in the future.

In my proposed paper I would like to highlight the following three aspects: 1. the genesis of the Electronic Studio Basel, the circumstances of its foundation and key milestones in its history; 2. the positioning of the studio in the international historical context as an educational institution and creative production site for electroacoustic music; 3. the challenges of historical research and the reconstruction of a fragmentary history.

Foundation and milestones of the studio

The foundation of the studio was accompanied by certain challenges: the efforts of Gerald Bennett, then director of the conservatory of the Basel Music Academy (the precursor to the University of Music), to establish a studio in 1972 were unsuccessful. The exact reasons for this have yet to be clarified by research. As early as 1972, the initiators were in contact with renowned electronic studios such as those of the Westdeutscher Rundfunk (WDR) in Cologne (David Johnson), the Institute of Sonology in Utrecht (Michael Koenig) and the Groupe de Recherches Musicales (GRM) in Paris, as well as with private companies such as Electronic Music Studios (EMS) in London, Philips and Hewlett Packard.

The history of the Electronic Studio in terms of training audio professionals reaches back even further, to the 1950s: in 1950, the first school for sound engineers (Tonmeisterschule) in Switzerland was set up at the Music Academy; it was run by the radio sound engineer and teacher of acoustics Max Adam until his retirement in 1970. The training was organised in cooperation with the Basel radio studio and the vocational school. The connection with the electronic studio still needs to be explored.

In October 1975, David Johnson, a former assistant of Karlheinz Stockhausen, was appointed head of the newly founded studio. The first piece of equipment purchased for the new studio was the MCI JH-16, a 16-track tape recorder, which set the standard of professionalism for the studio's basic equipment. Further devices soon followed. In 1984, with the new director Thomas Kessler and the planned relocation of the studio to the campus of the Music Academy, preparations began for a completely new studio facility. The production processes were gradually digitalized. The studio provided courses in electroacoustic and computer music for students at the Music Academy, but did not yet offer its own diploma programme. This changed in 1995 with the opening of the audio design degree programme.

Many puzzle pieces of the multi-layered history of the studio must be put together. Many questions remain unanswered at the moment: How was the Tonmeisterschule related to the founding of the Electronic Studio? Why was the founding of the studio not approved in 1972? Why did it take 20 years before a degree programme for the studio was officially established?

Positioning in an international landscape

The Electronic Studio Basel is part of the tradition of electronic studios that have been established at music academies and universities around the world since the late 1950s. It represents one of the four types of electronic studio, alongside studios run by broadcasting corporations, companies and private individuals. Together with the broadcasting studios, it belongs to the category of state-funded electronic studios. In contrast to the electronic studios at the broadcasting corporations, however, its focus has always been on the training of audio professionals. Through its activities, the Electronic Studio Basel has attempted to establish itself as a serious player among the prestigious studios for electronic music - that is, to position itself in the international landscape as a production site of electroacoustic music alongside its core task of education. What challenges were associated with the accomplishment of this task?

With David Johnson and Thomas Kessler as directors, the studio focused primarily on live electronics. This changed when Erik Oña took over the directorship in 2003, and today both areas are equally important. From the very beginning, the studio was characterised by a certain pragmatism, relying on small, flexible facilities rather than the large apparatus of the broadcasting studios. As a result, it became a valued partner and received numerous invitations to concerts and festivals, as evidenced by the many concert tours it undertook, often carrying almost all of its equipment with it. The studio also focused on the historically informed performance practice of pieces with live electronics, for example Karlheinz Stockhausen's Mixtur (1964) in 1994.

The challenges of historical research

From 50 years of vivid and productive activity by a studio that has never been an archive and has never written its own history, many fragments remain - puzzle pieces that now have to be put together to form a bigger picture. Several moving boxes filled with various documents and media from the studio's holdings remain. These include student works, scores, application letters and portfolios, and correspondence, as well as DAT cassettes, CDs, VHS tapes and floppy discs. Another part of the studio documents, from around 2003 onwards, is available on hard drives and external disc storage devices. A large part of the documents from the earlier period is held in the archive of the University of Music. There are numerous analogue instruments in the studio that actually belong to the archive but are still in productive use. Furthermore, there are the memories of the numerous people who have frequented the studio over the past 50 years: former students, former teachers and staff, as well as guest composers, performers and lecturers. The archive is therefore spread across several physical and digital locations and must be organised in a sustainable form that serves both the past and the future.

In my proposed contribution I would like to offer an insight into my ongoing research and to discuss with conference participants their own challenges and experiences in archival research, preservation, and the writing or revising of the history of electroacoustic music.

The Experimental Core: How LADIM-USB builds Venezuelan Electroacoustic Music

Luis Ernesto Gómez, Universidad Simón Bolívar

Diego Morales, Universidad Simón Bolívar

Electronic music started in Venezuela 60 years ago: in connection with the organization of the 3rd “Festival Latinoamericano de Música”, the “Fonology Studio” was built in Caracas (1966). The laboratory's first director was the Chilean composer José Vicente Asuar, and the first Venezuelan composer to use those spaces – and to compose the first electronic pieces ever written in Venezuela – was Alfredo Del Mónaco. The “Studio” underwent several restructuring processes marked by a lack of funding and weak maintenance protocols; nonetheless, it mobilized and modelled a concrete inclination toward sound synthesis in the country, with relevant outcomes. Although it developed alongside public and private ventures, the electronic initiative remained within the systematic paths of academia, notably in the composition class at the “Conservatorio de Música ‘Juan José Landaeta’” (Segnini, 1994; Noya, 2007) conducted by the Argentinean composer Eduardo Kusnir. This dynamic movement influenced the founding of ensembles like ‘Ensamble Nova Música’, the creation of the ‘Venezuelan Society of Electroacoustic Music’ (1984), intended to represent the country at the Festival International des Musiques et Créations Électroniques de Bourges, and the ‘Documentation and Acoustic-Music Research Center’ at the Universidad Central de Venezuela.

In a country with a marked tendency toward acoustic composition for symphonic and solo instruments (violin, piano, guitar, or clarinet), the electronic vocation has emerged as an occupation not practiced by all composers. In this sense, electroacoustic music in Venezuela thrives on the hard work of small groups, supported by individuals and a few institutions, among them the ‘Digital Music Laboratory’ at Simón Bolívar University.

The ‘Digital Music Laboratory’ (LADIM) is located at the heart of Simón Bolívar University in Caracas, Venezuela, and has played a leading role in the curation and performance of electroacoustic and technology-mediated music at the most prestigious international contemporary music festivals held in Venezuela (Latin American Music Festival 2004-2016 and Atempo 2006), as well as at others held in Argentina and Mexico. In addition to organizing independent concerts and events, training composers and musicians, and producing a research corpus within the Master of Music program at Simón Bolívar University, it has shared events in Venezuela with institutions such as the Swedish Electronic Music Society (SEAMS) and the Center for Computer Research in Music and Acoustics (CCRMA, Stanford). Throughout its 22-year history (2003-2025), LADIM has helped shape the contemporary musical landscape of the 21st-century Venezuelan scene, enabling the creation of a repertoire of 75 electroacoustic works by composers associated with LADIM, including acousmatic, mixed, video, and live electronic pieces. Currently, it is the only electronic music laboratory associated with a Master's program in music composition and interpretation and, consequently, the only one with such vibrant activity. LADIM's electroacoustic practice covers more than a third of the entire history of electronic music in Venezuela (1966-2025), so it is important to consider and reflect on the laboratory's efforts and challenges and to envision the future of its continuity.

This presentation is part of the research themes of the history of electroacoustic music in Latin America, taking as a case study the experience of a Venezuelan university, and of the pedagogy of computer-assisted composition. On the one hand, it reviews the genesis, development, and activities of an electronic music laboratory, specifically LADIM-USB, which has maintained a central connection with an academic institution (Babbitt, 1960) and its graduate program in music, and identifies its historical milestones. Additionally, we present the balance between compositional practice and theory, based on the training materials used and the programming languages learned, with the aim of strengthening the academic offering. We also examine the literature produced in order to deduce practices and strategies, as well as the creation of an online digital library to showcase and raise awareness of the material produced. Both thematic axes share a common concern: the curricular renewal of existing programs and the connection and commitment to the repertoire produced in the laboratory, with a view to the short-term implementation of a specialization program oriented to this field of study within Simón Bolívar University. This program can address the creative vocations of Venezuelans and Latin Americans and can open new avenues for experimentation in collaboration with similar laboratories around the world.

“Qi,” “Dao,” and Tape Music: The Philosophy of Technology in Zhang Xiaofu's Dialogue Between Different Spaces

Jiamin Sun, Peking University, School of Arts

Chinese electronic music emerged in the mid-1980s, nearly half a century after the West, leading Chinese composers to bypass musique concrète and tape music and to enter directly into digital production techniques. Zhang Xiaofu made an unconventional technological choice in his early work Dialogue Between Different Spaces (1992–93), however: while his contemporaries were embracing digital composition, Zhang revisited Pierre Schaeffer’s model of musique concrète from the 1940s-50s. With this ten-minute suite for magnetic tape, Zhang did not simply ‘return’ to musique concrète; he conceptualized tape as a ‘container’ for spatial manipulations influenced by Chinese metaphysical thought. In the suite’s first part (“Earth and Heaven”), for instance, he used cross delay effects between channels to echo the dialogue between “earth” and “heaven,” depicting the “complementary generation of Yin and Yang.” For Zhang, tape went beyond carrying Chinese instrumental sounds into forming a dynamic medium for the exploration of Chinese philosophical concepts.

Alongside my analytical account of Dialogue Between Different Spaces, I will consider the work’s broader implications for the historiography of Chinese electronic music. Chinese-language scholarship has tended to treat “technology” as a Western phenomenon and “aesthetics” as an Eastern issue. I challenge this model by turning to a Chinese philosophical account of technology, focusing on the concepts of “Qi” and “Dao,” as developed by Xu Yu. Xu’s work establishes a theoretical perspective for understanding Zhang’s attempt to link “Qi” and “Dao” (器道合一) as an integration of technical objects (器) with cosmic order (道).

Session 4B: Transcription and Representation

A Paradigm Inverted: From Sonic Capture to Instrumental Rematerialization in Mixed Music

Keita Matsumiya, Kyushu University, Nagoya City University

This research introduces a method of mixed music composition that translates environmental sounds and electroacoustic material into real-time musical notation, enabling immediate performance on a Yamaha Disklavier. By algorithmically extracting features such as pitch contours, rhythmic gestures, and dynamic envelopes, non-instrumental sonic sources—particularly environmental soundscapes—are recontextualized as instrumental music through score-based mediation. The generated notation is transmitted directly as MIDI data to the piano, creating a feedback loop in which contingent sound events become instrumental action.
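
As a minimal illustration of the kind of feature-to-notation mapping described above (and not the authors' implementation), the following Python sketch assumes frame-wise pitch and loudness estimates as input and quantizes them into MIDI-style note events of the sort that could be streamed to a Disklavier; the frame duration, silence threshold, and velocity scaling are illustrative assumptions.

```python
# Hypothetical sketch: mapping frame-wise pitch/loudness estimates to MIDI-like
# note events, as one possible reading of the feature-extraction step described
# above. Not the authors' implementation.
import numpy as np

def frames_to_notes(freqs_hz, rms, frame_dur=0.05, silence_rms=0.01):
    """Quantize per-frame (frequency, RMS) estimates into (note, velocity, onset, duration) events."""
    events = []
    current = None  # (midi_note, velocity, onset_time)
    for i, (f, a) in enumerate(zip(freqs_hz, rms)):
        t = i * frame_dur
        if a < silence_rms or f <= 0:
            note = None
        else:
            note = int(round(69 + 12 * np.log2(f / 440.0)))   # Hz -> MIDI note number
        if current and note != current[0]:
            events.append((current[0], current[1], current[2], t - current[2]))
            current = None
        if note is not None and current is None:
            velocity = int(np.clip(a * 400, 1, 127))           # crude RMS -> velocity map
            current = (note, velocity, t)
    if current:
        events.append((current[0], current[1], current[2], len(freqs_hz) * frame_dur - current[2]))
    return events

# Toy example: a rising glissando captured as three quantized notes.
freqs = [220.0] * 10 + [246.9] * 10 + [261.6] * 10
rms = [0.1] * 30
print(frames_to_notes(np.array(freqs), np.array(rms)))
```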

The Disklavier is further hybridized with exciters attached to its soundboard, allowing sonic components resistant to transcription to be reproduced through the piano’s resonance. This dual configuration produces a layered sound world where notated and unnotated, instrumental and electronic, symbolic and algorithmic elements coexist and interact. The piano thus functions simultaneously as a score-performing instrument and as a resonant body for electroacoustic playback.

By inverting the conventional paradigm of mixed music—traditionally live performers responding to fixed electronics—this project positions electronic and environmental material as score-producing agents rather than backdrops. The approach expands compositional strategies for engaging with sound environments, opens possibilities for hybrid instrument design and algorithmic practice, and reframes what it means to play, write, and listen within a system where machine, environment, and score converge into a unified compositional entity.

Flow and Form as Modulation: Henri Pousseur's 8 Études Paraboliques and the Transformation of Electronic Music

Paulo C. Chagas, University of California, Riverside

Ivana Petković Lozo, University of California, Riverside

Henri Pousseur’s 8 Études Paraboliques (1972) mark a pivotal moment in the history of electronic music, reshaping its aesthetic foundations and compositional strategies. At a time when musique concrète and elektronische Musik focused on fixed sound objects and discrete variations, Pousseur advanced a radically different vision: sound as continuous modulation, a fluid and evolving field rather than a collection of stable entities.

Through voltage-controlled synthesis enabling real-time transformations of frequency, timbre, amplitude, and spatial diffusion, he shifted emphasis from constructing sonic objects to articulating dynamic processes. His concept of generalized periodicity, expressed through parabolic forms, created trajectories that unfold organically and unpredictably, privileging immersion and perceptual fluidity over formal closure. Études such as Ailes d’Icare and Mnémosyne disparue exemplify this approach, where pitch, timbre, rhythm, and space interact in multilayered and nonlinear flows.

Historically, the Études anticipate developments in algorithmic composition, live electronics, and immersive audiovisual practice. Aesthetically, they reject representational functions of music, instead proposing sound as a relational and emergent field of meaning, resonating with phenomenological and semiotic perspectives on listening.

Historical Performance Practice in Electroacoustic Music through Restoring Lost Media (“Interface Concerto” by Shigenobu Nakamura as a Case Study)

Hyunmook Lim, Tokyo University of the Arts

This paper discusses a case study of Japanese electroacoustic music that has become technologically obsolete and consequently unperformable in the present day. While Japan produced numerous electronic and electroacoustic works throughout the late twentieth century, the majority of these compositions face significant challenges in being realized with contemporary technologies. This difficulty arises largely because the majority of works were not systematically archived or professionally managed, resulting in the obsolescence and disappearance of the specialized hardware and software on which they depended.

In response to this situation, the author has undertaken a project dedicated to the restoration of such unperformable compositions. As a case study of this project, this paper focuses on the restoration of Shigenobu Nakamura’s (中村滋延, 1950–) “Interface Concerto” (インターフェイス協奏曲) for Keyboard and MIDI System (1992). This work originally employed custom-developed MIDI devices, specialized sound modules, and composition software. Since none of these tools remains operational as of 2025, “Interface Concerto” effectively became a lost work. The present study aims to re-establish its performability by analyzing the composition, with particular attention to its use of now-outdated MIDI technology, and by detailing the restoration process through which it has been revived.

Session 5A: Performance

Playing in multiple spaces. Musical interactions in hybrid performance settings

Miriam Akkermann, Freie Universität Berlin

At the latest since the restrictions imposed by the COVID-19 pandemic, the internet has become a new ‘normal’ connective setting for music performances – and music performances in virtual spaces became a new normal, at least for a couple of months. This happened against the background of a tradition of in-person concerts and performances, i.e. performers and listeners were usually in the same (physical) room. Over the course of the 20th century, several new kinds of settings were developed for relocating performers, sounds, visuals and the audience. These performance situations developed alongside a wide range of digital media and transmission systems, ranging from early formats such as computer network music to new constellations combining video, prepared sound, and live sound production in various ways and with multiple goals. All these new formats, however, still seem to share the impetus of playing together in real time, with the idea of creating a musical performance together. The technology makes it possible to explore the genuinely creative potential of real-time data sharing, while also opening up the possibility of (virtually) bringing together performers at different locations for an audience connected to the performers’ system from any place with internet access. But what does it mean to make music in these virtual or hybrid settings? What changes in musical performances when they are performed on-site only, in hybrid settings, or in virtual space – for the composition, for the interpretation by the performers, and for the perception of the audience? This talk will reflect on the concept of ‘presence’, and focus in particular on the question of how ‘presence’ can influence the interaction between the actors involved in hybrid music performance spaces.

The Orchestra as a Multiplicitous Organism: Collaborative Creation and Emergent Ontologies in the Large Laptop Ensemble

Eldad Tsabary, Concordia University, Montreal

This paper investigates the large laptop orchestra as a site for radical collaborative praxis and the emergence of "multiplicity" as a foundational creative principle. In an era where digital technologies can foster both hyper-individualism and unprecedented connectivity, the laptop orchestra presents a unique laboratory for exploring new models of collective musical creation. Moving beyond hierarchical structures, how can a large electroacoustic ensemble function as a dynamic, co-creative organism where diverse artistic voices coalesce without sacrificing individual identity? What novel strategies for performance, pedagogy, and social organization emerge from this framework?

Drawing upon over a decade of practice-based research with the Concordia Laptop Orchestra (CLOrk), this paper analyzes the methodologies developed to foster a ground-up, inclusive, and highly participatory creative environment. The research employs a research-creation framework to examine how processes of collective improvisation, networked performance, and interdisciplinary co-creation challenge traditional notions of the musical work and authorship. We explore "multiplicity" not as a cacophony of competing inputs, but as a rich ecosystem of coexisting and interdependent creative agencies.

Case studies will be drawn from CLOrk's diverse performance history, including telematic collaborations with international ensembles and joint creations with the RISE (Reflective Iterative Scenario Enactments) opera project, such as the collectively improvised opera "Why Do We Dream?". These examples will illustrate a methodology founded on several key strategies, including fluid role distribution, where roles are dynamically assigned based on members' interests and the evolving needs of a creation; structured improvisation guided by custom networking tools and conceptual frameworks like game pieces and soundpainting; and a deep interdisciplinary integration that establishes synergistic collaborations with dancers, VJs, and theatre practitioners, treating all inputs as integral to the creative fabric.

The paper argues that this multiplicitous approach provides a powerful model for electroacoustic music that is socially resonant, pedagogically innovative, and aesthetically expansive. By examining the orchestra as a complex adaptive system, we can derive valuable insights into decentralized creativity and the socio-cultural ramifications of networked artistic practice, directly addressing the conference's call to explore collaborative practices and the interstitial spaces between art, technology, and cultural studies.

The portfolio of CLOrk’s performances, which forms the basis for this research-creation analysis, can be explored at: https://laptoporchestra.ca/.

Session 5B: Gender Issues

The Archival Invisibility of Michiko Toyama in the Historiography of Electroacoustic Music

Chikako Morishita, Composer

Ai Watanabe, Tokyo University of the Arts

This presentation examines how selective historiography and archival practice in electroacoustic music have obscured the Japanese composer Michiko Toyama (1913–2006), who worked internationally across the pre- and postwar eras. It is widely recognized that gendered canons have long centered male musicians; electroacoustic music is no exception. Recent efforts – for example, the 2021 online symposium “UNSUNG STORIES: Women at Columbia’s Computer Music Center” – have spurred a reassessment. Toyama was the first Japanese composer to win a major prize at the 1937 International Music Festival (now the ISCM World Music Days); in the late 1950s she worked as a visiting composer at Columbia University’s electronic music studio (later the Columbia–Princeton Electronic Music Center, CPEMC) and released an LP in 1960 on Folkways Records, featuring works realized there. She later presented research on shakuhachi acoustics at international conferences. Despite such verifiable achievements, her position has been marginalized by a complex interplay of institutional and social factors: a transdisciplinary, transcultural practice spanning instrumental composition, electroacoustic work, and acoustic research; short-term or peripheral affiliations in France, the United States, and Japan; and broader forces of gender, racialization, Cold War geopolitics, and institutional narratives privileging technical mastery and continuity. These dynamics have shaped what has and has not been recorded. The paper seeks to examine mechanisms that may have contributed to Toyama’s archival invisibility and, more broadly, to sketch the politics of documentation and institutional exclusion, taking a preliminary step toward new historiographical frameworks.

From Where We Create: Gendered and Decolonial Perspectives on Situated Artistic Practices in Mixed Electroacoustic Music

Iracema De Andrade Almeida, Carlos Chávez National Center for Music Research, Documentation, and Information, Mexico

While the historical subordination of women creators and performers has been widely discussed and made evident in dominant Western narratives (Bull, 2019; Citron, 1993; Cumming, 2000; Cusick, 1994, 1999; Green, 1997; McClary, 1991; Yoshihara, 2007; Koskoff, 2014; McCormick, 2015; McMullen, 2006; Ramos, 2013), many of these ideological frameworks continue to persist in the Global South. Within electroacoustic music, patriarchal and Eurocentric values rooted in colonial discourse still shape aesthetic hierarchies, institutional practices, and production models. This presentation examines the work of the collective Féminas Sonoras as a response to such dynamics, asking whether audiovisual and electroacoustic media can serve as critical spaces for resistance and re-signification. Drawing from interdisciplinary feminist practices that recuperate and reconfigure artistic gestures to disrupt binary structures and resist essentialist representations, the project extends these strategies into visual, sonic, and performative arts. Through works for electroacoustic sounds, five-string electric cello, synthesizer, and video, Féminas Sonoras interrogates the relationship between gendered embodiment, sound technologies, improvisation, and creative agency. The collective foregrounds the notion that electroacoustic media is not gender-neutral but historically coded within structures of power. By integrating autobiographical narratives, cultures of remembrance, and the defiance of the disciplined female body paradigm (Lagarde, 1996; Green, 1997), the performances underscore how identity, corporeality, and gender generate indissoluble continuities between stage presence and the intrinsic meanings of these works.

Session 6A: Transcription and Taxonomy

Soundsketcher: A Visual Interface for Perceptual Exploration of Electroacoustic Sound

Konstantinos Velenis, Aristotle University of Thessaloniki

The expanding field of electroacoustic music has long challenged pitch-based notation, which fails to represent the temporal, spectral, and textural richness of sonic phenomena. Rooted in spectromorphological theory and research on crossmodal perception, Soundsketcher proposes a perceptually grounded system that visualizes sound. Building on Smalley’s spectromorphology and audiovisual correspondence studies, the system tries to bridge analytical precision and intuitive engagement, extending the lineage of aural and graphic scores through computational means.

Soundsketcher continues a conversation with systems like Couprie’s Acousmographe, Partiels, and EAnalysis, while extending them through automation and perceptual modeling. It combines audio-feature extraction, unsupervised learning, and semantic segmentation to translate sound properties into expressive visual shapes. Rather than relying on single descriptors, the system uses combinations of low- and mid-level features—such as pitch, loudness, spectral centroid, brightness, roughness, and periodicity—to shape multidimensional visual behaviors. These combinations are mapped to parameters including position, length, angularity, and texture, producing dynamic interactions that mirror perceptual integration in listening. Automatic segmentation with Wav2Vec 2.0 and CLAP embeddings enables the identification of perceptual sound objects, rendered as ribbons or blob-like forms whose geometry and inner patterns reflect evolving timbral qualities. Users can modify mappings to explore perceptual correspondences and interpret sonic morphology interactively.
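
Purely as a hedged sketch of the many-to-many mapping principle described above (the feature set, weights, and visual parameter names below are illustrative assumptions, not Soundsketcher's actual mappings), a weighted combination of normalized descriptors can be projected onto a small set of shape parameters as follows:

```python
# Hypothetical many-to-many mapping from audio descriptors to visual shape
# parameters, in the spirit of the approach described above. The weights and
# parameter names are illustrative assumptions only.
import numpy as np

FEATURES = ["pitch", "loudness", "centroid", "roughness", "periodicity"]
VISUALS = ["y_position", "thickness", "angularity", "texture_density"]

# Each visual parameter is a weighted combination of several normalized features.
WEIGHTS = np.array([
    # pitch loud  cent  rough period
    [0.8,  0.0,  0.2,  0.0,  0.0],   # y_position      <- mostly pitch, some brightness
    [0.0,  0.9,  0.0,  0.1,  0.0],   # thickness       <- mostly loudness
    [0.0,  0.1,  0.3,  0.6,  0.0],   # angularity      <- roughness and brightness
    [0.0,  0.0,  0.2,  0.4, -0.4],   # texture_density <- rough/aperiodic -> denser grain
])

def map_frame(feature_frame):
    """feature_frame: dict of descriptor -> value in [0, 1]. Returns visual params in [0, 1]."""
    x = np.array([feature_frame[name] for name in FEATURES])
    y = WEIGHTS @ x
    return dict(zip(VISUALS, np.clip(y, 0.0, 1.0)))

# A loud, noisy frame maps to a thick, angular, dense shape placed mid-height.
print(map_frame({"pitch": 0.5, "loudness": 0.9, "centroid": 0.7,
                 "roughness": 0.8, "periodicity": 0.1}))
```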

The project addresses methodological challenges such as scaling and outlier control of features, continuity between frames, and the reliability of psychoacoustic descriptors. Future directions include refining source separation for polyphonic textures, expanding the perceptual dictionary of semantic descriptors, and evaluating perceptual coherence.

Documenting and Translating Electroacoustic Lectures in China through ‘Lexonomy’ — A Case Study from Musicacoustica 2024 and Beyond

Ruibo Zhang (Mungo), De Montfort University (Leicester, UK)

Jinbo Xie, MA, Independent Researcher

Mingyue Guan, MA, Independent Researcher

The initiative builds on the work of CHEARS (China Electroacoustic Resource Survey), a platform established shortly after the 2006 EMS Conference, themed “Languages”, which was held jointly with Musicacoustica in Beijing. CHEARS.info functions as a digital repository that uses both taxonomies (top-down structured glossaries) and folksonomies (bottom-up generated tags) to catalogue and analyse electroacoustic activity in China. Originally developed as a doctoral research project (2012–2024), CHEARS includes as one key component “lexonomy”: a dedicated lecture documentation module designed through physical data modelling and implemented within the CHEARS database. Now entering its postdoctoral phase, CHEARS continues to evolve and is entering a new stage of application.

This paper takes the 2024 edition of the Musicacoustica festival, which has been held at a new location since 2023, as a focused case study to examine how effectively the platform’s lecture module can capture, classify, and analyse academic talks. It is acknowledged that the first author of this paper participated as a human interpreter in one of the 2024 lectures. To ensure objectivity, the analysis of that particular case will be conducted and written by the second author, who attended the lecture in person as an audience member and provides first-hand feedback. The third author was not involved in the festival but offers an external perspective on the results derived from AI-based translation of the equivalent materials.

Session 6B: Analysis 2

Analysis of "Waka" by Michiko Toyama: Cultural Influence on the Creation of the First Electroacoustic Music by a Japanese Woman

Yuriko Hase Kojima, Shobi University

Michiko Toyama (1912-2006) was the first Japanese woman composer of electroacoustic music. Yet she has long been unrecognized, and sometimes even ignored, in the music history of Japan and the world. Toyama led an unusual life. She began her formal piano studies in Japan and, in the 1930s, went to Paris to continue studying piano. From 1936, toward the end of her piano studies in Paris, she studied composition with Nadia Boulanger at the École Normale de Musique de Paris. As a result, her ensemble piece "Yamato No Koe (The Voice of Yamato)" was selected for the ISCM World Music Days in Paris in 1937, making Toyama the first Japanese composer to be chosen for this prestigious festival of contemporary music. She studied composition with Darius Milhaud and Olivier Messiaen at the Paris Conservatory. She also met Pierre Schaeffer and learned about the creation of musique concrète. She moved to New York in 1956 to attend Columbia University, where she studied with Vladimir Ussachevsky and Otto Luening, who had been creating electronic music since around 1951, before the Columbia-Princeton Electronic Music Center was formally established in 1958. Toyama composed two electroacoustic tape pieces, "Waka" and "Aoi No Ue," in 1958 and 1959, respectively. "Waka" may be regarded as one of the first concert electroacoustic fixed-media pieces by a woman composer. It is a piece for narration and electroacoustic music, with a standard music score that includes flute and cello parts as well as a narrative component. Her music shows sensitivity in the transformation between timbres and in the intonation of the phrases. This research focuses on how the score of the piece, representing what Toyama imagined, was realized in the 1960 recording for Smithsonian Folkways Records. The cultural influence on the piece will also be closely examined.

Archaeology and sound preservation. Analysis of Sgorgo N by Pierluigi Billone

Iván Adriano Zetina, IReMus, Paris

The music of Pierluigi Billone (1960-) represents an exploration in which composition becomes a means of musical invention, grounded in a deep understanding of the acoustic potential of instruments. His compositional approach appears to avoid electroacoustic thinking — not through explicit rejection, but as a consequence of an aesthetic privileging direct, corporeal engagement with sound. Nonetheless, the frequent use of the electric guitar, and its particular integration into his musical language, suggests that his work reflects on electroacoustic aesthetics. Billone’s Sgorgo trilogy (2012–2013) for solo electric guitar provides fertile ground for this inquiry. Among these large-scale works, Sgorgo N (2013) — an intimate tribute to Luigi Nono — distinguishes itself by its exclusive use of the left hand on a single string, producing subtle electroacoustic sound qualities that invite extended, immersive listening. Analyzing this piece from a performance perspective reveals significant insights into the conception of musical sound, highlighting electroacoustic performance as a multidisciplinary practice encompassing instrumental technique, sonic perception, and embodied interaction. Employing the metaphor of archaeological research, Billone conceives of composition as a practice of sound preservation, in contrast to the innovation associated with technological advancement. This perspective aligns with concepts of dematerialization — where sound transcends its physical origin — and embodiment, emphasizing the performer's bodily involvement in the conception and production of sound. Sgorgo N thus generates a reflective space that challenges dominant notions of composition as a manipulation of materials, while simultaneously transcending listening categories focused on the pedagogy of musical form.

Temporal Logic and Causality in Electroacoustic Music: From Polychrony to Chronotopes

Kevin Dahan, LISAA, Gustave Eiffel University

Temporal structures and cause–effect relationships in music have long been underappreciated in traditional music theory. In electroacoustic music, however, questions of temporal logic, temporal organisation, and potential causality across sounds become central. In a previously discussed framework, temporal directionality (in short, the intrinsic temporal features of sounds) and temporal distancing (in short, the temporal relationships between sounds) culminated in the notion of polychrony, wherein composers "weave time itself". These concepts show how electroacoustic works can exhibit complex temporal logics beyond linear chronology. Building on this framework, this paper will examine how causality and time perception in electroacoustic music are transformed through digital abstraction, emerging technologies, intercultural and ecological listening, as well as through analytical approaches.

The introduction of digital technology in composition and sound making has led to unprecedented levels of abstraction between sound and source. Whereas in instrumental music listeners can usually discern or infer the concrete cause of each sound (e.g. the violin’s bow along a string, the hammer falling on piano strings) – hence reinforcing a straightforward temporal logic where cause precedes audible effect – electroacoustic music proceeds differently, thanks to acousmatic principles: with recorded, modified, or synthesized sounds, the actual cause may be obscured or artificial, leading to ambiguous causality. Under digital abstraction, composers manipulate time and sounds in ways that upset physical constraints: simple (reversal, time-stretching, granularisation...) or complex (e.g. algorithmically generated) sound manipulations disrupt the intuitive order of events; nowadays, musical structures are precisely organised at the scale of milliseconds, well beyond human performance. These techniques allow the creation of exoperceptual structures, "hidden to even the most acute listener", which leads to a non-linear and opaque musical logic, challenging listeners to infer the discourse from sonic clues alone – often a “something to hold on to” factor in electroacoustic music. Meanwhile, new and emerging technologies such as artificial intelligence and quantum computing are potentially redefining compositional practice: AI-driven music systems can act as autonomous agents in the creative process, introducing sounds without direct human interference; quantum computing introduces further complexity based on quantum principles such as superposition and entanglement, suggesting non-linear and probabilistic time structures. This further blurs traditional notions of causality: a given sound’s origin might lie with the human composer, within a deep-learning model, or in a synergy of both. The resulting music can therefore exhibit polychrony not only by design, but also through the dynamic behavior of the human and technological agents. Causality in such contexts becomes probabilistic rather than fixed, prompting listeners to reconsider ideas of intention and determinism in musical time.
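
By way of illustration only, the 'simple' manipulations named above (reversal, granularisation) can be sketched in a few lines of generic Python; the example is not tied to any particular work discussed in the paper.

```python
# Illustrative sketch of two of the "simple" manipulations named above:
# reversal and naive granularisation of an audio buffer (generic example).
import numpy as np

def reverse(x):
    return x[::-1]

def granulate(x, sr=44100, grain_ms=50, n_grains=200, seed=0):
    """Reassemble random, Hann-windowed grains of x with 50% overlap."""
    rng = np.random.default_rng(seed)
    g = int(sr * grain_ms / 1000)
    hop = g // 2
    window = np.hanning(g)
    out = np.zeros(n_grains * hop + g)
    for i in range(n_grains):
        start = rng.integers(0, len(x) - g)            # random read position: causality obscured
        out[i * hop : i * hop + g] += x[start:start + g] * window
    return out / np.max(np.abs(out))

# One second of a decaying 220 Hz tone: reversed (crescendo) and granulated (texture).
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)
rev, grains = reverse(tone), granulate(tone, sr)
print(rev.shape, grains.shape)
```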

Since causality and temporal logic become increasingly complex through digital technology, electroacoustic music listeners often rely on intercultural and ecological listening frameworks to navigate complexity. Intercultural listening allows an inclusive approach to diverse temporal understandings from varied musical traditions; composers incorporating or referencing culturally-specific temporalities allow listeners from different backgrounds to interpret causality and temporal relationships uniquely, potentially perceiving simultaneous yet distinct temporal streams. Likewise, ecological listening further complicates and enriches this perspective, informed by acoustic ecology theories that treat environments as dynamic sonic contexts.

This ecological interpretation grounds otherwise ambiguous sonic materials within listeners’ lived environmental experiences, enabling a richer, contextually informed perception of causality. Together, intercultural and ecological listening provide listeners with perceptual anchors and frameworks for interpreting temporally and causally ambiguous musical contexts. Addressing the complexity introduced by digital abstraction and intercultural-ecological contexts necessitates analytical innovations. Traditional electroacoustic music approaches such as spectromorphology focus on the dynamic morphological and spectral properties of sound without relying on prescriptive notation. Furthermore, interdisciplinary methodological frameworks combining computational analysis with qualitative listener-centered approaches promise richer, more holistic analytical outcomes.

In brief, this paper will outline how electroacoustic music’s temporal logic and causality are profoundly reshaped by digital abstraction, intercultural and ecological listening strategies, and further refined through analytical and methodological innovations. By considering electroacoustic works as dynamic chronotopes – spaces where multiple temporal and spatial layers intersect – it advocates for an interdisciplinary analytical approach capable of embracing the complexity inherent in contemporary compositional practices.

Session 7A: Listening and Reception

Something to hold on to 2

Leigh Landy, De Montfort University

Talk’s abstract: Just over thirty years ago, I gave a published talk, ‘The Something to Hold on to Factor in Timbral Music’ (SHF, 1993), which has been evolving ever since. Given my focus as a musicologist and composer on making innovative music accessible to a broader public beyond specialists, the SHF offers a means of identifying aspects of shared experience that can help new listeners navigate their way through electroacoustic works, as well as access tools for composers. Given the combination of issues raised in the EMS25 call alongside the foci of my recent talks and composition series, it seems timely to create a sequel to this key text, in which the intention is to add to the SHF various social and cultural aspects that a) can make electroacoustic music accessible and, perhaps more poignantly, b) can make this music more relevant in today’s world.

This assumes that electroacoustic music is communicable in a general sense. It also implies that a work can be about something, thus enabling an intention/reception loop as the basis of potential shared experience.
The talk will return to the original SHF, reintroducing it along with publications by others who have further developed it. Following this, it investigates and illustrates areas including ecology, place and (inter)cultural material related to compositional approaches, in order to discover how all of these can support both access and social relevance.

As someone interested in sample-based composition, I have experience of using musical samples that are (inter)culturally rooted or taken from our daily lives, enabling the creation of works that are both aesthetic and socially engaging. This approach will illustrate the talk’s aim of optimising intention with reception, as well as the accessibility of today’s and tomorrow’s electroacoustic works, offering both specialists and nonspecialists some more things to hold on to.

Futures of Listening: From the Act of Critique to(wards) That of Composition

Suk-Jun Kim, University of Aberdeen

Futures of Listening is an interdisciplinary project centred on a simple, yet far-reaching question: what is going to happen to the ways in which we listen in a couple of decades? Started in 2023 with the National Asian Culture Center (ACC) in Gwangju, South Korea, it contextualises the question through four themes: Listening to the Others, Listening to Climate Change, Listening to Urban-Rural Divide, and Listening to the Machines Who Listen. Introducing the project’s background, objectives, and key activities, the paper discusses its current case study, Futures of Listening: Water Knowledge from Two Cities (British Academy-funded ODA ISPF Challenge-Oriented Grant project 2024-2026). Based on its deep collaboration with two international partners (Forum Lenteng in Jakarta and Urban.Koop in Istanbul) in developing and executing the project objectives and methods, the project aims to identify, amplify and share the local water knowledge in two target communities, Kalibata Pulo in Jakarta and Sahintepe in Istanbul. The paper reconsiders the oft-raised question—what does a listening do?—from the perspective of transdisciplinarity (Nicolescu 2014). As the project examines the existing (yet often hidden or suppressed at times) water knowledge and its history through listening, it proposes that the futures of listening can be imagined, following what Latour proposed (2010), by moving from critiquing—which is connected to unveiling, revealing, discovering or uncovering—to composing—which is connected to (re)assembling (Deleuze and Guattari 1987; Latour 2005; DeLanda 2016) while paying close attention to touching (Nancy 2008) and withdrawing (Harman 2017) that listening activates. In this paper, listening is proposed as a responsible act of and for ‘a missing people’ or ‘we’ who are ‘in this together’ (Braidotti, 2019) that can lead us to a rhizomic field where many forms of awareness of water can be accessed, amplified, and described (Neimanis, 2017).

The Intention and Reception of spatialisation: analysis, findings and discussion

Stefano Catena, De Montfort University

Spatialisation has always played a central role in acousmatic music, both in live diffusion through systems such as the Acousmonium and in fixed multichannel compositions. Its relevance has increased with the rise of immersive media and spatial audio standards in cinema and musical practice. However, the scholarly study of spatialisation in acousmatic music—particularly regarding how listeners perceive and interpret it—remains limited. Building on Landy and Weale’s Intention/Reception framework, this study explores listeners’ reception of spatialisation, focusing on perceptual, structural, and emotional aspects.

Two compositions by the author, A House in the Storm (2022) and Travelling without Moving (2023), were used in listening tests with 15 participants. Three questionnaires were employed: two real-time sessions (before and after dramaturgic information was introduced) and a post-listening directed questionnaire. The qualitative data were analysed across three emerging categories: spatial (immersion, movement, placement), extra-spatial (narrative, imagery), and emotional (affective). These were later compared to the composer's intention questionnaire to verify listeners' understanding of the work.

Results show that listeners mainly focused on spatial and extra-spatial qualities, associating spatial behaviour with sonic materials, images, and structural features. Dramaturgic information enhanced their ability to identify musical themes and narratives with greater precision. Emotional responses were less prominent, suggesting that affective engagement was more closely tied to sonic content than to spatialisation alone.

These findings confirm that spatialisation is not a secondary or decorative element but a core compositional parameter capable of shaping narrative and form. The study contributes to the discourse on multichannel acousmatic composition, offering an empirically grounded understanding of how spatialisation is received and interpreted by listeners.

Session 7B: Analysis 3

The realization of Di Scipio's Audible Ecosystemics n.2: strengths and challenges in the interpretation and notation of machine-independent computer music

Dario Sanfilippo, Independent artist

Luca Spanedda, Conservatorio Statale di Musica Alfredo Casella

This lecture and the corresponding musical performance present an analytical and practice-based investigation of Audible Ecosystemics n.2 (Feedback Study) by Agostino Di Scipio (2003), reimplemented within an open-source environment using the Faust programming language. The work—originally realized in the proprietary KYMA system—explores self-regulating feedback processes between computer, performer, and acoustic space. Feedback tones generated through the interaction of microphones, loudspeakers, and the room’s acoustics are analyzed and transformed in real time, giving rise to an adaptive sonic behavior that reflects the structural coupling between system and environment.
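
The reimplementation itself is written in Faust; purely as a conceptual sketch of the self-regulating behaviour described above (and not the authors' code), the block-based Python loop below lets the measured energy of a simulated feedback path steer the system's own gain.

```python
# Conceptual sketch (not the authors' Faust/KYMA code) of a self-regulating
# feedback gain of the kind described above: the louder the measured feedback
# signal, the more the system attenuates itself, and vice versa.
import numpy as np

def simulate_ecosystem(blocks=200, block_size=512, room_gain=1.2, target_rms=0.1):
    rng = np.random.default_rng(1)
    gain, state, history = 0.5, np.zeros(block_size), []
    for _ in range(blocks):
        # "Room": the previous output re-enters the microphone with some noise.
        mic = room_gain * state + 0.001 * rng.standard_normal(block_size)
        rms = np.sqrt(np.mean(mic ** 2))
        # Adaptive control: push the loop level toward the target.
        gain = float(np.clip(gain + 0.05 * (target_rms - rms), 0.0, 2.0))
        state = gain * mic                      # loudspeaker output for the next block
        history.append(rms)
    return history

levels = simulate_ecosystem()
print(f"final RMS ~ {levels[-1]:.3f}")          # typically hovers near the target level
```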

The presentation addresses both the conceptual and technical challenges encountered when translating the score and the original KYMA patch into an open-source, transparent DSP framework. It highlights how small deviations in the interpretation of KYMA objects can significantly alter the emergent dynamics of the system, potentially compromising the integrity of the compositional idea if not carefully considered.

Beyond the technical reconstruction, this project situates Audible Ecosystemics n.2 within a broader reflection on the sustainability and transmission of live electronic music, with particular attention to the field of complex adaptive systems. It proposes the open-source reimplementation as a method of historically informed performance practice for experimental works, aiming to preserve their adaptive logic while ensuring future viability and accessibility.

Visual Inspiration and Musical Realization: Mapping in Algorithmic Composition

Kerry Hagan, University of Illinois Urbana-Champaign

When data yields a stunning visual representation, musicians can be inspired to translate the experience into sound. After teaching computer music for 20 years, I have witnessed many failed attempts at crossing modalities. Occasionally, a successful work emerges. What makes a few pieces more successful than others?

Good music from generative processes relies on creative and complex mapping, similar to what interaction designers face when creating new electronic instruments. Mapping generated data is like mapping numerical data from tables, sets, arrays, or other empirical data sources, such as experiments and observations. Since the mechanism is the same, data-driven music is often lumped in with generative categories. It, too, relies on good mapping to be successful.

In its simplest form, mapping data to a musical parameter in a one-to-one relationship, focusing on the data rather than the sonic output, is known as sonification. This paper assumes that there is a fundamental difference between composition from data and sonification of data. In the former, music is the objective; in the latter, information is the objective. The line distinguishing the two is blurry.

One of the primary requirements of good music in algorithmic or generative practice is human intervention. Loyalty to a process can sometimes remind people they are listening to data, not music. Assuming the human is intervening at some significant level in the compositional process, the most successful pieces and tools are those applied to sound design or synthesis. Timbre is a multidimensional trait, ideal for a complex many-to-many or one-to-many mapping, ultimately realizing the impact of complex visuals in a complex sonic construct.
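
As a hedged illustration of the contrast drawn in this paper between sonification-style and composition-style mapping, the sketch below maps the same normalized data series once one-to-one onto pitch and once one-to-many onto several synthesis parameters; the specific parameter names are assumptions made for the example, not taken from the paper.

```python
# Illustrative contrast between the two mapping strategies discussed above:
# (a) one-to-one data -> pitch ("sonification-style"), and
# (b) one-to-many data -> several timbral/synthesis parameters at once.
# The parameter names are illustrative assumptions only.
import numpy as np

data = np.array([0.1, 0.4, 0.35, 0.8, 0.95, 0.6, 0.2])   # any normalized data series

def one_to_one(d):
    """Each datum drives a single parameter: pitch in Hz."""
    return 220.0 * 2 ** (d * 2)                  # map [0, 1] onto two octaves above 220 Hz

def one_to_many(d):
    """Each datum simultaneously shapes several timbral dimensions."""
    return {
        "pitch_hz":      220.0 * 2 ** (d * 2),
        "mod_index":     1.0 + 6.0 * d ** 2,     # brighter as d grows
        "grain_density": 5.0 + 45.0 * d,         # grains per second
        "pan":           np.sin(np.pi * d),      # arc across the stereo field
    }

print([round(f, 1) for f in one_to_one(data)])   # a melody-like pitch sequence
print(one_to_many(data[3]))                      # one datum shaping a whole timbre
```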

This paper does not address machine learning, because a human is not making compositional choices in applying data to musical parameters. In its current state, AI is incapable of creating something new. It simply regurgitates an amalgam of existing music.

Words, Sounds, Music: Sonic storytelling beyond the acousmatic

James Andean, LISAA, Gustave Eiffel University

There are many ways to 'tell stories' through sound. The most explicit is through words, but composers and other sound-based creatives have over time proven very adept at deploying any and all sonic resources in the construction of sound-based narratives. Obvious examples include musical forms such as programme music, and sound-based forms such as acousmatic music. Indeed, we can identify three main materials for sound-based storytelling: words, music, and sounds, deployed individually or in any number of combinations.

On the surface, it might at first appear that the narrative properties of these three sound-based categories are substantially, or even entirely, different and distinct. This might seem to be common sense: that, with such different sonic materials, it is natural for these to be shaped into very distinct sonic languages, which in turn develop very distinct forms of storytelling. There is a world of difference between the 'language' properties of, say, tonal music, acousmatic music, and Esperanto; and equally the storytelling characteristics and possibilities of these three would appear to be entirely different beasts.

We will argue here that this might not, in fact, be the case after all – or at least, not to the extent we might easily assume. Instead we will here argue that:
  • there is substantial commonality in the storytelling properties of language, music, and sound;
  • that this is true individually, but is also key to their very successful co-deployment in the creation of sound-based narratives;
  • that this is in part a question of the ways in which humans construct narrative regardless of source or materials, sound-based or otherwise;
  • but, that there is also a shared commonality between words, music and sounds – i.e. their mode of reception – which not only ties them together, but channels them, or better yet 'bundles' them, into shared forms of narrative reception and interpretation.

In other words: regardless of the material, and regardless of the language, there is a commonality to how we construct and receive stories that is shared across words, sounds, and music.

To construct and demonstrate this position, we will be drawing on analysis of examples across a number of areas and genres, including radiophonic work (f.ex. Luc Ferrari), electroacoustic music (f.ex. Trevor Wishart), text-sound composition (f.ex. Charles Amirkhanian), programme music (f.ex. Prokofiev), sound poetry (f.ex. Kurt Schwitters), literature (f.ex. James Joyce), and film.
