Three days of lectures and concerts (schedule) devoted to the themes of the MUSAiC Project and beyond!
Important note: Talks are open to the public without registration. Concert tickets must be reserved:
- AI Music: Then and Now @ KMH Nov 22 8 PM (program)
- Neuronic @ KMH Nov 23 2 PM (program)
- AI in Music and Beyond @ Fylkingen Nov 24 8 PM (program)
- Morning AI Music Concerts playlist
The lectures and panels revolve around questions such as:
- How can one judge applications of artificial intelligence (AI) to music and art along dimensions of utility, economics, and ethics?
- How do creative AI systems affect the use and worth of music and art in particular contexts? When is their involvement perceived as problematic?
- How can ethical considerations be folded into the engineering and application of these kinds of systems? Should there be “laws of AI” that make explicit the responsibility of the AI technologist to the domains where the technology is to be applied?
- What creative possibilities are hindered or facilitated by the “flaws” of an AI system’s knowledge of music and art?
- Where are music and art headed as a result of these creative machines?
This event is organized by Bob L. T. Sturm (KTH) in collaboration with KMH and Fylkingen, with support provided by:
Schedule overview:

Keynote Lectures

Cecilia Åsberg, Professor and director of the Posthumanities Hub, Linköping University
Posthumanities and more-than-human creativity
To deal with creative AI we need cultural research worthy of our troublesome times. Nowadays, when it is hard to tell nature from culture, we need versatile and post-disciplinary forms of humanities that can handle the societal challenges of technological advancement, climate change and damaged biodiversity. In my talk I will situate such forms of post-humanities and try to give them context. I will provide a tentative map of a more-than-human humanities that has emerged from the critiques of history, especially from the queer and anti-colonial fields of science and technology studies, and from the open-ended and affirmative creativities of feminist posthuman theorizing and academic activism. What I call feminist posthumanities labels a widespread and growing effort to rework the role of the humanities and their relation to science, technology, art and other species, to human differences and contemporary society. It is tested and exemplified by various forms of ‘new’ humanities: for instance, digital and techno-humanities, environmental humanities, multispecies humanities, and other societally relevant arts of living together on a more-than-human planet. Accordingly, feminist posthumanities focuses on critique and on creativity as it is about to take shape, for instance along the lines of artistic AI. My talk will try to locate insights into how creative AI changes our cultural assumptions within the state of the art of new humanities research, set in the midst of a changing world.

Eric A. Drott, Associate Professor of Theory, University of Texas at Austin
Socializing Music AI (video)
This paper considers some macroeconomic conditions that have shaped recent developments in commercial music AI, along with the contradictions that have emerged from its subjection to capitalist logics. Key conditions structuring this sector include:
- a long-term decline in profitability that has catalyzed the search for new avenues of value creation and capture;
- a related yet distinct decline in productivity (so-called “secular stagnation”), driving investment in technologies like AI that promise to make unproductive and labor-intensive sectors (like music creation, promotion, and recommendation) more productive;
- the increasing centrality of assetization and rentierism as strategies for addressing the challenges facing capital accumulation, with digital platforms serving as an important mechanism for the enclosure and privatization of public goods;
- the facilitation of the above tendencies by U.S.-led monetary policies that have used asset-price inflation as a means of propping up the global economy; and
- recent, post-pandemic developments that threaten to upset what now increasingly appears as the fragile foundations of the recent AI bubble (e.g., the decoupling of the Chinese and US economies, breakdown in supply chains that have particularly impacted electronics manufacture, rising interest rates, etc.).
Within this array of forces, commercial music AI—and above all the data it both relies upon and generates—has become a distinctive site of assetization and rent extraction. At once general and specific, commercial music AI gives acute expression to a contradiction underpinning the AI and tech sector more broadly, whereby the intrinsically social character of the data fueling algorithmic systems stands in conflict with the prerogatives of private ownership. This data is social not simply because it is often generated via digitally-mediated social interactions (e.g., music recommendation), or because it embodies prior social knowledge (e.g., familiarity with musical norms and practices), but because data’s value hinges on its correlation with other data. Its value, in other words, is an emergent quality—or surplus—that is born out of its relationality. This suggests that music AI will only realize its full potential if it is socialized, treated not as a source of privatized riches but as a form of public wealth instead.
Eric Drott is Associate Professor of Music Theory at the University of Texas at Austin. He is the author of Music and the Elusive Revolution: Cultural Politics and Political Culture in France, 1968–1981 (2011). His next book, Streaming Music, Streaming Capital, is forthcoming next year from Duke University Press. In 2020, he received the Dent Medal from the Royal Musical Association.

Jonathan Sterne, Professor and James McGill Chair in Culture and Technology, McGill University
In Search of the Human in Machine Listening (video)
This talk is based on a coauthored paper in progress with Mehak Sawhney and Andy Stuhl.
As voice interfaces, smart speakers, and music recommendation systems have proliferated, there has been an explosion of interest in the sonic dimensions of AI technologies among scholars and cultural critics. But what does it mean for machines to hear or listen? Our paper examines this question through two parallel lines of approach. First, we note that there are at least three different subfields concerned with what might be called machine listening: music information retrieval, speech recognition, and auditory scene analysis. Researchers often do not talk across these fields, yet they rely on shared assumptions about sound and signal processing, and on a shared intellectual and practical history. At the same time, even within these fields, there is no general agreement as to whether the work of sonic AI constitutes a kind of hearing or listening. We therefore go in search of constructs of the human within these processes, in order to better understand whether and how AI systems can listen at all.
Jonathan Sterne teaches in the Department of Art History and Communication Studies at McGill University. He is the author of Diminished Faculties: A Political Phenomenology of Impairment (Duke, 2021); MP3: The Meaning of a Format (Duke, 2012); The Audible Past: Cultural Origins of Sound Reproduction (Duke, 2003); and numerous articles on media, technologies and the politics of culture. He is currently working on a project about artificial intelligence and the politics of culture, and with co-author Mara Mills he is writing Tuning Time: Histories of Sound and Speed. Visit his website at http://sterneworks.org.
Speakers and Performers

Intelligent Instruments in Performance (video)
The application of artificial intelligence in the creative arts has boomed in recent years. New machine learning architectures, open data sets and high computing power enable technologies that were unthinkable just a decade ago. But how does it feel to engage with musical instruments imbued with intelligence?
In our lab we create technologies for interactive real-time intelligent systems and study the perception of these instruments with composers, performers and audiences. We have created a modular system for prototyping musical instruments that enables quick design and prototyping of new performance systems. This talk will describe the design methodology of the Intelligent Instruments Lab and present some results from our studies during the lab's first year of operation.

Georgina Born
Curating Culture, Building Critical Intelligence on Music AI
Today’s public debates about AI focus primarily on its problematic social and discriminatory effects, ethical failings and concentrations of power. Less discussed – or even recognised – are the cultural impacts of AI as it curates what we receive via streaming services, search engines and online retailers. We urgently need concerted critical research on the ways in which AI is becoming virally woven into the fabric of our cultural lives. In this talk I will introduce the MusAI research program, which takes music as the medium through which to create a field of critical interdisciplinary studies, indicative of AI’s wider influence on culture. Music has long been a site of AI experimentation and commercial development, and although considerable resources are going into scientific and artistic research projects in this area, critical research on music AI is at an early stage.
Georgina Born is Professor of Anthropology and Music at University College London. Earlier she had a professional life as a musician in experimental rock, jazz and free improvisation. Her work combines ethnographic and theoretical writings on interdisciplinarity, music, sound, and digital/media. Her books include Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde (1995), Western Music and Its Others (ed. with D. Hesmondhalgh, 2000), Music, Sound and Space (ed., 2013), Interdisciplinarity (ed. with A. Barry, 2013), Improvisation and Social Aesthetics (ed. with E. Lewis and W. Straw, 2017), and Music and Digital Media: A Planetary Anthropology (ed., 2022). She directed the ERC-funded research program ‘Music, Digitization, Mediation’ (2010-15) and in 2021 was awarded an ERC grant for ‘Music and Artificial Intelligence: Building Critical Interdisciplinary Studies’. She has held visiting professorships at UC Berkeley, UC Irvine and Aarhus, Oslo, McGill and Princeton Universities.

Benoit Baudry and Erik Natanael Gustafsson
re|thread – unveiling software
Benoit and Erik have been running the re|thread project since 2019. re|thread lies at the intersection of software technology and audiovisual art. It focuses on the use of software as the material and medium for artistic creation. We have explored real-time sonification and visualization as ways to let the audience make sense of the software that fuels their digital lives. In this presentation, we will introduce the cyber|glow installation. We elaborate on the sublimity and invisibility of software and discuss the details of the interactive sonification installation. We will also share some experiences from an interdisciplinary practice in art and science.
Steve Benford
The Carolan Guitar
Steve Benford is the Dunford Professor of Computer Science at the University of Nottingham where he co-founded the Mixed Reality Laboratory. He is Director of the EPSRC-funded Horizon Centre for Doctoral Training and also Director of the University’s Smart Products beacon of research excellence. He was previously an EPSRC Dream Fellow, a Visiting Researcher at Microsoft Research Cambridge and a Visiting Professor at the BBC.
- Accompanying machine folk music generated by folk-rnn
- Accompanying Erik Natanael Gustafsson’s neod

Musicking with Minimal Agents – Emergence from entangled human-algorithm interaction (video)
Contemporary AI research tends to focus on ever more complex algorithms and larger machine-learning models, often with a mimetic focus, within a black-box computational paradigm. Many implementations attempt to imitate the symptoms and results of the human creative process without implementing the process itself.
When the focus is on musical interaction in an improvisational setting, these approaches are less interesting; even very simple systems can give very interesting results. I will here share an ongoing artistic exploration of a particular kind of human-machine interaction, where the algorithms are extremely small and simple.
Certain kinds of algorithms make you feel that there is another musician present. They feel like agents, even though they may be dead stupid. What happens to me when I play with such an algorithm? What happens to the system of me and the algorithm? And how simple can an algorithm be, while still being perceived as “another”, as an agent?
By making the algorithms as small as possible, we can focus on the transformative effects of human-machine interactions, and on the human-machine synergies. In the talk, I will also discuss further aspects of such systems, such as design constraints and implications; agency, perceived agency, and proportions of influential agency; the generative mechanisms behind the emergent behavior; and the necessary conditions for this kind of generative entanglement to appear.
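To make the idea concrete, here is a minimal sketch in Python of how small such an agent can be. It is purely illustrative, not the speaker's actual system: the agent adapts a single transposition interval toward the intervals it hears and answers with a little jitter, which can already be enough for a player to feel that "another" is present.

```python
import random

class MinimalAgent:
    """A deliberately tiny musical agent.

    It answers the human's last pitch, transposed by an interval that
    slowly drifts toward the intervals it hears, plus a little jitter.
    A hypothetical illustration, not the speaker's actual system.
    """

    def __init__(self):
        self.interval = 7       # begin by answering a fifth above
        self.last_heard = None  # last MIDI pitch heard from the human

    def listen(self, pitch: int) -> None:
        if self.last_heard is not None:
            # Drift toward imitating the human's own melodic intervals.
            heard = pitch - self.last_heard
            self.interval += round((heard - self.interval) / 4)
        self.last_heard = pitch

    def respond(self) -> int:
        # Occasional "willfulness" keeps the agent from being a pure mirror.
        jitter = random.choice([-1, 0, 0, 0, 1])
        return self.last_heard + self.interval + jitter

agent = MinimalAgent()
for human_pitch in [60, 62, 64, 65, 67]:  # a rising phrase, as MIDI numbers
    agent.listen(human_pitch)
    print(human_pitch, "->", agent.respond())
```

A dozen lines of state and rules, yet in interaction the adaptation and jitter can read as responsiveness and willfulness, which is exactly the perceptual question the talk raises.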

Martin Clancy
Artificial Intelligence and Music Ecosystem (video)
Martin Clancy is a Dublin-based musician, academic, and events producer; he was awarded his doctorate (Trinity College Dublin) on the financial and ethical implications of AI music in 2021. He is the editor/author of Artificial Intelligence and Music Ecosystem (Routledge 2022), and chairs the IEEE Global AI Ethics Arts Committee. Martin is a Certified Ableton 11 Trainer and was a member of the Irish band In Tua Nua. As artist-in-residence at New York’s Seaport Music Festival (2009–2011), Martin had a series of top-40 hits on the US Billboard Dance Charts. His latest release as Valleraphon, ‘Cruiser’, is out on the Italian label Aletheia.
Martin will trace his transdisciplinary research approach and reflect on how his tacit curiosity about changes in musician/machine dynamics led him to a doctorate on the financial and ethical implications of AI and music. He will consider existing and potential legal and ethical concepts to meet the AI challenge. He will also formally launch his new book, Artificial Intelligence and Music Ecosystem (Routledge 2022).

Federico Visi
Assisted Interactive Machine Learning, artificial agents and musical interactions: exploring agency in human-machine performance (video)
In this talk I will describe the notion of Assisted Interactive Machine Learning (AIML) and present some artistic projects in which this sonic interaction design approach was implemented. I will reflect on its utility as a musical tool and attempt some broader considerations on the practices and aesthetics that this and other similar approaches may enable.
AIML makes use of reinforcement learning to explore many mapping possibilities between large sound corpora and sensor data. The design approach adopted is inspired by the ideas underpinning the interactive machine learning paradigm, as well as by the use of artificial agents in computer music for exploring complex parameter spaces. The player can explore a large corpus of sounds (i.e. a large archive of samples) by articulating gestures captured by a sensor device. Giving feedback to an artificial agent through a reinforcement learning algorithm results in new mappings between gestures and sounds. This allows the player to interactively explore the many possible ways of performing with the content of the sound corpus, possibly discovering unexpected gesture-sound articulations through the interaction with the artificial agent.
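As a rough illustration of the learning loop described above (an assumption-laden sketch, not the actual AIML implementation): an agent proposes candidate gesture-to-corpus mappings, the player rewards the ones that feel musically promising, and the agent's estimates converge on preferred mappings. A simple epsilon-greedy bandit captures the essential reinforcement-learning dynamic:

```python
import random

# Illustrative sketch of the feedback loop, not the actual AIML system:
# the agent proposes candidate gesture-to-corpus-region mappings, the
# player rewards the ones that feel musically promising, and the agent
# learns which mappings to prefer.

N_MAPPINGS = 20               # hypothetical candidate mappings
values = [0.0] * N_MAPPINGS   # estimated value of each mapping
counts = [0] * N_MAPPINGS     # how often each mapping has been tried
EPSILON = 0.3                 # how often to explore a random mapping

def propose() -> int:
    """Epsilon-greedy choice of the next mapping to try on the player."""
    if random.random() < EPSILON:
        return random.randrange(N_MAPPINGS)
    return max(range(N_MAPPINGS), key=lambda m: values[m])

def update(mapping: int, reward: float) -> None:
    """Fold the player's feedback into an incremental average."""
    counts[mapping] += 1
    values[mapping] += (reward - values[mapping]) / counts[mapping]

# Simulated session: this imaginary player happens to like mapping 7.
for _ in range(200):
    m = propose()
    reward = 1.0 if m == 7 else random.uniform(0.0, 0.4)  # stand-in for human feedback
    update(m, reward)

print("preferred mapping:", max(range(N_MAPPINGS), key=lambda m: values[m]))
```

In the approach the talk describes, the mapping space is far richer (continuous mappings between sensor data and large sound corpora) and the reward comes from the performer in real time; the sketch only isolates the propose-reward-update loop.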
I will describe how AIML was used in different scenarios:
- The track “Knee-Jerk Drifter” by AQAXA (2021), in which the sound corpus contains personal sonic memories of the performer, who is then able to recollect episodes of their recent past by interacting with the artificial agent. The track combines conventional means of electronic music production with AIML performance.
- The practice of the TCP/Indeterminate Place quartet (Ek, Östersjö, Visi, & Petersson, 2021), a group of musicians performing with networked hyperorgans.
- The performance “Artificial Agents and Moving Bodies” by Nicola L. Hein (guitar, electronics), Federico Visi (sensor armbands, electronics), Simon Rose (baritone saxophone), and Ingo Reulecke (dance), presented at the International Conference Improvisation, Ecology and Digital Technology (Hein, Visi, Rose, & Reulecke, 2022). This artistic research project involves improvised interaction between human musicians, dancers, and artificial musical agents designed using recordings of previous performances by two of the musicians.
Through these works, I’ll propose some reflections on topics such as negotiating agency in human-machine performance situations, delegating musical tasks to artificial agents in a creative workflow, and the practical and aesthetic implications of using sound as data in algorithmic processes.
Federico Visi (he/they) is a researcher, composer and performer based in Berlin, Germany. He carried out his doctoral research on instrumental music and body movement at the Interdisciplinary Centre for Computer Music Research (ICCMR), University of Plymouth, UK. He currently teaches and carries out research on embodiment in network music performance at Universität der Künste Berlin and at Luleå University of Technology, where he is part of the “GEMM))) Gesture Embodiment and Machines in Music” research cluster. Under the moniker AQAXA, they have released an EP that combines conventional electronic music production techniques with the exploration of personal sonic memories by means of body movement and machine learning algorithms.
A shared concern of his research and artistic projects is the interplay between human and artificial agencies and its influence on collective and individual behaviours.

alien productions repeatedly remind us in their artistic work that we are ultimately all part of a system. The artists began involving recipients and users in creative design processes very early on, and later programs and machines. Even everyday devices equipped with a certain “intelligence” – such as household appliances, wind turbines or hairdryers – were allowed to communicate with one another in their projects. Working with animals as intelligent living creatures, and recently with AI and deep learning systems, is just a further logical step.

Tom Hodgson and James Badley
Music Streaming in the Global South: Beyond Borders? (video)
Global streaming services such as Spotify, Apple and YouTube are having an unprecedented and little-understood effect on the way music is created, distributed and consumed around the world. Moreover, the emergence and growth of home-grown streaming services suggest that traditional centres of power in the global music industry are being destabilized: Saarey in Pakistan, for example, has recently started buying publishing rights to regional catalogues; Boomplay in Nigeria is wholly owned and financed by Chinese holding company Transsion; and Douyin (TikTok) in China now has 2bn users worldwide and is acquiring tech companies in the Global North.
This paper examines the reception, on YouTube and Spotify, of the Pakistani music programme Coke Studio as it occurs across and between national borders. Focusing on the artist Eva B, we look at how Spotify’s algorithm situates her music quite specifically, and politically, between Indo-Pak genre worlds. Through a combination of ethnographic and digital methods, we interrogate the implications of this socio-algorithmic ‘bordering’ for musicians in a shifting corporate and regulatory landscape.
This talk is based on an article currently being co-authored with James Badley.
Tom Hodgson is an Assistant Professor in the Herb Alpert School of Music at UCLA. His scholarship centers on the ethnomusicology of algorithms and artificial intelligence, with a particular focus on how new digital technologies flow outwards from music streaming companies ‘downstream’ to local ethnographic sites of musical creativity in the Global South.
He is currently finishing a book about Kashmiris, music and migration. Journeys of Love: Kashmiris, Music, and the Poetics of Migration explores questions of memory and exchange among musicians in Pakistan-controlled Kashmir and the Mirpuri diaspora. One of the central themes of the book is how musicians create value and meaning in environments that are being rapidly and radically transformed by migration, changing flows of money and new technologies. His research has been published in Popular Music, Les Cahiers d’Ethnomusicologie, Sound Studies, and Performing Islam, as well as in a number of edited volumes.

Rujing Stacy Huang
Two Years with MUSAiC: Overview and Reflections
In this presentation, Rujing will reflect on her work since joining MUSAiC as a postdoc in 2020: from exploring questions of “aura” and “authenticity” through folk-rnn to bridging non-Western philosophical traditions with technological ethics, from writing about the future of ethical MIR to her most recent work that examines the shifting notions of musical labor, talent, and the “deskilled” artist in the age of AI.
Rujing Stacy Huang is an (ethno)musicologist, singer-songwriter, and currently Presidential Postdoctoral Fellow at the University of Hong Kong. Her latest research explores the ethical, cultural, and socio-political implications of artificial intelligence when applied to music. She completed her PhD in Ethnomusicology at Harvard University in 2019. Prior to joining HKU, she was a postdoctoral researcher at KTH Royal Institute of Technology, working as a member of the MUSAiC team. Her paper “De-centering the West: East Asian Philosophies and the Ethics of Applying Artificial Intelligence to Music,” with Bob L. T. Sturm and Andre Holzapfel, received the Best Special Call Paper Award at ISMIR 2021. She is a Co-Organizer of the AI Song Contest (www.aisongcontest.com), a grantee of the Asian Cultural Council, and currently a Co-Chair of the Sound Studies Section at the Society for Ethnomusicology (SEM). She is also the founder of Project Grain™ (under Grain Music International Ltd.), a creative music studio recently launched in Hong Kong.

Oded Ben-Tal
The challenge of AI, AI as a challenge (video)
I will be discussing my work with computationally creative systems as partners to human creators. In the talk I will illustrate some approaches to the challenges of integrating human and machine creativity, including in real-time improvisations. I will also discuss the challenges that this developing technology poses to current modes of making music and thinking about it.
Oded Ben-Tal is a composer and researcher working at the intersection of music, computing, and cognition. His compositions range from purely acoustic pieces to interactive, live electronic pieces and multimedia work. In recent years he has been particularly interested in the interaction between human and computational creativities. Together with Dr. Bob Sturm he developed research applying deep learning to folk musics, interrogating the creative capacity of the resulting generative system both within the folk tradition and outside it. He also uses AI-inspired approaches in the domain of interactive, live electronic music: machine listening techniques combined with algorithmically steered processes open a space for real-time musical dialogue between human performers and computer counterparts.
In 2022 he launched the Datasounds, Datasets and Datasense research network, which aims to identify core questions that will drive the next phase of data-rich music research, focused in particular on creative music making. The network stems from his own compositional interest in relating human and machine creativities, and broadens the scope to consider the implications as well as the applications of computational means used to make music, understand it, and engage with it.
Moisés Horta Valenzuela
Workshop in sound design and music composition with machine learning
The use of deep learning for artistic creation has become popular in recent years, whether to create music, visual media, or quasi-realistic versions of historical or current characters — also called “deep fakes.” In this introductory workshop we will make use of tools that allow us to work with neural networks for sound and music synthesis. The goal of the workshop is for participants to build a basic understanding of the underlying principles of deep learning and to incorporate them into their existing sound design and music composition workflows. We will use existing deep learning algorithms and analyze the philosophies behind them, while discussing different types of implementations that artists utilize in their work, as well as the ethical implications that these technologies bring to the practitioner’s process. No programming knowledge is required, but it is welcome.
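For a taste of those underlying principles, here is a toy example in Python/PyTorch (an illustrative sketch only, not the workshop's actual tools): a tiny autoencoder that learns to compress short audio frames into a small latent space and reconstruct them, the same encode-decode idea behind neural audio synthesis.

```python
import torch
import torch.nn as nn

# A toy in the spirit of the workshop (illustrative only): a tiny
# autoencoder that compresses short audio frames into a small latent
# space and reconstructs them. Real neural audio synthesis models are
# far larger, but the encode-decode principle is the same.

FRAME = 512   # samples per audio frame
LATENT = 16   # size of the compressed representation

model = nn.Sequential(
    nn.Linear(FRAME, 128), nn.Tanh(),
    nn.Linear(128, LATENT), nn.Tanh(),   # encoder: frame -> latent
    nn.Linear(LATENT, 128), nn.Tanh(),
    nn.Linear(128, FRAME),               # decoder: latent -> frame
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in training data: frames of decaying sine "plucks" at various pitches.
t = torch.arange(FRAME) / 16000.0
data = torch.stack([torch.sin(2 * torch.pi * f * t) * torch.exp(-8 * t)
                    for f in torch.linspace(200, 800, 64)])

for step in range(200):
    recon = model(data)
    loss = nn.functional.mse_loss(recon, data)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Once trained, novel sounds can be sketched by decoding new latent vectors.
z = torch.randn(1, LATENT)
new_frame = model[4:](z)   # run only the decoder half of the Sequential
print(new_frame.shape)     # torch.Size([1, 512])
```

Decoding random or interpolated latent vectors is one simple way such models slot into a sound design workflow: the latent space becomes a navigable palette of sounds.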
Bio:
Moisés Horta Valenzuela (he/him) is an autodidact sound artist, creative technologist and electronic musician from Tijuana, México, working in the fields of computer music, artificial intelligence and the history and politics of emerging digital technologies. As 𝔥𝔢𝔵𝔬𝔯𝔠𝔦𝔰𝔪𝔬𝔰, he crafts an uncanny link between ancient and modern technologies through a critical lens in the context of contemporary electronic music and the sonic arts. His work has been presented at Ars Electronica, the NeurIPS Machine Learning for Creativity and Design workshop, MUTEK México, Transart Festival, MUTEK: AI Art Lab Montréal, Elektronmusikstudion (EMS), and CTM Festival Berlin, among others. He is currently leading independently organized workshops on creative AI art practices centered on sound and image synthesis and the demystification of neural networks, developing SEMILLA, an interface for interacting with generative neural sound synthesizers, and OIR, an online channel for a semi-autonomous meta-DJ trained on thousands of hours of visuals and music from global electronic club music and techno.

Machine and Folk Music Panel (Nov. 22)
- Selected slängpolskor from the AI Music Generation Challenge 2021, performed by Olof Misgeld and Sven Ahlbäck
- Steve Benford and his Carolan Guitar
- Erik Natanael Gustafsson and his neod