Art, Music, and Culture Theme

This theme explores advanced Artificial Intelligence and Machine Learning techniques for composition, performance, and broadcasting. It will emphasise how legibility, agency and negotiability can be used to address the tension between human creativity on the one hand, and system autonomy on the other.

Because it focuses on the one hand on the nature of AI and ML, and on the other on what we can creatively do with them, this theme was originally given the alternative name ‘Art, AI-created content, & industrial/cultural effects’.

Projects funded in the Art, Music and Culture theme

The HDI theme in Music/Art/Culture has made 8 awards, ranging from £2,500 to £17,500. Together they investigate human interaction with data through artistic, social, and cultural actions, using the human voice, live coding, wearable technology, gesture sensing, synthesizers, and performance datasets.

Polyphonic Intelligence

Dr Eleni Ikoniadou, RCA in partnership with Qu Junktions and the ICA

This is an interactive installation that brings together human and nonhuman participants in an improvised choral performance. It asks how we can create a new sound that combines algorithmic processes, mythology, speculative strategies and artistic sensibilities, in order to move beyond simplistic divisions between human and machine subjects and producers.

MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding

Dr Anna Xambó Sedó, De Montfort University in partnership with Iklectik and Leicester Hackspace

This project explores new approaches to the musical practice of collaborative music live coding (CMLC). The research (1) combines machine learning algorithms with music information retrieval techniques to retrieve sounds from large crowdsourced databases; (2) highlights how the creative industries can benefit from using crowdsourced sound databases in their music production workflows; and (3) makes the case for a virtual agent companion that learns from human live coders using machine learning algorithms.
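A minimal sketch of the sound-retrieval step might look like the following, written in Python rather than a live-coding environment, and assuming the public Freesound API v2 with a personal API token as one example of a crowdsourced sound database. The endpoint, field names, and helper function are illustrative assumptions to be checked against the current API documentation.

```python
# Hedged sketch of crowdsourced sound retrieval for live coding.
# Assumes a Freesound API v2 token in the FREESOUND_TOKEN environment
# variable; endpoint and field names should be verified against the docs.
import os
import requests

FREESOUND_SEARCH = "https://freesound.org/apiv2/search/text/"

def search_sounds(query, n=5, token=None):
    """Return (id, name, preview_url) tuples for the top n matches."""
    token = token or os.environ["FREESOUND_TOKEN"]
    params = {
        "query": query,
        "page_size": n,
        "fields": "id,name,previews",
        "token": token,
    }
    resp = requests.get(FREESOUND_SEARCH, params=params, timeout=10)
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return [
        (r["id"], r["name"], r["previews"]["preview-lq-mp3"])
        for r in results
    ]

if __name__ == "__main__":
    # A live coder could call this mid-performance to pull matching sounds.
    for sound_id, name, preview in search_sounds("rain on tin roof", n=3):
        print(sound_id, name, preview)
```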

Call and Response

Dr Matthew Yee-King, Goldsmiths in partnership with Max de Wardener

This project will develop an interactive, co-creative, musical AI system and use it to create a new, recorded musical work. Through interaction between the AI and real human voices, the piece will adopt the ancient musical idea of call and response to contrast the real with the uncanny. To build the AI system we will draw on recent developments in neural network technology, in particular the Differentiable Digital Signal Processing (DDSP) technology from the Google Magenta project, which allows neural networks to include trainable digital signal processing nodes such as audio oscillators and filters.
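As an illustration of what a trainable signal processing node means in practice, here is a hedged sketch in plain PyTorch (not the Magenta DDSP API itself): a harmonic oscillator whose harmonic amplitudes are learnable parameters, so gradients from an audio loss can flow back into the synthesis stage.

```python
# Illustrative sketch of a differentiable oscillator (not the DDSP API).
import math
import torch

class HarmonicOscillator(torch.nn.Module):
    def __init__(self, n_harmonics=16, sample_rate=16000):
        super().__init__()
        self.sample_rate = sample_rate
        # Learnable (log-)amplitudes for each harmonic partial.
        self.log_amps = torch.nn.Parameter(torch.zeros(n_harmonics))

    def forward(self, f0_hz, n_samples):
        """Render n_samples of audio for a constant fundamental f0_hz."""
        t = torch.arange(n_samples) / self.sample_rate
        harmonics = torch.arange(1, self.log_amps.numel() + 1)
        # (n_harmonics, n_samples) phase matrix.
        phases = 2 * math.pi * f0_hz * harmonics[:, None] * t[None, :]
        amps = torch.softmax(self.log_amps, dim=0)[:, None]
        return (amps * torch.sin(phases)).sum(dim=0)

# Toy training step: match the synthesiser's output to a target waveform.
synth = HarmonicOscillator()
target = torch.sin(2 * math.pi * 220.0 * torch.arange(16000) / 16000)
opt = torch.optim.Adam(synth.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    audio = synth(f0_hz=220.0, n_samples=16000)
    loss = torch.mean((audio - target) ** 2)
    loss.backward()
    opt.step()
```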

Embodied Companionship

Despina Papadopoulos, RCA in partnership with Umbrellium

This project will create a wearable exploration of markers of aliveness and the potential for affective relationships with algorithmically driven technological artifacts. A scarf detects motion and activates conductive threads to produce a sensation of heat around the wearer’s neck. The current interaction is based on tracking usage, in terms of frequency and duration, with a simple algorithm. We would like to significantly expand the scarf’s relationship to its environment and its wearer, so as to move away from an interactive relationship and explore an intra-active engagement between machine and human.
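A minimal sketch of the kind of simple usage-tracking algorithm described above might look like this, where read_motion() and set_heat() are hypothetical stand-ins for the scarf’s real sensor and actuator drivers.

```python
# Hedged sketch: track how often and how long the scarf detects motion,
# and modulate the heating threads accordingly.
import time

def read_motion() -> bool:
    """Hypothetical hook for the scarf's motion sensor."""
    return False

def set_heat(level: float) -> None:
    """Hypothetical hook driving the conductive threads (0.0 to 1.0)."""
    pass

def run(poll_s: float = 0.1, window_s: float = 60.0) -> None:
    events = []          # start times of recent movement episodes
    active_since = None  # start of the current movement episode, if any
    while True:
        now = time.monotonic()
        if read_motion():
            if active_since is None:      # a new episode has begun
                active_since = now
                events.append(now)
        else:
            active_since = None
        # Frequency: how many episodes began within the sliding window.
        events = [t for t in events if now - t < window_s]
        duration = (now - active_since) if active_since else 0.0
        # Warmth grows with both how often and how long the wearer moves.
        set_heat(min(1.0, len(events) / 10 + duration / 30))
        time.sleep(poll_s)
```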

Assisted Interactive Machine Learning for Musicians

Dr Michael Zbyszynski, Goldsmiths in partnership with Music Hackspace

Designing gestural interactions between body movement and sound synthesis is a multifaceted process. Modern sound synthesis techniques are often characterised by a high number of parameters one can manipulate in order to make different sounds. The user-centric interface design methods enabled by machine learning delineate a scenario in which exploring gestural mappings can be done interactively, intuitively, and with the assistance of algorithms that become part of the creative tool set of musicians. In this project, we will create music timbre information spaces that are satisfying for non-specialist musicians to perform.
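Interactive machine learning of this kind is commonly realised as supervised regression from gesture features to synthesis parameters: the musician records a handful of demonstration pairs, trains a model, and then plays through it. The sketch below uses scikit-learn; the feature and parameter dimensions are illustrative assumptions, not the project’s actual design.

```python
# Hedged sketch of an interactive-machine-learning mapping workflow.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Demonstration pairs: 6-D gesture features (e.g. accelerometer and bend
# sensors) -> 3 synth parameters (e.g. pitch, filter cutoff, grain size),
# all normalised to [0, 1]. Values here are placeholders.
gestures = np.array([
    [0.1, 0.2, 0.0, 0.3, 0.1, 0.0],
    [0.8, 0.7, 0.9, 0.6, 0.8, 0.7],
    [0.4, 0.5, 0.5, 0.4, 0.5, 0.5],
])
params = np.array([
    [0.2, 0.1, 0.3],
    [0.9, 0.8, 0.7],
    [0.5, 0.5, 0.5],
])

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
model.fit(gestures, params)

# At performance time, each incoming gesture frame is mapped to parameters.
new_gesture = np.array([[0.6, 0.6, 0.7, 0.5, 0.6, 0.6]])
pitch, cutoff, grain = np.clip(model.predict(new_gesture)[0], 0.0, 1.0)
print(pitch, cutoff, grain)
```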

Negotiating Deep Agency in Embodied Sonic Interaction

Tim Murray-Browne, Panagiotis Tigas, with the support of the University of Glasgow

This project addresses the topics of ‘agency’ and ‘negotiability’ of user interfaces in the domain of musical expressivity. An interface will transform full-body movement into sound. Rather than sourcing a universal dataset to train this system, we propose to build our own training set of movement data. We will use this to train a deep neural network (DNN) with unsupervised learning techniques such as variational autoencoders, and map the latent space of this model to the parameters of a sound synthesis engine.
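A hedged sketch of that pipeline is shown below, assuming pose frames of 51 values (17 joints times 3 coordinates) and a 4-dimensional latent space routed to four synthesis parameters; the project’s actual architecture and dimensionalities may differ.

```python
# Hedged sketch: a small VAE over pose frames whose latent coordinates
# are routed to synthesis parameters.
import torch
import torch.nn as nn

class PoseVAE(nn.Module):
    def __init__(self, pose_dim=51, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, pose_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    recon_err = ((x - recon) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon_err + kl

# After unsupervised training, a live pose frame is encoded and its latent
# coordinates scaled into synthesis parameters (e.g. for an OSC message).
vae = PoseVAE()
pose = torch.rand(1, 51)               # placeholder for a captured frame
_, mu, _ = vae(pose)
synth_params = torch.sigmoid(mu).squeeze(0).tolist()  # 4 values in (0, 1)
```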

Spoken Word Data: Artistic Interventions in the Everyday Spaces of Digital Capitalism

Pip Thornton, University of Edinburgh, with the Scottish Poetry Library and the Fruitmarket Gallery

This project contributes to an ongoing academic and artistic intervention into the ways in which everyday language use has become commodified by the technologies of digital capitalism, and into the significant ethical implications this development has for our lives through personalisation and the transformation of the spaces in which we live. By finding ways to make the processes and politics of the digital economy more visible and legible to wide and diverse audiences, the project proposes two new artistic interventions that expose and challenge the ways in which AI-based technologies such as search engines, home assistants, and other smart IoT devices transform, twist, and render into commodities the words that we speak.

Creative AI Dataset for Live, Co-operative Music Performance

Prof Craig Vear, De Montfort University

This project aims to design, develop and demonstrate a unique dataset capturing human creativity from inside the embodied relationships of music performance. Understanding such a critical and fruitful mesh of environment, embodiment, new goal creation, and human-machine cooperation will allow the project to cast a new light on the role of embodiment in HDI in music. It will also gather empirical data about the way in which human beings can create and enjoy music performance through their data streams. This dataset will be the first of its kind to deal with embodied creativity, and will have potential impact upon the fields of realtime performance, game engine integration, companion-bots, healthcare interventions through art, and other interpersonal interaction between people, their data and their machines.
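One way such a dataset’s records might be laid out, purely as an illustrative assumption rather than the project’s actual schema, is as time-aligned frames of audio features, movement data, machine output, and creative annotations.

```python
# Hedged sketch of a possible record layout for an embodied-performance
# dataset; all field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PerformanceFrame:
    timestamp_s: float                 # time since start of performance
    audio_features: List[float]        # e.g. loudness/pitch estimates or MFCCs
    motion: Dict[str, List[float]]     # joint or sensor name -> xyz values
    machine_output: Dict[str, float]   # parameters the co-operative AI emitted
    annotation: str = ""               # e.g. a note on a new creative goal

@dataclass
class PerformanceSession:
    performers: List[str]
    piece: str
    frames: List[PerformanceFrame] = field(default_factory=list)
```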