Art, Music, and Culture

This theme explores advanced Artificial Intelligence and Machine Learning techniques for composition, performance, and broadcasting. It will emphasise how legibility, agency and negotiability can be used to address the tension between human creativity on the one hand, and system autonomy on the other. The academic lead for this theme is Atau Tanaka (Goldsmiths).

Projects funded in the Art, Music and Culture theme

The HDI theme in Music/Art/Culture has made awards ranging from £2,500 to £17,500. Together, the funded projects investigate human interaction with data through artistic, social, and cultural practice, using the human voice, live coding, wearable technology, gesture sensing, synthesizers, and performance datasets.

Polyphonic Intelligence

Dr Eleni Ikoniadou, RCA in partnership with Qu Junktions and the ICA

This is an interactive installation that brings together human and nonhuman participants in an improvised choral performance. It asks how we can create a new sound that draws on algorithmic processes, mythology, speculative strategies, and artistic sensibilities in order to move beyond simplistic divisions between human and machine subjects and producers.

MIRLCAuto: A Virtual Agent for Music Information Retrieval in Live Coding

Dr Anna Xambó Sedó, De Montfort University in partnership with Iklectik and Leicester Hackspace

This project explores new approaches to the musical practice of collaborative music live coding (CMLC). The research (1) combines machine learning algorithms with music information retrieval techniques to retrieve sounds from large crowdsourced databases; (2) shows how the creative industries can benefit from using crowdsourced sound databases in their music production workflows; and (3) proposes a virtual agent companion that learns from human live coders using machine learning algorithms.
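The retrieval step described above can be sketched as a nearest-neighbour search over audio feature vectors. The sound names and features below are hypothetical placeholders, not the project's actual database or pipeline:

```python
import numpy as np

# Hypothetical crowdsourced database: each sound is described by a
# feature vector (spectral centroid Hz, RMS loudness, duration s).
database = {
    "ocean_waves.wav":   np.array([420.0, 0.12, 8.5]),
    "vinyl_crackle.wav": np.array([3100.0, 0.05, 4.2]),
    "low_drone.wav":     np.array([180.0, 0.30, 12.0]),
    "bird_song.wav":     np.array([2900.0, 0.08, 3.1]),
}

def retrieve(query, db, k=2):
    """Return the k sounds whose features are closest to the query."""
    names = list(db)
    feats = np.stack([db[n] for n in names])
    # Normalise each feature dimension so no single unit dominates.
    mu, sigma = feats.mean(axis=0), feats.std(axis=0)
    dists = np.linalg.norm((feats - mu) / sigma - (query - mu) / sigma, axis=1)
    order = np.argsort(dists)
    return [names[i] for i in order[:k]]

# A live coder asks for something bright and short, like bird song.
print(retrieve(np.array([2800.0, 0.07, 3.0]), database))
```

In a live-coding session this lookup would be wrapped in a terse performable call, with the agent learning which retrievals the coder keeps or discards.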

Call and Response

Dr Matthew Yee-King, Goldsmiths in partnership with Max de Wardener

This project will develop an interactive, co-creative, musical AI system and use it to create a new, recorded musical work. Through interaction between the AI and real human voices, the piece will adopt the ancient musical idea of call and response to contrast the real with the uncanny. We will make use of recent developments in neural network technology to create the AI system, in particular the Differentiable Digital Signal Processing (DDSP) technology from the Google Magenta project, which allows neural networks to include trainable digital signal processing nodes such as audio oscillators and filters.
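The core DDSP idea, a synthesiser whose parameters can receive gradients, can be sketched without a neural network at all: below, the amplitudes of a bank of harmonic oscillators are fitted to a target waveform by gradient descent. This is an illustrative toy in plain NumPy with an analytic gradient, not the Magenta DDSP library itself:

```python
import numpy as np

SR, F0, N_HARM, N = 8000, 220.0, 4, 1024  # sample rate, pitch, harmonics, length
t = np.arange(N) / SR
# Oscillator bank: row k is sin(2*pi*(k+1)*F0*t).
basis = np.sin(2 * np.pi * F0 * np.outer(np.arange(1, N_HARM + 1), t))

def synth(amps):
    """Additive synth: weighted sum of harmonic oscillators."""
    return amps @ basis

# Target sound with known harmonic amplitudes we hope to recover.
true_amps = np.array([1.0, 0.5, 0.25, 0.125])
target = synth(true_amps)

amps = np.zeros(N_HARM)               # the "trainable" synth parameters
lr = 1e-3
for _ in range(200):
    err = synth(amps) - target        # residual waveform
    grad = 2 * (basis @ err)          # d(loss)/d(amps) for loss = sum(err**2)
    amps -= lr * grad                 # gradient-descent update

print(np.round(amps, 3))              # ~ [1.0, 0.5, 0.25, 0.125]
```

In DDSP proper, a neural network predicts these synthesis parameters from audio features, and the gradient flows through the oscillators back into the network weights.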

Embodied Companionship

Despina Papadopoulos, RCA in partnership with Umbrellium

This project will create a wearable exploration of markers of aliveness and of the potential for affective relationships with algorithmically driven technological artifacts. A scarf detects motion and activates conductive threads to produce a sensation of heat around the wearer's neck. The current interaction is based on tracking usage, in terms of frequency and duration, using a simple algorithm. We would like to significantly expand the scarf's relationship to its environment and its wearer, so as to move away from an interactive relationship and explore an intra-active engagement between machine and human.
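A usage-tracking rule of the kind described, heat driven by the frequency and duration of recent motion, might look like the following. All thresholds and weights here are hypothetical, not the project's actual tuning:

```python
class ScarfHeatController:
    """Toy model of the scarf's usage tracking: heat level follows how
    often and how long the wearer has moved within a recent window."""

    def __init__(self, window=60.0):
        self.window = window   # seconds of history considered "recent"
        self.events = []       # (start_time, duration) of motion episodes

    def record_motion(self, start, duration):
        self.events.append((start, duration))

    def heat_level(self, now):
        """Heat in [0, 1] from recent interaction frequency and duration."""
        recent = [(s, d) for s, d in self.events if now - s <= self.window]
        frequency = len(recent) / 10.0                 # ~10 episodes saturates
        total_dur = sum(d for _, d in recent) / 30.0   # ~30 s of motion saturates
        return min(1.0, 0.5 * frequency + 0.5 * total_dur)

scarf = ScarfHeatController()
scarf.record_motion(start=0.0, duration=6.0)
scarf.record_motion(start=20.0, duration=9.0)
print(scarf.heat_level(now=30.0))   # 0.5*(2/10) + 0.5*(15/30) = 0.35
```

The intra-active ambition described above would replace this one-way mapping with something whose state and the wearer's state shape each other over time.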

Assisted Interactive Machine Learning for Musicians

Dr Michael Zbyszynski, Goldsmiths in partnership with Music Hackspace

Designing gestural interactions between body movement and sound synthesis is a multifaceted process. Modern sound synthesis techniques are often characterised by a large number of parameters that can be manipulated to make different sounds. The user-centric interface design methods enabled by machine learning allow gestural mappings to be explored interactively, intuitively, and with the assistance of algorithms that become part of the musician's creative tool set. In this project, we will create musical timbre information spaces that are satisfying for non-specialist musicians to perform.
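Interactive machine learning for gesture mapping is commonly implemented as supervised regression over a handful of demonstrated examples, in the style of tools such as Wekinator. The sketch below uses a hypothetical two-feature gesture and k-nearest-neighbour regression; it stands in for, rather than reproduces, the project's own system:

```python
import numpy as np

class GestureMapper:
    """Maps gesture feature vectors to synth parameters via k-NN regression."""

    def __init__(self, k=2):
        self.k = k
        self.gestures, self.params = [], []

    def add_example(self, gesture, params):
        """Record one demonstration: 'when I move like this, sound like that'."""
        self.gestures.append(np.asarray(gesture, float))
        self.params.append(np.asarray(params, float))

    def map(self, gesture):
        """Interpolate synth parameters from the k nearest demonstrations."""
        g = np.asarray(gesture, float)
        dists = np.array([np.linalg.norm(g - x) for x in self.gestures])
        nearest = np.argsort(dists)[: self.k]
        # Inverse-distance weighting; epsilon avoids division by zero.
        w = 1.0 / (dists[nearest] + 1e-9)
        return (w @ np.stack([self.params[i] for i in nearest])) / w.sum()

# Two demonstrations: [hand height, tilt] -> [filter cutoff Hz, grain size].
m = GestureMapper()
m.add_example([0.0, 0.0], [200.0, 0.1])   # low, flat hand -> dark, smooth sound
m.add_example([1.0, 1.0], [4000.0, 0.9])  # high, tilted hand -> bright, grainy
print(m.map([0.5, 0.5]))                  # midpoint -> halfway between the two
```

The interactive loop consists of the musician adding or deleting examples and immediately playing the resulting mapping, rather than tuning synthesis parameters by hand.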

Spoken Word Data: Artistic Interventions in the Everyday Spaces of Digital Capitalism

Pip Thornton, University of Edinburgh, with the Scottish Poetry Library and the Fruitmarket Gallery

This project contributes to an ongoing academic and artistic intervention into the ways in which everyday language use has been commodified by the technologies of digital capitalism, and into the significant ethical implications this development has for our lives through personalisation and the transformation of the spaces in which we live. By making the processes and politics of the digital economy more visible and legible to wide and diverse audiences, the project proposes two new artistic interventions that expose and challenge the ways in which AI-based technologies such as search engines, home assistants, and other smart IoT devices transform, twist, and commodify the words we speak.

Creative AI Dataset for Live, Co-operative Music Performance

Prof Craig Vear, De Montfort University

This project aims to design, develop, and demonstrate a unique dataset capturing human creativity from inside the embodied relationships of music performance. Understanding such a critical and fruitful mesh of environment, embodiment, new goal creation, and human-machine cooperation will allow the project to cast new light on the role of embodiment in HDI in music. It will also gather empirical data about the ways in which human beings can create and enjoy music performance through their data streams. This dataset will be the first of its kind to deal with embodied creativity, with potential impact on the fields of realtime performance, game engine integration, companion bots, healthcare interventions through art, and other interpersonal interactions between people, their data, and their machines.

Design of ‘Data Dialogues’ in Media Recommenders

N. Sailaja & D. McAuley (U. Nottingham), R. Jones (BBC R&D)

This proposal outlines research that contributes to the HDI themes of legibility and negotiability, contextualised within the scope of media recommenders. It aims to produce a set of design implications that help realise improved data legibility and negotiability within media recommenders through the development of a 'data dialogue' between audiences and the system. This would be achieved first through a series of four focus groups in which audience members discuss and debate the challenges of data leverage by media recommenders and the inclusion and design of a 'data dialogue' to respond to them. A co-design session would follow, in which findings and assets from the focus groups would enable audience members to participate in the creation of future 'data dialogues' within media recommenders. The findings of these sessions would contribute to a set of tangible and practical design guidelines for realising data legibility and negotiability within media recommenders. The results would be submitted for publication at ACM CHI, reported in a publicly available industry report, feed directly into the creation of a novel media recommender within BBC R&D, and form part of a wider research partnership between the University of Nottingham and the BBC on cross-media recommenders and the use of personal data stores.