Meetings

The Music Computing Lab has regular research meetings. The meeting format is flexible and includes research talks, seminars, demonstrations of prototypes, discussions of selected journal articles, and hands-on tutorials on tools and techniques.


Regular Music Computing Research meetings continue every other Thursday at 3 pm.


Previous Talks and Seminars

10th July 2014

Speakers: Nanda Khaorapapong, Doon MacDonald and Tony Stockman – Queen Mary University of London
Title: Methods and tools for interaction design beyond the GUI
Abstract:
We present a series of projects which focus on the development and evaluation of non-visual interaction design. First, we look at a number of examples and issues raised in the use of audio to improve human performance in both physical activity and spatial cognition tasks. We then take a step back to consider methods for auditory display design, in particular examining ways in which soundtrack composition can inform the practice of auditory display design. We go on to look at ways in which haptic interfaces may be used as a discreet means of cueing human social interactions. We conclude by pulling together issues inherent to the development of non-traditional interfaces and problems that arise in their evaluation with both specific and mainstream populations.
Bios:
Tony Stockman is a Senior Lecturer in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. His research interests include the design and evaluation of auditory displays, assistive technology and data sonification. He is the president of the International Community for Auditory Display (www.icad.org).
Nanda Khaorapapong is a PhD student in the School of Electronic Engineering and Computer Science at Queen Mary, University of London. Her research focuses on facilitating offline social interaction through haptic stimuli embedded in wearable technology, and on embedded and natural interface technologies more broadly.
Doon MacDonald is a musician and a final-year PhD student at Queen Mary, University of London, researching interface design and developing compositional tools and methods for the design of auditory displays and sonification. Doon is interested in creating accessible and aesthetically driven approaches to sound design that focus on interaction, usability and enjoyment.

26th June 2014

Speaker: Prof John Rink – Musical Performance Studies at the University of Cambridge
Title: Creating the Musical Work: From Archive to ‘Dynamic Edition’
Abstract:
This talk will begin by describing the primary sources of the music of Fryderyk Chopin and the considerable complexities surrounding them; it will then review several recent projects focusing on these sources, including the Online Chopin Variorum Edition. This ten-year initiative has led to the development of a unique ‘dynamic edition’ in which digital images of the manuscripts and multiple versions of the first editions of Chopin’s music are made available to users in a number of different formats. Not only will the features and functions of the Chopin Variorum be outlined, but so too will some of the difficulties that have been encountered in preparing and presenting the digital material as well as the metadata accompanying it.
Bio:
John Rink is Professor of Musical Performance Studies at the University of Cambridge, Fellow and Director of Studies in Music at St John’s College, and Director of the AHRC Research Centre for Musical Performance as Creative Practice. He also directs the Mellon-funded Online Chopin Variorum Edition as well as The Complete Chopin – A New Critical Edition. He specialises in Chopin studies, performance studies, music theory and analysis, and digital applications in musicology. He has published six books with Cambridge University Press, including The Practice of Performance: Studies in Musical Interpretation (1995), Chopin: The Piano Concertos (1997), Musical Performance: A Guide to Understanding (2002), and Annotated Catalogue of Chopin’s First Editions (with Christophe Grabowski; 2010). He is also General Editor of the five-book series Studies in Musical Performance as Creative Practice, which Oxford University Press will publish in 2015.

5th June 2014

Andrea Franceschini, Tom Mudd, Tony Steffert and Anthony Prechtl presented talks on their PhD research at the Centre for Research in Computing (CRC) 2014 conference.

22nd May 2014

Music Computing Lab regular meeting discussing developments in members’ recent research.

24th April 2014

Speaker: Rebecca Fiebrink – Lecturer in Graphics and Interaction at Goldsmiths, University of London
Title: Interactive Machine Learning for End-User Systems Building in Music Composition & Performance
Abstract:
I build, study, teach about, and perform with new human-computer interfaces for real-time digital music performance. Much of my research concerns the use of supervised learning as a tool for musicians, artists, and composers to build digital musical instruments and other real-time interactive systems. Through the use of training data, these algorithms offer composers and instrument builders a means to specify the relationship between low-level, human-generated control signals (such as the outputs of gesturally-manipulated sensor interfaces, or audio captured by a microphone) and the desired computer response (such as a change in the parameters driving computer-generated audio). The task of creating an interactive system can therefore be formulated not as a task of writing and debugging code, but rather one of designing and revising a set of training examples that implicitly encode a target function, and of choosing and tuning an algorithm to learn that function.

In this talk, I will provide a brief introduction to interactive computer music and the use of supervised learning in this field. I will show a live musical demo of the software that I have created to enable non-computer-scientists to interactively apply standard supervised learning algorithms to music and other real-time problem domains. This software, called the Wekinator, supports human interaction throughout the entire supervised learning process, including the generation of training data by real-time demonstration and the evaluation of trained models through hands-on application to real-time inputs.

Drawing on my work with users applying the Wekinator to real-world problems, I’ll discuss how data-driven methods can enable more effective approaches to building interactive systems, through supporting rapid prototyping and an embodied approach to design, and through “training” users to become better machine learning practitioners. I’ll also discuss some of the remaining challenges at the intersection of machine learning and human-computer interaction that must be addressed for end users to apply machine learning more efficiently and effectively, especially in interactive contexts.
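
The workflow described above can be made concrete with a small sketch. This is not the Wekinator’s actual code (the Wekinator is a standalone application); scikit-learn’s k-nearest-neighbour regressor stands in for the learning back end, and the sensor values and synth parameters below are invented for illustration.

```python
# A sketch of mapping-by-example, not the Wekinator's actual code:
# scikit-learn's k-nearest-neighbour regressor stands in for the
# learning back end, and all sensor/parameter values are invented.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Training examples: each sensor frame (e.g. three accelerometer axes)
# is paired with the synth parameters (pitch in Hz, filter cutoff in Hz)
# demonstrated for that pose.
sensor_frames = np.array([
    [0.1, 0.9, 0.2],
    [0.8, 0.1, 0.5],
    [0.4, 0.4, 0.9],
])
synth_params = np.array([
    [220.0, 400.0],    # low pitch, dark timbre
    [440.0, 2000.0],   # mid pitch, bright timbre
    [880.0, 1200.0],
])

model = KNeighborsRegressor(n_neighbors=1).fit(sensor_frames, synth_params)

# At performance time, each incoming frame is mapped to parameter values;
# the musician revises the mapping by adding or deleting examples and
# retraining, rather than by editing code.
print(model.predict(np.array([[0.5, 0.3, 0.7]])))
```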

Bio:
Rebecca Fiebrink is a Lecturer in Graphics and Interaction at Goldsmiths, University of London. As both a computer scientist and a musician, she is interested in creating and studying new technologies for music composition and performance. Much of her current work focuses on applications of machine learning to music: for example, how can machine learning algorithms help people to create new digital musical instruments by supporting rapid prototyping and a more embodied approach to design? How can these algorithms support composers in creating real-time, interactive performances in which computers listen to or observe human performers, then respond in musically appropriate ways? She is interested both in how techniques from computer science can support new forms of music-making, and in how applications in music and other creative domains demand new computational techniques and bring new perspectives to how technology might be used and by whom.

Fiebrink is the developer of the Wekinator system for real-time interactive machine learning, and she frequently collaborates with composers and artists on digital media projects. She has worked extensively as a co-director, performer, and composer with the Princeton Laptop Orchestra, which performed at Carnegie Hall and has been featured in the New York Times, the Philadelphia Inquirer, and NPR’s All Things Considered. She has worked with companies including Microsoft Research, Sun Microsystems Research Labs, Imagine Research, and Smule, where she helped to build the #1 iTunes app “I am T-Pain.” Recently, Rebecca has enjoyed performing as the principal flutist in the Timmins Symphony Orchestra, as the keyboardist in the University of Washington computer science rock band “The Parody Bits,” and as a laptopist in the Princeton-based digital music ensemble, Sideband. She holds a PhD in Computer Science from Princeton University and a Master’s in Music Technology from McGill University.

27th March 2014

Anthony Prechtl previews “Algorithmic music as intelligent game music”, to be presented next month at AISB50: The 50th Annual Convention of the AISB at Goldsmiths, University of London.

13th February 2014

Music Computing Lab discussing developments in members’ recent research.

6th December 2013

Kurijn Buys gives a talk entitled “Simulink as a Musical Computing Playground”.

28th Nov 2013

Tom Mudd gives a talk entitled “Dynamical Systems and Digital Musical Instruments”.

24th Oct 2013

Regular Music Computing Lab meeting introducing new members and discussing developments in existing members’ recent research.

23rd May 2013

Andrea Franceschini talks about how programs written in several different languages can talk to each other without requiring major rewriting or arcane wizardry.
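
The announcement does not say which mechanism was covered, but one widely used approach in music software is Open Sound Control (OSC) over UDP. A minimal sketch using the python-osc package follows; the address, port and message path are illustrative (57120 is SuperCollider’s default language port).

```python
# A minimal sketch of cross-language communication via Open Sound
# Control (OSC), using the python-osc package. The address, port and
# message path are illustrative; 57120 is SuperCollider's default
# language port.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)

# Any OSC-aware program (SuperCollider, Max/MSP, Pd, ...) can receive
# this message, regardless of the language it is written in.
client.send_message("/synth/freq", 440.0)
```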

1st May 2013

Anna Xambó demonstrates SuperCollider, a programming language and environment often used for music applications.

17th April 2013

Anthony Prechtl demonstrates Max/MSP techniques and the integration of Java and C.

21st February 2013

Roundtable discussion of recent literature of interest:

  • Influence of the tonality of Japanese traditional music on Japanese approaches to Western music, from classical to J-Pop.
  • Music as Narrative (Fred Everett Maus).
  • PsySound3: Software for Acoustical and Psychoacoustical Analysis of Sound Recordings (Densil Cabrera, Sam Ferguson and Emery Schubert), and related toolboxes.
  • Sweet Anticipation (David Huron).
  • Novice Collaboration in Solo and Accompaniment Improvisation (Hansen and Anderson).
  • The relation between language learning and music learning in young children.
  • The Singing Neanderthals (Steven Mithen).

……………….

6th May 2011

(Talks by Tom Collins and Vassilis Angelis as part of the Music Postgraduate Research Day)

Discovering translational patterns in symbolic representations of music

Tom Collins
Typically, to become familiar with a piece, one studies/plays through the score and listens, gaining an appreciation of where and how material is reused. The literature on music information retrieval (MIR) contains several algorithmic approaches to this task, referred to as ‘intra-opus’ pattern discovery. Given a piece of music in a symbolic representation, the aim is to define and evaluate an algorithm that returns patterns occurring within the piece. Some potential applications for such an algorithm are: (1) a pattern discovery tool to aid music students; (2) comparing an algorithm’s discoveries with those of a music expert as a means of investigating human perception of music; (3) stylistic composition (the process of writing in the style of another composer or period) assisted by using the patterns/structure returned by a pattern discovery algorithm. The presentation will look at how my research has improved upon current pattern discovery algorithms.
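
As a concrete illustration of the core idea behind translational pattern discovery (a sketch of the general approach used in algorithms such as SIA, not of the improvements discussed in the talk): represent notes as (onset, pitch) points, compute the translation vector between every pair of points, and group points by vector. The toy “piece” below is invented.

```python
# Toy sketch of translational pattern discovery in a point-set
# representation (the core idea behind algorithms such as SIA, not the
# talk's improvements). Notes are (onset, MIDI pitch) points; points
# sharing a translation vector form a pattern that recurs under it.
from collections import defaultdict
from itertools import combinations

# A three-note motif and its repeat two beats later, two semitones higher.
points = sorted([(0, 60), (1, 62), (2, 64), (2, 62), (3, 64), (4, 66)])

patterns = defaultdict(list)
for p, q in combinations(points, 2):
    vector = (q[0] - p[0], q[1] - p[1])
    patterns[vector].append(p)

# Report translation vectors under which at least three points recur.
for vector, pattern in sorted(patterns.items(), key=lambda kv: -len(kv[1])):
    if len(pattern) >= 3:
        print(f"pattern {pattern} recurs under translation {vector}")
```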

A preliminary investigation of a computational model of rhythm perception using polyrhythms as stimuli

Vassilis Angelis
Different models have been developed to explain how humans perceive rhythm in music. Here we concentrate on a computational model that employs a neurobiological approach, according to which aspects of rhythm perception could be directly grounded in the dynamics of neural activity (Large et al., 2010). To date, testing of this model has mainly been done by stimulating it with metrical stimuli. The outputs of the model have been used to provide potential explanations of certain behaviours encountered in rhythm perception, such as the tendency of human tapping to precede sequence tones by a few tens of milliseconds (Large, 2008). In this paper we present a preliminary investigation of this model using polyrhythmic stimuli, the assumptions involved in carrying out this investigation, and the results obtained. To explore how well the computational model matches the range of human tapping behaviour in polyrhythms, we used as a benchmark an experiment on human subjects and polyrhythms by Handel & Oshinsky (1981), in which subjects were asked to tap along with polyrhythmic stimuli, implicitly leaving them the choice of tapping out one of the regular streams, the cross-rhythm, or any other pattern.
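
A minimal sketch of the kind of model under discussion (illustrative parameter values, not the authors’ implementation): a damped nonlinear oscillator in canonical form, driven by a 3-against-2 polyrhythmic pulse train. A resonator of this kind rings loudest when pulses arrive in phase with its natural frequency, so sweeping the natural frequency across a bank of such oscillators is one way to ask which pulse the model “hears”.

```python
# Illustrative sketch (not the authors' implementation): a damped
# nonlinear oscillator in canonical form, driven by a 3-against-2
# polyrhythmic pulse train. A 2 Hz oscillator should resonate with
# the 2 Hz stream of the polyrhythm.
import numpy as np

fs = 200                       # integration rate (Hz)
t = np.arange(0, 10, 1 / fs)   # 10 seconds

# Polyrhythmic stimulus: brief pulses at 2 Hz and 3 Hz.
stimulus = np.zeros_like(t)
for rate in (2.0, 3.0):
    onsets = np.arange(0, 10, 1 / rate)
    stimulus[np.searchsorted(t, onsets)] = 1.0

alpha, beta = -1.0, -10.0      # damping and amplitude saturation
omega = 2 * np.pi * 2.0        # natural frequency: 2 Hz
coupling = 50.0

z = 0j
amplitude = np.empty_like(t)
for i, s in enumerate(stimulus):
    # Canonical form: dz/dt = z(alpha + i*omega + beta|z|^2) + c*s(t)
    z += (z * (alpha + 1j * omega + beta * abs(z) ** 2) + coupling * s) / fs
    amplitude[i] = abs(z)

# Sweeping omega across a bank of oscillators and comparing mean
# amplitudes is one way to ask which pulse the model "hears".
print(f"mean amplitude over the last second: {amplitude[-fs:].mean():.3f}")
```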

28th April 2011

Buzzing to Play: Lessons Learned From an In the Wild Study of Real-time Vibrotactile Feedback

Janet van der Linden, The Open University
Rose Johnson, The Open University
Jon Bird, The Open University
Yvonne Rogers, The Open University
Erwin Schoonderwaldt, Institute for Music Physiology and Musicians’ Medicine
Abstract:
Vibrotactile feedback offers much potential for facilitating and accelerating how people learn sensory-motor skills that typically take hundreds of hours to learn, such as learning to play a musical instrument, skiing or swimming. However, there is little evidence of this benefit materializing outside of research lab settings. We describe the findings of an in-the-wild study that explored how to integrate vibrotactile feedback into a real-world teaching setting. The focus of the study was on exploring how children of different ages, learning to play the violin, can use real-time vibrotactile feedback. Many of the findings were unexpected, showing how students and their teachers appropriated the technology in creative ways. We present some ‘lessons learned’ that are also applicable to other training settings, emphasizing the need to understand how vibrotactile feedback can switch between being foregrounded and backgrounded depending on the demands of the task, the teacher’s role in making it work and when feedback is most relevant and useful. Finally, we discuss how vibrotactile feedback can provide a new language for talking about the skill being learned that may also play an instrumental role in enhancing learning.
(Hosted by HCI seminar series)

6th April 2011

SuperCollider Workshop.

Gerard Roma (visiting researcher from Universitat Pompeu Fabra, Barcelona) kindly ran a superb and well-received hands-on workshop and theoretical overview of SuperCollider.
http://supercollider.sourceforge.net/

30th March 2011

Kindly co-hosted by the Human-Centred Computing Seminar Series
Dan Stowell (Queen Mary University of London)

Developing and evaluating systems for cyber-beatboxing

Abstract:
Most of us make expressive use of our voice timbre in everyday conversation; and beatboxers and other extended-technique vocal performers take timbre modulations to another level. Yet vocal timbre is an under-utilised dimension in musical interfaces, perhaps because of difficulties in analysing and mapping timbre. In this talk Dan will discuss his research on vocal timbre interfaces, considering different technical strategies to achieve effective real-time mappings useful for on-stage performance.

Evaluating such systems is crucial for understanding how they succeed and fail, and how they might be adopted into performers’ practice, yet evaluation through standard task-focussed experiments is less useful for expressive musical systems. Dan will discuss the development of a qualitative approach used to explore how beatboxers understand a system after interacting with it.

29th March 2011

Gerard Roma, visiting student from the Universitat Pompeu Fabra in Barcelona, will give us an informal presentation about his work. He is a PhD student in their Music Technology Group, working on sound description. Feel free to bring others along who may be interested.

15th March 2011

Tom Collins will give a short talk about a model for stylistic composition and its evaluation. There are two related papers:

  • Pearce, M.T., and G.A. Wiggins, ‘Evaluating cognitive models of musical composition’, in eds. A. Cardoso and G.A. Wiggins, Proceedings of the Fourth International Joint Workshop on Computational Creativity (Goldsmiths, University of London, 2007), 73–80.
  • Collins, David, ‘A synthesis process model of creative thinking in music composition’, Psychology of Music 33(2) (2005), 193–216.

1st March 2011

Anna Xambó and Rose Johnson giving a presentation on the TEI (Tangible, Embedded and Embodied Interaction) conference they attended in Madeira, including an overview of their favourite papers and demos, and of the studios they attended on the first day.

6th December 2010

Tom Collins and Vassilis Angelis giving an informal presentation on Ed Large’s theory of meter induction, and a general discussion of Pulse and Meter as Neural Resonance by Edward W. Large and Joel S. Snyder. If time permits, a discussion also of ‘Love is in the air’: Effects of songs with romantic lyrics on compliance with a courtship request by Nicolas Guéguen, Céline Jacob and Lubomir Lamy.

30th November 2010

We will meet at 1 pm in the Pervasive Lab, where Anna Xambó will give us an informal demo of an early prototype of her TOUCHtr4ck democratic collaborative tool for creating music.

23rd November 2010

Rose Johnson will be showing us around her lab to take a look at some of her prototypes.

16th November 2010

Adam Linson giving a presentation entitled “A Plea for Unusability”.

12th October 2010

Group review of journals and conferences relevant to Music Computing.

7th September 2010

Meeting to discuss and share our experiences over the summer presenting at various conferences, including ISMIR, SMC, ICMPC, and CHI.

13th July 2010

Meeting to discuss changes to the CRC Music Computing web page. This page, along with all the HCI pages, will be updated shortly so this is an opportunity for us to make sure the information here is up-to-date, and accurately reflects what we’re doing.

6th July 2010

Tom Collins leading a reading group discussion on “Parsing of Melody” (Frankland and Cohen, 2004).

29th June 2010

Stefan Kreitmayer giving a presentation on Processing:

Processing is an open source programming language and environment for people who want to create images, animations, and interactions. Initially developed to serve as a software sketchbook and to teach fundamentals of computer programming within a visual context, Processing has also evolved into a tool for generating finished professional work. Today, tens of thousands of students, artists, designers, researchers, and hobbyists use Processing for learning, prototyping, and production.

22nd June 2010

Vassilis Angelis leading a reading group discussion on:

Ed Large’s “Resonating to Rhythm” (2008) essay. Part of Ed’s work concerns computational models that simulate rhythm perception. Here are some of his concluding thoughts about aspects that influence rhythm perception and how those can be implemented in computational models:

“It appears that melodic patterns can contribute to a listener’s sense of meter and that listeners also respond differentially to various combinations of melodic and temporal accents (Hannon et al., 2004; Jones & Pfordresher, 1997) especially if the relative salience of different accent types are well calibrated (Ellis & Jones, in press; Windsor, 1993).

“If we accept that melodic and other musical accents can affect meter, then the significant theoretical question arises of how such information couples into a resonant system. Is it sufficient to consider accents arising from different features (for example, intensity, duration, pitch, harmony, and timbre) as combining into a single scalar value that determines the strength of each stimulus event? Probably not. The flip side of this coin is the effect of pulse and meter on the perception of individual musical events. Recall Zuckerkandl’s (1956) view of meter as a series of waves, away from one downbeat and towards the next. As such, meter is an active force; each tone is imbued with a special rhythmic quality from its place in the cycle of the wave, from ‘the direction of its kinetic impulse.’ It is, perhaps, a start to show that attention is differently allocated in time; however, it seems clear that future work must consider these issues.”

15th June 2010

Andrew Milne leading a reading group discussion on:

Toward a Universal Law of Generalization for Psychological Science, Roger N. Shepard, Science, New Series, Vol. 237, No. 4820 (Sep. 11, 1987), pp. 1317–1323.
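
For context (a paraphrase, not a quotation from the paper): Shepard’s proposed law states that the probability of generalising a response from one stimulus to another decays approximately exponentially with their distance in a suitably constructed psychological space:

```latex
% Shepard's universal law of generalization (paraphrased):
% g(x, y) is the probability of generalising a response from stimulus
% x to stimulus y, and d(x, y) is their distance in psychological space.
g(x, y) = e^{-d(x, y)}
```

Its appeal for music perception research is that similarity judgements between, say, chords or rhythms can then be modelled geometrically.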

11th May 2010

Katie Wilkie presents:

Analysis of Conceptual Metaphors to Evaluate Music Interaction Designs

Katie Wilkie, Simon Holland, Paul Mulholland
Centre for Research in Computing
The Open University

Abstract:
In domains such as music, technical understanding of the parameters, processes and interactions involved in defining and analysing the structure of artifacts is often restricted to domain experts. Consequently, interaction designs to communicate and manipulate musical information are often difficult to use for those without such specialised knowledge. The present work explores how this problem might be addressed by drawing explicitly on domain–specific conceptual metaphors in the design of music interactions, for example creating and manipulating harmonic progressions.

Conceptual metaphors are used to map image schemas, structures which are rooted in embodied sensory-motor experiences of space, forces and interactions with other objects, onto potentially unrelated, abstract domains. These conceptual metaphors are commonly, though not exclusively, identified through linguistic expressions in discourse and texts. Building on existing theoretical work, we subscribe to the view that human understanding in music and other domains is grounded in conceptual metaphors based on image schemas. We hypothesise that if we can identify the conceptual metaphors used by music experts to structure their understanding of specific domain concepts such as pitch, melody and harmony, then we may be able to use these conceptual metaphors to evaluate existing music interaction designs in terms of how they afford or inhibit the expression of those metaphors. We further hypothesise that it may be possible to use the results of these evaluations to inform the design of music interactions such that they may better support musicians’ understanding of the domain concepts. In this way, it may be possible for users of such interaction designs to exploit the pre-existing embodied knowledge shared by all users, and to lessen the requirement for specialist domain knowledge, formal reasoning, and memorisation of technical terms.

Recently, the conceptual metaphor approach has been applied to areas including the analysis of music theory, the improvement of user interface design and, to a limited extent, music interaction designs. However, to the best of our knowledge, the present work is the first attempt to use the conceptual metaphors elicited from a dialogue between musicians as a means to evaluate existing music interaction designs focusing on the communication of harmonic, melodic and structural relationships.

27th April 2010

A jam session in the Music Research Studio—all instruments/devices and abilities welcome, and all types of music or noise are fair game!

16th March 2010

Vassilis Angelis presenting:

Digital Mirrors is an interactive installation designed for body-centered video performances. It extends the idea of using a mirror as a metaphor for reflective investigation, space alteration and fragmentation. The technical implementation of the installation employs a range of software (e.g. Isadora, Arduino IDE) and hardware (e.g. Wii controller, Arduino board) technologies, which will be the main focus of the presentation. A short reference to the theoretical context of Media Arts will be presented at the beginning.

2nd March 2010

Andrew Milne and Tom Collins giving “An Introduction to MATLAB”.

22nd February 2010

Andrew Milne giving a presentation at the Music Research Seminar:

Tonal music theory—a psychoacoustical explanation?

From the seventeenth century to the present day, tonal harmonic music has had a number of invariant properties: specific chord progressions (cadences) that induce a sense of closure; the asymmetrical privileging of certain progressions; the degree of fit between pairs of successively played tones or chords; the privileging of tertial harmony and the major and minor scales.

The most widely accepted explanation (e.g., Bharucha (1987), Krumhansl (1990), Lerdahl (2001)) has been that this is due to a process of enculturation: frequently occurring musical patterns are learned by listeners, some of whom become composers and replicate the same patterns, which go on to influence the next “generation” of composers, and so on. Some contemporary researchers (e.g., Parncutt, Milne (2009), Large (in press)) have argued that these are circular arguments, and have proposed various psychoacoustic, or neural, processes and constraints that shape tonal harmonic music into the form it has actually taken.

In this presentation, I discuss some of the broader music-theoretical implications of my recently developed psychoacoustic model of harmonic cadences (which has received encouraging experimental support (Milne, 2009)). The core of the model is two different psychoacoustically derived measures of pitch-based distance between chords (one modelling “fit”, the other “voice-leading distance”), and the interaction of these two distances to model the feelings of activity, expectation, and resolution induced by certain chord progressions (notably cadences). When a played pair of chords has a poorer fit than an unplayed comparison pair that is also voice-leading-close, it is reasonable to assume the played pair is heard as an alteration of the comparison pair. This is similar to how a harmonically dissonant interval (e.g., the tritone B–F) is likely to be heard as an alteration of a voice-leading-close consonant interval (e.g., the perfect fourth B–E, or the major third C–E).
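
One of the two distances mentioned above can be illustrated with a sketch. Here “voice-leading distance” is taken as the smallest total semitone movement needed to transform one chord into another, minimised over all assignments of voices; this is a common textbook formalisation, not necessarily Milne’s exact measure.

```python
# Sketch of a "voice-leading distance": the smallest total semitone
# movement transforming one chord into another, minimised over all
# assignments of voices. A common textbook formalisation, not
# necessarily Milne's exact measure.
from itertools import permutations

def voice_leading_distance(chord_a, chord_b):
    """Minimal total absolute semitone motion between equal-sized chords."""
    return min(
        sum(abs(a - b) for a, b in zip(chord_a, perm))
        for perm in permutations(chord_b)
    )

# G7 resolving to C major (MIDI pitches); the small distance reflects
# the smooth voice leading of the cadence.
g7 = (55, 59, 62, 65)      # G B D F
c = (60, 60, 64, 67)       # C C E G (doubled root to match size)
print(voice_leading_distance(g7, c))   # -> 10
```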

I explore the extent to which the model can predict the familiar tonal cadences described in music theory (including those containing tritone substitutions), and the asymmetries that are so characteristic of tonal harmony. I also compare and contrast the model with Riemann’s functional theory, and show how it may be able to shed light upon the privileged status of the major and minor scales (over the modes), and the dependence of tonality upon triadic harmony.

9th February 2010

Andrew Milne giving the presentation “Microtonal Music Theory”:

Microtonality is a huge and diverse area—I will be focussing on the use of microtonal well-formed scales that embed numerous major and minor triads. Such scales cannot be played in any conventional Western tuning (so they really are novel and different), but they also generalise many of the most important properties of the standard Western diatonic (major) scale (so they may provide a fertile resource for musical experimentation).
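
A sketch of how such scales can be generated (illustrative values, not Andrew’s code): stack a generator interval modulo the octave and sort the result. A 702-cent generator yields a near-Pythagorean diatonic scale; a 380-cent generator yields a 7-note microtonal well-formed scale, with step sizes of 60 and 320 cents, that cannot be played in conventional Western tunings.

```python
# Sketch of building a generated scale (illustrative values): stack a
# generator interval modulo the octave and sort. 702 cents yields a
# near-Pythagorean diatonic scale; 380 cents yields a 7-note microtonal
# well-formed scale with step sizes of 60 and 320 cents.
def generated_scale(generator_cents, size, period_cents=1200.0):
    notes = sorted(i * generator_cents % period_cents for i in range(size))
    return [round(n, 1) for n in notes]

print(generated_scale(702.0, 7))   # near-Pythagorean diatonic
print(generated_scale(380.0, 7))   # microtonal well-formed scale
```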

I’ll also demonstrate a Thummer—a button-lattice MIDI controller that makes the playing of microtonal well-formed scales as straightforward as playing standard Western scales.

15th December 2009

Katie Wilkie and Tom Collins giving the following presentations:

Katie Wilkie

Technical understanding of the processes involved in creating and analysing artifacts in abstract domains such as music is often restricted to domain experts with specialist knowledge. Consequently, those who do not have this specialist knowledge often find the user interfaces of software designed to convey information about the structure of these artifacts difficult to use. Our work explores how we can address this problem in music interaction designs by drawing on domain-specific conceptual metaphors.

Conceptual metaphors, often identified through linguistic expressions, are used to map prior sensory-motor experiences onto abstract domains. This process enables us to understand complex concepts such as pitch, tempo, rhythm and harmonic progression in terms of embodied experiences of space, force and interactions with other bodies in our environment.

We hypothesise that if we can identify the conceptual metaphors used by music experts to structure their understanding of musical concepts, then we may be able to systematically improve music interaction designs to better reflect these conceptual metaphors and lessen the requirement for specialist domain knowledge. Conceptual metaphor theory has been applied to a number of domains including music theory and, separately, user interface design. However, to the best of our knowledge this work is the first to combine these distinct bodies of research.

Tom Collins

A metric for evaluating the creativity of a music-generating system is presented, the objective being to generate mazurka-style music that inherits salient patterns from an original excerpt by Frédéric Chopin. The metric acts as a filter within our overall system, causing rejection of generated passages that do not inherit salient patterns, until a generated passage survives. Over fifty iterations, the mean number of generations required until survival was 12.7, with standard deviation 13.2. In the interests of clarity and replicability, the system is described with reference to specific excerpts of music. Four concepts—Markov modelling for generation, pattern discovery, pattern quantification, and statistical testing—are presented quite distinctly, so that the reader might adopt (or ignore) each concept as they wish.
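
The generate-and-test loop described above can be sketched as follows. This is a toy stand-in, not the paper’s system: a first-order Markov model over pitches proposes passages, and a filter that simply requires a chosen motif to recur replaces the paper’s pattern-discovery metric.

```python
# Toy sketch of the generate-and-test loop: a first-order Markov model
# proposes passages, and a filter rejects any passage that fails to
# inherit a salient pattern from the source. Requiring a chosen motif
# to recur is a stand-in for the paper's pattern-discovery metric.
import random
from collections import defaultdict

source = [60, 62, 64, 62, 60, 62, 64, 62, 60, 67, 65, 64, 62, 60]
motif = (60, 62, 64)   # a salient pattern generated music should inherit

# Build first-order transition lists from the source.
transitions = defaultdict(list)
for a, b in zip(source, source[1:]):
    transitions[a].append(b)

def generate(length, rng):
    note = rng.choice(source)
    passage = [note]
    for _ in range(length - 1):
        note = rng.choice(transitions[note] or source)
        passage.append(note)
    return passage

rng = random.Random(0)
attempts = 0
while True:
    attempts += 1
    passage = generate(12, rng)
    # Accept only passages in which the motif occurs at least once.
    if any(tuple(passage[i:i + 3]) == motif for i in range(len(passage) - 2)):
        break

print(f"accepted after {attempts} generation(s): {passage}")
```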

7th–9th December 2009

Entrainment Seminar in the Department of Music.

1st December 2009

Rose Johnson will be talking about her work with the motion capture study for violin players.

24th November 2009

Vassilis Angelis on the use of a computational system for real-time (live) interactive musical performance that extends musical creativity by adding elements to traditional instrumental performance. The system uses gestural, sensor, spatial and other technological modes to capture elements of performance, which are then mapped to create an additional performing layer. The motivation for this research is the desire to approach the creation of musical performances in a new way, to rethink traditional approaches to composition and performance, and to give a single performer a series of control parameters that create a new performing environment in which they cooperate with a computational system.

16th–17th November 2009

Audience, Listening and Participation interdisciplinary workshop in the Department of Music.

10th November 2009

Andrew Milne presenting the use of metrics and Gaussian smoothing (of discrete data) in modelling music perception.
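
A minimal sketch of the general technique (not Andrew’s specific model): a 12-bin pitch-class distribution is smeared with a circular Gaussian so that nearby pitch classes count as partially similar, after which distances between smoothed distributions can serve as a perceptual similarity metric.

```python
# Minimal sketch of Gaussian smoothing of discrete musical data: a
# 12-bin pitch-class distribution is smeared with a circular Gaussian
# so that nearby pitch classes count as partially similar. Illustrates
# the general technique, not Milne's specific model.
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Pitch-class counts for a C major triad (C, E, G).
counts = np.zeros(12)
counts[[0, 4, 7]] = 1.0

# mode="wrap" makes the smoothing circular, matching pitch-class space.
smoothed = gaussian_filter1d(counts, sigma=1.0, mode="wrap")

# Distances between smoothed distributions can then serve as a
# perceptual similarity metric between chords or keys.
print(np.round(smoothed, 3))
```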

3rd November 2009

Tom Collins leading a reading group on viewpoints.