REAL-TIME INTERACTIVE MUSICAL SYSTEMS: AN OVERVIEW

A. N. Robertson
Centre for Digital Music, Department of Electronic Engineering, Queen Mary University of London
Email: andrew.robertson@elec.qmul.ac.uk

M. D. Plumbley
Centre for Digital Music, Department of Electronic Engineering, Queen Mary University of London

ABSTRACT

We present an overview of developments towards interactive musical systems. A description of an interactive system is given and we consider potential uses for the automation of creative processes within live performance. We then look at the history of research into the problem of automatic accompaniment, discuss a variety of current interactive systems and present some ideas for future research.

Keywords – Score following, automatic accompaniment, musical interactive system, machine learning.

1. INTRODUCTION

It has been a long-standing goal to empower computers with the ability to interpret sounds in a musically meaningful way and to use this information to integrate themselves into a performance by human musicians [2]. In the last forty years, huge developments have been made in the creation of new electronic technology for music. However, almost all of this technology requires a human agent to write the musical rules that the computer follows when creating sounds.

An interactive system uses information from the actions of the user, perhaps via an interface, to formulate its response or output. The term 'musical interactive system' encompasses all systems which depend on the user's actions to generate sound, and also includes systems which interact musically with the user.

Any musical interface, such as a keyboard, might form part of an interactive system in the first sense by being connected to a sound generator. Technology that allows the user to alter the parameters used in creating the sound also offers a form of musical interaction, but here we are primarily concerned with systems that interact musically by 'listening' to external input from the user and generating their responses in real time. The difference is that the interaction itself is musical, as opposed to a system giving rise to an interaction that merely has a musical context. Musically interactive systems include automatic accompaniment systems and systems capable of improvising with musicians.

There are significant constraints on such an interactive system. It must negotiate errors within detection and performance. It must also be efficient, so that it is able to respond to new input in real time. Score followers must be robust enough to relate the performance to the score despite the fact that no two performances are ever the same, whilst improvising systems must be able to make sense of unexpected musical events and use some form of knowledge database when making musical decisions.

2. TECHNOLOGY IN LIVE PERFORMANCE

The most significant step in the use of computers for music has been in the recording studio, where the sound quality available from digital hardware now rivals that of analogue tape. Computers have an advantage over analogue recording in that they allow non-destructive overdubs and can act as powerful sequencers of audio samples. The most widely used software packages for digital recording are ProTools and Logic, which base themselves on multi-track tape. Some software packages also offer interesting processing: Ableton Live, for example, provides straightforward time-stretching and pitch-shifting, making audio sample manipulation comparatively easy.

It seems that existing software is capable of performing many of the creative tasks that musicians can formulate, provided that this is done in the studio and hence not in real time. Recordings are made through thousands of small tasks: gradually a single audio track is edited together for each channel, effects are applied, and the mix is performed with automated volume, panning and effects changes. It seems natural to ask what functions the computer can play within the context of live performance.

There is already considerable use of technology and automation within contemporary musical performance. Some of the main uses are as follows:

• Electronic instruments - keyboards and other interfaces to electronic synthesizers which make the sounds. Sounds are triggered by the interface, sometimes pitched, as is the case with keyboards, and sometimes triggered from a pad. Considerable research into designing new interfaces is regularly presented at the NIME conference [19], with some interesting innovations.

• Audio processing - vocal and drum signals might be gated, and compression and other effects are applied. A sound engineer mixes the signals for the audience and provides on-stage monitoring.

• Lighting - various methods are used. Often a set of pre-written lighting settings can be triggered at a lighting console. Back projections are also sometimes used to considerable effect.

• Live coding [17] - using audio programming languages such as SuperCollider [12], Max/MSP [13] and ChucK [14], it is possible to create sounds 'on the fly'. Performances making use of laptops are increasingly common; for instance, Polar Bear, a jazz band nominated for the Mercury Music Prize 2005, have performed with one musician creating sounds within Max using a joystick interface.

It seems natural that some of these processes could be automated in a manner which takes advantage of the computer's powerful ability to schedule events. Synchronisation of these processes with the performance could be achieved to a level that would not be possible manually. This would require that the computer can relate inputs derived from the performance to an abstract representation, or score, in order to schedule events such as electronic parts, audio or lighting effects.

At present, performers wishing to use pre-recorded parts synchronise with the computer by having the drummer listen to a click track. This forces the tempo to be constant, immediately imposing a constraint on the potential dynamics of the piece. By having the computer follow the musicians instead, automation of this technology could enable powerful new creative effects. Research into this area began with the problem of score following of live performance, since the knowledge provided by the score simplified the wider problem of automatic transcription.
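To make the scheduling idea concrete, the sketch below shows one way such a score-anchored cue scheduler might look. It is a minimal hypothetical illustration, not a system from the literature: the CueScheduler class, its cue names and the on_position_update callback are all assumptions, and a real system would also use the tempo estimate to compensate for detection and audio latency.

```python
import heapq

class CueScheduler:
    """Hypothetical sketch of score-anchored event scheduling.

    Cues are stored by score position in beats; whenever the score
    follower reports a new position estimate, any cue whose beat
    has been reached is fired.
    """

    def __init__(self):
        self._cues = []  # min-heap of (beat, description)

    def add_cue(self, beat, description):
        heapq.heappush(self._cues, (beat, description))

    def on_position_update(self, current_beat):
        """Called by the score follower with its latest position estimate."""
        while self._cues and self._cues[0][0] <= current_beat:
            beat, description = heapq.heappop(self._cues)
            print(f"beat {beat}: trigger {description}")

scheduler = CueScheduler()
scheduler.add_cue(16.0, "start backing synth part")
scheduler.add_cue(32.0, "lighting preset 3")
scheduler.on_position_update(17.5)  # fires only the synth cue
```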
3. HISTORY OF AUTOMATIC ACCOMPANIMENT

At the 1984 ICMC, Barry Vercoe [1] and Roger Dannenberg [3] independently presented working systems that could perform the task of automatic accompaniment. In Vercoe's case, optical sensors on a flute and pitch tracking were used to perform score following.

Over the subsequent year, Vercoe began working with Miller Puckette [2] on the problem of training the Synthetic Performer to learn from rehearsals. This was a significant step, as it acknowledged the important role that rehearsals play in improving conventional performances by enabling the musicians to learn when to expect rhythmic deviations, such as variations in onset timing and tempo, which are not predicted by the score. Their research suggested that machine learning techniques would be needed if the computer is to do more than effectively sight-read.

Importantly, a distinction was made between rhythmic aberrations, where a note is played early or late, and tempo variation. The fraction by which a note is early or late is learnt from rehearsals and written into the performance record. The system then expects note onsets to vary from those implied by a literal reading of the score, and when using a note onset time to determine the tempo, it can take into account how likely that timing is to vary between performances.

Vercoe also formulated automatic accompaniment as consisting of three processes: Listen, Perform and Learn. The role of Listen is to extract information from the performance, which Perform then uses to match against the score and synchronise the accompaniment. The Learn process involves extracting information from performances which helps the system follow subsequent renditions of the piece. This formulation of the problem has been influential in subsequent research in the area.
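As a rough illustration of the Learn process, the sketch below averages, for each score note, the fraction of a beat by which its onset deviated from the literal score timing across rehearsals, producing the kind of performance record described above. This is a hypothetical reconstruction assuming onset deviations have already been detected and aligned to score notes; it is not Vercoe and Puckette's actual representation.

```python
from collections import defaultdict

def learn_onset_deviations(rehearsals):
    """Average per-note onset deviation across rehearsals (hypothetical sketch).

    `rehearsals` is a list of dicts mapping note index -> observed onset
    deviation in beats (negative = early, positive = late).
    """
    totals = defaultdict(float)
    counts = defaultdict(int)
    for performance in rehearsals:
        for note_index, deviation in performance.items():
            totals[note_index] += deviation
            counts[note_index] += 1
    # The performance record: the expected deviation for each note.
    return {i: totals[i] / counts[i] for i in totals}

record = learn_onset_deviations([
    {0: 0.00, 1: -0.05, 2: 0.10},
    {0: 0.02, 1: -0.07, 2: 0.12},
])
# A follower can subtract record[i] from an observed onset before
# using it to update its tempo estimate, discounting habitual rubato.
```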
Dannenberg used a form of dynamic programming to calculate the least-cost match between the observed performance, translated into MIDI notes, and the score representation. He formalised the problem of score following as finding the longest common subsequence of two strings, one representing the score and the other the observed performance. Where notes are skipped, they must be removed from the score string; where they are inserted, they are removed from the performance string; and where they are wrongly played or detected, they must be removed from both. The matcher is an algorithm designed to solve this problem, with the use of heuristics so that it is successful in practice, as was demonstrated at the 1984 ICMC.
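A minimal offline version of this longest-common-subsequence formulation is the standard dynamic program below, matching pitch sequences only. It is a simplified sketch: Dannenberg's actual matcher runs incrementally, with windowing heuristics, so that it meets real-time constraints.

```python
def match_length(score, performance):
    """Longest common subsequence of score and performance pitch strings.

    Offline sketch of the LCS formulation of score following; unmatched
    symbols correspond to skipped, inserted or wrongly played notes.
    """
    m, n = len(score), len(performance)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if score[i - 1] == performance[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # matched note
            else:
                # drop a score note (skipped) or a performance note (extra/wrong)
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

score = [60, 62, 64, 65, 67]        # MIDI pitches from the score
performance = [60, 62, 61, 64, 67]  # wrong note 61; note 65 skipped
print(match_length(score, performance))  # -> 4
```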
Puckette subsequently worked on the design of the Max/MSP language at IRCAM and re-implemented its ideas in an open-source version called Pure Data [6]. Both languages aim to allow real-time manipulation of audio and MIDI data. He also continued to work on automatic accompaniment, introducing the idea of concurrent matchers: a slow but reliable matcher follows the global position within the piece, while a faster matcher is used to trigger the computer's response to notes [5].

4. STATE OF THE ART

Raphael's Music Plus One system [4] is capable of providing automatic accompaniment to a solo oboe. The system uses a hidden Markov model (HMM) to model the note transitions within a piece. He follows Vercoe [2] in delineating Listen, Perform and Learn processes for the system. Previous score followers had used pitch-to-MIDI conversion, whereas the Listen component of Raphael's system processes monophonic audio to form a vector of features, including the energy of the signal and the presence of individual notes computed via a Fourier transform.

The HMM works by assuming that the observed states, derived from the processed audio of the performance, are created by a hidden sequence of states resulting from the player's movement through the score. This assumption in the architecture of an HMM has made it a popular method of tackling the problem of score following. In Raphael's model, the states used in the hidden layer represent the …
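For illustration, the forward-filtering step at the heart of such an HMM score follower can be sketched as follows. This is a generic left-to-right model with stubbed observation likelihoods, assumed for the example only; it is not Raphael's actual state topology or feature model.

```python
import numpy as np

def forward_step(alpha, stay_prob, likelihood):
    """One step of HMM forward filtering over score positions (generic sketch).

    alpha:      current belief over N score states
    stay_prob:  probability of remaining in the same state per frame
    likelihood: p(observed audio frame | state), length N
    """
    predicted = stay_prob * alpha
    predicted[1:] += (1.0 - stay_prob) * alpha[:-1]  # left-to-right advance
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Toy run: three note states; the observation strongly suggests state 1.
alpha = np.array([0.8, 0.15, 0.05])
likelihood = np.array([0.1, 0.8, 0.1])
alpha = forward_step(alpha, stay_prob=0.9, likelihood=likelihood)
print(alpha.round(3))  # belief shifts toward the second note
```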