Center for New Music and Audio Technologies (CNMAT) Studio Report

Richard Andrews
CNMAT, UC Berkeley, 1750 Arch St., Berkeley, CA 94709



The Center for New Music and Audio Technologies (CNMAT) is an interdisciplinary research center within the UC Berkeley Department of Music. CNMAT's mission is to explore the creative interaction between music and technology with a focus on composition and performance, often improvisational in character.


1. Introduction

CNMAT provides facilities, expertise, and educational opportunities to individuals interested in musical performance, practice and research. Our goal is to provide a common ground where music, cognitive science, computer science, electrical engineering, and other disciplines meet to investigate, invent and implement tools for the creation of live music. Our research results are often "field tested" in concerts and demonstrations at a very early stage, allowing researchers to refine their work in the context of real performance situations.

2. Facilities

CNMAT's student labs have been upgraded with five new G3 PowerMac-based workstations, loaded with Max/MSP and other software for coursework and individual projects. Our research labs were recently outfitted with two new Dell Optiplex GX1P/T+ workstations to support our PC-related work.

3. Composition and Performance

A diverse group of composers and performers presented in our main room this year. CNMAT also collaborated with local presenting organizations on several large productions of important works.

3.1. Manoury Celebration

The Manoury Celebration was a three-day series of events featuring the music of Philippe Manoury. The series was a collaboration between the Berkeley Symphony Orchestra, CNMAT, the Mills College Center for Contemporary Music, and the music department at UC San Diego. The events included a panel presentation by Miller Puckette, David Wessel, and Philippe Manoury, followed by a concert of Manoury's Ultima and Gestes performed by members of SIRIUS; a concert of Pluton for piano and electronics, featuring Jerry Kuderna, piano, and Miller Puckette, Musical Assistant, and Jupiter for flute and electronics, featuring Elizabeth McNutt, flute, and Keeril Makan, Musical Assistant; and a concert presentation of Manoury's opera 60th Parallel, featuring the Berkeley Symphony, conducted by Kent Nagano, with Musical Assistants Leslie Stuck and Miller Puckette, and technical direction by David Wessel.

3.2. L'autre

L'autre (The Other), by UC Berkeley faculty composer Edmund Campion and poet John Campion, is a new collaborative work of music and poetry. The work was commissioned by Radio France and produced at CNMAT and IRCAM. It was featured in the Présences 1999 Festival in Paris.

3.3. Play Back

Edmund Campion was commissioned to create the musical score for Play Back, a new dance piece by choreographer François Raffinot. The live performance realization of this piece made extensive use of resonance models developed at CNMAT.

3.4. Catch and Throw

Catch and Throw is an interactive performance work that has been under continual development by David Wessel since its inception in the mid-eighties (Wessel, Lavoie et al. 1987). Recent performances have involved improvising acoustic pianists Vijay Iyer and Georg Graewe. In this work, the pianist's phrases are captured, analyzed, transformed, and reinjected into the performance by Wessel using a responsive user interface. Performances of this work were presented at the Common Sense Composers Collective concert at the Yerba Buena Center in San Francisco with Vijay Iyer, at CNMAT with Iyer and reed player J.D. Parran, and most recently with pianist/composer Georg Graewe.

3.5. The CNMAT/CCRMA/CARTAH exchange

The Stanford University Center for Computer Research in Music and Acoustics (CCRMA), the University of Washington Center for Advanced Research Technology in the Arts and Humanities (CARTAH), and the CNMAT Users Group presented a series of concerts and colloquia of electroacoustic music.

3.6. Other

Other composer/performers presented by CNMAT included Chris Brown and the Computer Network Music Ensemble; Philip Gelb, Barre Phillips, Pauline Oliveros, and Dana Reason; Ensemble Intercontemporain, David Robertson, music director; Jean-Claude Risset; David Behrman, Laetitia Sonami, John Ingle, and Alex Potts; and Holland's Ensemble LOOS.

4. Education

CNMAT continues to offer a popular series of courses, workshops, colloquia, and demonstrations. Our audience includes UC Berkeley students and faculty, visiting scholars, members of the local, national, and international academic communities, representatives from industry, and the general public. Courses for campus students include Music 108: Music Perception and Cognition, Music 158: Musical Applications of Computers and Related Technologies, Music 201: Workshop in Computer Music, Music 209: Advanced Topics in Computer Music, and independent study courses.

4.1. Max/MSP Night School

An intensive week of evening classes featuring instruction in Max/MSP programming by its developer David Zicarelli and instructors David Wessel, Richard Dudas, Leslie Stuck, Adrian Freed, and Matthew Wright. The course focuses on developing MSP-based electroacoustic instrumentation in which Max provides flexible control and interactivity. Managing complexity in larger projects is covered, with examples of abstraction techniques that keep patches manageable as they grow. The special challenges and techniques of building Max/MSP programs for reliable concert performance are also discussed.

4.2. SuperCollider Night School

Another intensive week of evening classes, this time featuring instruction in SuperCollider 2, a sophisticated software environment for real-time audio synthesis on the PowerPC platform. The course, presented by SuperCollider 2 developer James McCartney and instructors Alberto de Campo, Curtis Roads, and Matthew Wright, covers basic language and environment handling, standard synthesis and processing methods, advanced synthesis, composition, and interaction possibilities.

4.3. 105th AES Convention

The following CNMAT papers were presented at the 105th Audio Engineering Society Convention:
Band-Limited Simulation of Analog Synthesizer Modules by Additive Synthesis, Amar Chaudhary. (Chaudhary 1998)
Real-Time Inverse Transform Additive Synthesis for Additive and Pitch Synchronous Noise and Sound Spatialization, Adrian Freed. (Freed 1998)
Statistical Analysis of Sound Signals Using a Local Harmonic Model, Rafael Irizarry. (Irizarry 1998)
Volumetric Modeling of Acoustic Fields in CNMAT's Sound Spatialization Theatre, Sami Khoury, Adrian Freed, and David Wessel. (Khoury, Freed et al. 1998)

4.4. Other

Other presentations included a lecture by noted German composer Gerhard Stäbler, presented by the CNMAT Users Group; a demonstration by representatives from Be, Inc., including Timothy Self, Brian Mikol, and Doug Wright; and a talk by Patrick Ozzard-Low, composer and Director of Alternative Tuning Projects, UK.

5. Research

The CNMAT research program had an exceptionally productive year, with many new initiatives and a wealth of new developments in existing projects.

5.1. New Signal Processing Objects and Programming Practices in Max/MSP

As part of our shift towards the Max/MSP environment, we have created programming practices and a set of new signal and discrete event processing objects to support these practices. Included are techniques for parameterizing the number of polyphonic voices, dynamic patch building, atomic parameter updates, banks and cascades of signal processing modules, parameter interpolation, manipulation of large sets of data, and an assortment of software engineering practices for programming reliable large scale projects.
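One of these practices, parameterizing the number of polyphonic voices, is easy to sketch outside of Max. The Python fragment below illustrates only the general idea; the class name and the oldest-voice stealing policy are ours, not those of the actual Max/MSP objects.

```python
# Illustrative sketch (not CNMAT's actual Max/MSP objects): a voice
# allocator whose polyphonic voice count is a parameter, with
# oldest-voice stealing when the pool is exhausted.

class VoiceAllocator:
    """Assign incoming notes to a fixed pool of synthesis voices."""

    def __init__(self, num_voices):
        self.num_voices = num_voices      # parameterized voice count
        self.active = {}                  # pitch -> voice index
        self.order = []                   # busy voices, oldest first
        self.free = list(range(num_voices))

    def note_on(self, pitch):
        if pitch in self.active:          # retrigger of a held pitch
            return self.active[pitch]
        if self.free:
            voice = self.free.pop(0)
        else:                             # steal the oldest voice
            voice = self.order.pop(0)
            stolen = next(p for p, v in self.active.items() if v == voice)
            del self.active[stolen]
        self.active[pitch] = voice
        self.order.append(voice)
        return voice

    def note_off(self, pitch):
        voice = self.active.pop(pitch, None)
        if voice is not None:
            self.order.remove(voice)
            self.free.append(voice)
        return voice
```

Because the voice count is a constructor argument, the same allocator logic serves patches of any polyphony.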

5.2. Supporting the Sound Description Interchange Format in the Max/MSP Environment

We have invented a representation for Sound Description Interchange Format (SDIF) data within Max/MSP, circumventing Max's limited repertoire of data structures with a novel technique. Applications include real-time spectral analysis, network streaming of SDIF data to Max/MSP, resonance and additive synthesis with interactive timbral transformation, and display of SDIF data. (Wright, Khoury et al. 1999)
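The core access pattern, independent of Max's data structures, is that an SDIF stream is a time-ordered sequence of frames that a synthesizer queries by time. A minimal Python sketch of that pattern follows; the class and method names are illustrative, not CNMAT's SDIF-buffer API.

```python
import bisect

# Hedged sketch of the time-tagged-frame idea behind SDIF support:
# descriptions are stored as frames sorted by time (here each frame is a
# list of (freq, amp) partials), and a synthesizer asks for the frame
# nearest a requested time.

class FrameStore:
    def __init__(self):
        self.times = []    # frame times, kept sorted
        self.frames = []   # parallel list of frame payloads

    def add_frame(self, time, partials):
        i = bisect.bisect_left(self.times, time)
        self.times.insert(i, time)
        self.frames.insert(i, partials)

    def nearest(self, time):
        """Return the frame whose time tag is closest to `time`."""
        i = bisect.bisect_left(self.times, time)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self.times)]
        best = min(candidates, key=lambda j: abs(self.times[j] - time))
        return self.frames[best]
```

A real-time client would typically interpolate between neighboring frames rather than snap to the nearest one; nearest-frame lookup keeps the sketch short.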

5.3. Musical Applications of New Filter Extensions to Max/MSP

This project demonstrates a wide range of applications of new external signal processing functions for Max/MSP. The heart of the new resonators~ and biquads~ external functions is a highly tuned signal processing kernel that exploits processor parallelism and keeps the floating point arithmetic pipelines full. The key to achieving this is the implementation of a bank of parallel resonators and a cascade of biquads. (Jehan, Freed et al. 1999)
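The underlying filter structure is standard and can be sketched directly. The Python below is an unoptimized illustration of a bank of parallel two-pole resonators, not the hand-tuned vectorized kernel inside resonators~; mode parameters are illustrative.

```python
import math

# Hedged sketch of a parallel resonator bank: each resonance is a
# two-pole filter y[n] = g*x[n] + 2*r*cos(w)*y[n-1] - r^2*y[n-2]
# (0 < r < 1), and the bank output is the sum over all resonators.

def resonator_bank(x, modes, sample_rate=44100.0):
    """modes: list of (freq_hz, gain, decay_r) tuples."""
    states = [(0.0, 0.0)] * len(modes)   # (y[n-1], y[n-2]) per mode
    out = []
    for sample in x:
        y_sum = 0.0
        for k, (freq, gain, r) in enumerate(modes):
            y1, y2 = states[k]
            w = 2.0 * math.pi * freq / sample_rate
            y = gain * sample + 2.0 * r * math.cos(w) * y1 - r * r * y2
            states[k] = (y, y1)
            y_sum += y
        out.append(y_sum)
    return out
```

Feeding the bank an impulse makes it ring at the mode frequencies, decaying at rates set by each r; the flat inner loop of multiply-adds is what makes this structure amenable to the processor parallelism described above.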

5.4. Cross-Coding SDIF into MPEG-4 Structured Audio

The new MPEG-4 standard includes a general purpose sound synthesis tool called Structured Audio. Composers, sound designers, and analysis/synthesis researchers can benefit from the combined strengths of MPEG-4 and SDIF by using the MPEG-4 Structured Audio decoder as an SDIF synthesizer. This allows the use of sophisticated SDIF tools to create musical works and other sound data, while leveraging the anticipated broad availability of MPEG-4 playback devices. (Wright and Scheirer 1999)

5.5. Spectral Line Broadening with Transform Domain Additive Synthesis

Implementing spectral line broadening efficiently on modern processors is surprisingly challenging. In this project we introduce an efficient method using transform-domain additive synthesis: we modulate the phase of each sinusoidal description by a scaled, zero-mean, uniform random value. The good match of this computational structure to the external/secondary-cache/primary-cache/register memory hierarchy of modern computers indicates that transform methods can outperform direct oscillator implementations. (Freed 1999)
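A direct time-domain version of the phase-modulation idea can be sketched in a few lines of Python. This naive oscillator form is exactly what the transform-domain method is designed to outperform; the parameter names are ours.

```python
import math
import random

# Hedged sketch: one partial whose phase increment is perturbed each
# sample by a scaled, zero-mean uniform random value, which broadens
# the partial's spectral line around its center frequency.

def broadened_partial(freq, bandwidth, n, sample_rate=44100.0, seed=1):
    rng = random.Random(seed)
    phase = 0.0
    out = []
    for _ in range(n):
        # zero-mean uniform perturbation, scaled by the bandwidth control
        jitter = bandwidth * (rng.random() - 0.5)
        phase += 2.0 * math.pi * (freq + jitter) / sample_rate
        out.append(math.sin(phase))
    return out
```

With bandwidth set to zero the function reduces to a pure sinusoid; increasing it spreads energy around the center frequency.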

5.6. Second-order recursive oscillators for musical additive synthesis applications on SIMD and VLIW processors

The most widely used digital sinusoidal oscillator structure employs a first-order recursion to accumulate and wrap a phasor, followed by a sinusoidal function evaluation, usually a table lookup. Second-order recursions are an attractive alternative because the sinusoid is computed within the structure itself, avoiding the need for a large table or a specialized hardware function evaluator. This is a major performance advantage on upcoming generations of vector microprocessors such as the "T0", "TigerSharc", and "Altivec", and VLIW machines such as "Merced", because of their large multiply/add rate for on-chip data. (Hodes and Freed 1999)
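The recursion itself is compact. The sketch below is illustrative Python; a production oscillator would also re-normalize the state periodically to control numerical drift over long runs.

```python
import math

# Second-order recursive sine oscillator: with
#   s[n] = 2*cos(w)*s[n-1] - s[n-2],  s[0] = 0,  s[1] = sin(w),
# the recursion generates sin(n*w) using only multiply-adds,
# with no table lookup or per-sample function evaluation.

def recursive_sine(freq, n, sample_rate=44100.0):
    w = 2.0 * math.pi * freq / sample_rate
    coeff = 2.0 * math.cos(w)
    s_prev2, s_prev1 = 0.0, math.sin(w)   # s[0], s[1]
    out = [s_prev2, s_prev1]
    for _ in range(n - 2):
        s = coeff * s_prev1 - s_prev2     # one multiply and one subtract
        out.append(s)
        s_prev2, s_prev1 = s_prev1, s
    return out[:n]
```

The single multiply-add per sample, with no memory lookup, is what maps so well onto the SIMD and VLIW datapaths discussed above.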

5.7. Volumetric Modeling of Acoustic Fields for Musical Sound Design in a New Sound Spatialization Theatre

Live sound sources in our theatre are spatialized in real time using software that integrates an acoustic model of the actual room the audience is in. We exploit an unusual feature of the theatre: its flexible suspension system. This project features acoustic and volumetric models that allow real time performance; the architecture that coordinates real time data flow between the user interface, acoustic modeling, visualization and sound spatialization software; and results of experience with diverse performances in the theatre. (Kaup, Freed et al. 1999)

5.8. An Open Architecture for Real-time Audio Processing Software

We introduce "Open Sound World" (OSW), a scalable, extensible object-oriented language that allows sound designers and musicians to process sound in response to expressive real-time control.
OSW allows development of audio applications using patching, C++, high-level specifications and scripting. In OSW, components called "transforms" are dynamically configured into larger units called "patches." New components can be expressed using familiar mathematical definitions without deep knowledge of C++. High-level specifications of transforms are created using the "Externalizer," and are compiled and loaded into a running OSW environment. The data used by transforms can have any valid C++ type. OSW uses a reactive real-time scheduler that safely and efficiently handles multiple processors, time sources and synchronous dataflows. (Chaudhary, Freed et al. 1999)
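The transform/patch configuration can be illustrated with a toy dataflow sketch in Python. The class names echo OSW's terminology, but this is not OSW's actual C++ API, and real OSW transforms have typed inlets and outlets rather than a single function.

```python
# Hedged sketch of the transform/patch idea: components ("transforms")
# are dynamically configured into a larger unit (a "patch") that pushes
# data through them in connection order.

class Transform:
    """A processing component wrapping a function over its input."""
    def __init__(self, name, func):
        self.name, self.func = name, func

class Patch:
    """Transforms dynamically connected into a larger unit."""
    def __init__(self):
        self.chain = []

    def connect(self, transform):
        self.chain.append(transform)   # dynamic (re)configuration
        return self                    # allows chained connect() calls

    def process(self, value):
        for t in self.chain:
            value = t.func(value)
        return value
```

For example, a gain transform followed by an offset transform forms a two-stage patch; in OSW itself such components could additionally be defined from high-level mathematical specifications via the Externalizer.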

5.9. Acoustic Field Simulation and Visualization for Architectural Acoustics

Computer modeling of acoustic fields has been widely explored. Despite the exciting potential of such models to improve existing concert spaces and guide the design of better new venues, existing computer modeling tools are not good enough to be a standard part of the professional acoustic consultant's toolbox. This project addresses fundamental research questions limiting widespread use of computer-based acoustic models and facilitates the integration of acoustic models into leading CAD/CAM software. Results of the research effort include a survey, implementation and evaluation of promising acoustic simulation methods; development of efficient representation of loudspeaker sources for acoustic modeling; and development of new visualization techniques and user interfaces necessary for successful use of acoustic models.

5.10. The CNMAT Organ Project

This project explores how computer modeling tools may be used to design and improve conventional acoustic organs. The organ timbre simulation work uses computer learning methods to create models from recordings of real organ pipes. The spatial diffusion research uses our newly created tool for real-time acoustic volume visualization: Open Sound View. (Khoury, Freed et al. 1998). This project is funded by the UC Berkeley Music Department's Edmund O'Neill Fund.

5.11. Gestural Control of Sound Synthesis

This project examines and develops multidimensional real-time control of computer generated musical sound using gestural input devices such as tablets and touch sensitive controllers. The research is carried out in two locations: the work at CNMAT is led by David Wessel, and the work at IRCAM is led by Xavier Rodet. The CNMAT work has concentrated on the use of the tablet interface for the control of high quality additive synthesis of melodic lines (Wessel, Wright et al. 1998), rhythmic material (Wright and Wessel 1998), and the selection and processing of sampled sound. The IRCAM work has concentrated on the use of the tablet interface for the control of high quality physical models of brass, reed, and bowed string instruments (Serafin, Dudas et al. 1999). Support for this project comes from the UC Berkeley France-Berkeley Fund.
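As a hedged illustration of this kind of mapping (the specific ranges and parameter assignments below are ours, not those used in the cited work), pen position and pressure might be mapped to pitch, loudness, and brightness as follows:

```python
# Illustrative tablet-to-synthesis mapping: pen x selects pitch along a
# continuum, y controls loudness, and pressure controls brightness.
# All ranges and parameter choices are hypothetical.

def tablet_to_synth(x, y, pressure, low_midi=36.0, high_midi=96.0):
    """x, y, pressure are normalized to [0, 1]."""
    pitch_midi = low_midi + x * (high_midi - low_midi)
    freq_hz = 440.0 * 2.0 ** ((pitch_midi - 69.0) / 12.0)
    amplitude = y ** 2        # squared curve: gentler than linear gain
    brightness = pressure     # e.g. weighting of upper partials
    return {"freq_hz": freq_hz, "amplitude": amplitude,
            "brightness": brightness}
```

The point of such mappings is that a single continuous gesture simultaneously shapes several synthesis dimensions, which is what makes tablets attractive for controlling additive synthesis and physical models.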

5.12. Integrated Digital Media System for Live Music Performance

This research project seeks to identify and overcome roadblocks to an entirely digital infrastructure for live music performance. A multidisciplinary effort, the work will result in research prototypes of a bi-directional, multi-channel audio network link circuit providing power and sound between musical instruments and digital sound processors, synthesizers and arrays of loudspeakers; a programmable directivity source model loudspeaker; a micromachined multi-sensor with integrated calibration and A/D conversion for guitars and other string instruments; a large database of polyphonic recordings of guitars in all the major playing styles; a robust, real-time low-latency pitch estimation algorithm; new graphical and gestural interactive tools for live performance; new polyphonic sound processing effects; pitch synchronous effects; real-time sound analysis/synthesis; and 3-D acoustic visualization software for the sound design of rooms, musical instruments and speaker arrays. (Jehan, Freed et al. 1999), (Chaudhary and Freed 1999)
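One of the listed deliverables, real-time low-latency pitch estimation, can be illustrated with a generic autocorrelation estimator in Python. This is a textbook sketch for orientation only, not the project's more robust algorithm.

```python
# Hedged sketch: estimate F0 of one frame by finding the lag that
# maximizes the frame's autocorrelation within a plausible pitch range.

def estimate_pitch(frame, sample_rate=44100.0, fmin=50.0, fmax=1000.0):
    """Return an F0 estimate in Hz from one frame of samples."""
    min_lag = int(sample_rate / fmax)   # shortest period considered
    max_lag = int(sample_rate / fmin)   # longest period considered
    best_lag, best_corr = 0, 0.0
    for lag in range(min_lag, min(max_lag, len(frame) - 1)):
        corr = sum(frame[i] * frame[i + lag]
                   for i in range(len(frame) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0
```

A production estimator for polyphonic guitar signals would need normalization, interpolation around the peak, and octave-error handling, which is precisely why robustness is called out as a research goal above.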

6. Personnel

Under the guidance of Richard Felciano, Founder, and David Wessel, Director, the CNMAT staff includes Adrian Freed, Research Director, Matthew Wright, Musical Applications Programmer, Edmund Campion, Composer-in-Residence, and Richard Andrews, Administrator.
Our roster of researchers includes Rimas Avizienis, Amar Chaudhary, Cyril Drame, Richard Dudas, Tristan Jehan, Arnold Kaup, Sami Khoury, Norbert Lindlbauer, Andreas Luecke, Will Pritchard, Ron Smith, and Brian Vogel. Gillian Edgelow is CNMAT's Administrative Assistant. The CNMAT list of graduate student composers includes Bruce Bennett, Keeril Makan, Eric Marty, and Tom Swafford. Other collaborators are John Campion, Hugh Livingston, Silvia Matheus, Eleanor Ronaele, and Laetitia Sonami.

7. Acknowledgements

CNMAT gratefully acknowledges the support of our corporate sponsors and Industrial Affiliates Program members, including Gibson Musical Instruments and Opcode Systems, Silicon Graphics, E-Mu Systems, Meyer Sound Laboratories, Wacom Technology Corporation, Digidesign, Earthworks, Kurzweil, Orban, and Tom Austin/Sherman Clay.

8. References

Chaudhary, A. (1998). Band-Limited Simulation of Analog Synthesizer Modules by Additive Synthesis. AES 105th Convention, San Francisco, CA, AES.

Chaudhary, A. and A. Freed (1999). A Framework for Editing Timbral Resources and Sound Spatialization. Audio Engineering Society 107th Convention, Audio Engineering Society.

Chaudhary, A., A. Freed, et al. (1999). An Open Architecture for Real-Time Audio Processing Software. Audio Engineering Society 107th Convention, Audio Engineering Society.

Freed, A. (1998). Real-Time Inverse Transform Additive Synthesis for Additive and Pitch Synchronous Noise and Sound Spatialization. AES 105th Convention, San Francisco, CA, AES.

Freed, A. (1999). Spectral Line Broadening with Transform Domain Additive Synthesis. International Computer Music Conference, Beijing, China, ICMA.

Hodes, T. and A. Freed (1999). Second-order recursive oscillators for musical additive synthesis applications on SIMD and VLIW processors. International Computer Music Conference, Beijing, China, ICMA.

Irizarry, R. (1998). A Direct Adaptive Window Size Estimation Procedure for Parametric Sinusoidal Modeling. International Computer Music Conference, University of Michigan, Ann Arbor, ICMA.

Jehan, T., A. Freed, et al. (1999). Musical Applications of New Filter Extensions to Max/MSP. International Computer Music Conference, Beijing, China, ICMA.

Kaup, A., A. Freed, et al. (1999). Volumetric Modeling of Acoustic Fields for Musical Sound Design in a New Sound Spatialization Theatre. International Computer Music Conference, Beijing, China, ICMA.

Khoury, S., A. Freed, et al. (1998). Volumetric Modeling of Acoustic Fields in CNMAT's Sound Spatialization Theatre. AES 105th Convention, San Francisco, CA, AES.

Khoury, S., A. Freed, et al. (1998). Volumetric Visualization of Acoustic Fields in CNMAT's Sound Spatialization Theatre. IEEE Visualization 98, Research Triangle Park, NC, IEEE.

Serafin, S., R. Dudas, et al. (1999). Gestural Control of a Real-Time Physical Model of a Bowed String Instrument. International Computer Music Conference, Beijing, China, ICMA.

Wessel, D., M. Wright, et al. (1998). Preparation for Improvised Performance in Collaboration with a Khyal Singer. International Computer Music Conference, Ann Arbor, Michigan, International Computer Music Association.

Wessel, D. L., P. Lavoie, et al. (1987). MIDI-Lisp: A Lisp-based programming environment for MIDI on the Macintosh. AES 5th International Conference: Music and Digital Technology, Los Angeles, Audio Engineering Society, New York.

Wright, M., S. Khoury, et al. (1999). Supporting the Sound Description Interchange Format in the Max/MSP Environment. International Computer Music Conference, Beijing, China, ICMA.

Wright, M. and E. Scheirer (1999). Cross-Coding SDIF into MPEG-4 Structured Audio. International Computer Music Conference, Beijing, China, ICMA.

Wright, M. and D. Wessel (1998). An Improvisation Environment for Generating Rhythmic Structures Based on North Indian "Tal" Patterns. International Computer Music Conference, Ann Arbor, Michigan, International Computer Music Association.