Authors
David Wessel is Director of CNMAT and has been Professor
of Music at the University of California Berkeley since the Fall of 1988.
Prior to his return to the Bay Area he spent 12 years in various key positions
at IRCAM in Paris. Through the mid-eighties he headed IRCAM's personal
computer software development group. He received his PhD in Mathematical
and Theoretical Psychology from Stanford University, where he encountered Leland
Smith and John Chowning in the late 1960s and began his work in the computer
music field. His computer music research and composition are carried out
with a strong concern for issues in music perception and cognition. For
more information see http://www.cnmat.berkeley.edu/~wessel
Adrian Freed has been responsible for software
and systems development at CNMAT since 1989. Before moving to CNMAT he co-developed
the Reson8, a multi-processor signal processing engine based on the Motorola
DSP56000 that was optimized for resonance sound synthesis, sound mixing
and spatialization. Before this he developed hard disk audio recording technology
at WaveFrame. His pioneering work on graphical user interfaces in audio
post-production, the MacMix program, resulted in Studer Editech's widely
respected Dyaxis system. His debut in the computer music field came in 1982
at IRCAM, where he was responsible for computer systems.
Guy E. Garnett is a composer, conductor, researcher, theorist,
and computer music specialist. His interests center on using new technologies
to extend composition and performance resources, especially in the areas of
rhythmic organization and perception and the control and design of electronic
music instruments.
He has also been involved in designing composition environments and tools
in Smalltalk. He has worked at Stanford University's CCRMA on physical modeling
of musical instruments and graphical tools for spectral analysis. Before
coming to CNMAT, he was employed by Yamaha Music Technologies to develop
technology for use in advanced musical synthesis and instrument control.
Guillermo Garcia is a researcher in signal processing at IRCAM. His work
covers sound analysis and modeling techniques, including timbral interpolation
and singing-voice modeling.
Amar Chaudhary, UC Berkeley.