Wright, Matthew matt@cnmat.berkeley.edu
Freed, Adrian adrian@cnmat.berkeley.edu

Center for New Music and Audio Technologies
1750 Arch Street
Berkeley, CA 94709
phone (510) 643-9990, fax (510) 642-7918

ICMC 1997 Short Paper Proposal

OpenSound Control: A New Protocol for Communicating with Sound Synthesizers

Keywords: Protocol - synthesis control, MIDI

Content Area: Other: Sound Synthesis Control

Resources required: Overhead projector

A better integration of computers, controllers, and sound synthesizers will lead to lower costs, increased reliability, greater user convenience, and more reactive musical control. The prevailing technologies for interconnecting these elements are high-speed buses (motherboard or PCI), operating system interfaces (software synthesis), and medium-speed serial LANs (Firewire, USB, fast ethernet). It is easy to adapt MIDI streams to these new communication substrates, but doing so needlessly discards their potential and perpetuates MIDI's well-documented flaws. Instead we have designed a new protocol optimized for modern transport technologies.

OpenSound Control is an open, efficient, transport-independent, message-based protocol developed for communication among computers, sound synthesizers, and other multimedia devices. The protocol is machine- and operating-system-neutral and readily implementable on constrained, embedded systems. Numeric data are encoded as 32-bit big-endian two's-complement integers and IEEE 754 floating-point numbers, with all data aligned on 4-byte boundaries.
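To make the wire format concrete, the following is a minimal sketch of this encoding in Python (a language the proposal itself does not use; the function names are ours, and the NUL-terminated, padded string encoding is our assumption rather than something stated above):

```python
import struct

def encode_int32(n):
    """Encode an integer as a 32-bit big-endian two's-complement value."""
    return struct.pack(">i", n)

def encode_float32(x):
    """Encode a number as a 32-bit big-endian IEEE 754 float."""
    return struct.pack(">f", x)

def pad4(data):
    """Zero-pad a byte string out to the next 4-byte boundary."""
    return data + b"\x00" * (-len(data) % 4)

def encode_string(s):
    """Assumed encoding: NUL-terminated ASCII, padded to a 4-byte boundary."""
    return pad4(s.encode("ascii") + b"\x00")
```

Because every field is a multiple of 4 bytes, a receiver can read a packet with simple aligned 32-bit loads.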

OpenSound Control parameters are sent to a hierarchically addressed set of dynamic objects that include, for example, synthesis voices, output channels, and a memory manager. Parameters are addressed to a feature of a particular object or set of objects through a hierarchical namespace similar to URL notation, e.g., /voices/drone-b/resonators/3/set-Q. Groups of synthesizer objects may be addressed collectively via a pattern-matching syntax similar to UNIX shell file name "globbing", e.g., /voices/drone-*/resonators/2/set-Q. This open-ended mechanism avoids the addressing limitations inherent in protocols such as MIDI that rely on short, fixed-length bit fields. Our experience is that with modern transport technologies and careful programming, this addressing scheme incurs no significant performance penalty in either network bandwidth utilization or message processing.
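The glob-style dispatch described above can be sketched in a few lines of Python using the standard library's fnmatch module (the address space below is a hypothetical example, and fnmatch is only an approximation: its `*` can also match across `/` separators, whereas a per-segment matcher would confine each wildcard to one level of the hierarchy):

```python
from fnmatch import fnmatchcase

# Hypothetical miniature address space mapping addresses to handlers.
address_space = {
    "/voices/drone-a/resonators/2/set-Q": "handler A",
    "/voices/drone-b/resonators/2/set-Q": "handler B",
    "/voices/lead/resonators/2/set-Q":    "handler C",
}

def dispatch(pattern):
    """Return all addresses matched by a glob-style address pattern."""
    return sorted(a for a in address_space if fnmatchcase(a, pattern))
```

Here dispatch("/voices/drone-*/resonators/2/set-Q") selects both drone voices while leaving the lead voice untouched.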

An atomic OpenSound Control packet may contain multiple messages, specifying parameter changes to be interpreted concurrently. Messages carry time tags, allowing receiving synthesizers to eliminate jitter introduced during packet transport by resynchronizing with bounded latency.
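One way a receiver might use time tags to remove transport jitter is to queue arriving messages and release each one only when its tag comes due, trading variable network delay for a fixed, bounded latency. A minimal sketch (the Scheduler class and its method names are our own illustration, not part of the protocol):

```python
import heapq

class Scheduler:
    """Hold time-tagged messages and release them at their scheduled times,
    converting variable transport delay into a fixed, bounded latency."""

    def __init__(self):
        self.queue = []  # min-heap ordered by time tag

    def receive(self, time_tag, message):
        """Queue a message for execution at its time tag."""
        heapq.heappush(self.queue, (time_tag, message))

    def due(self, now):
        """Pop and return, in tag order, every message now due."""
        out = []
        while self.queue and self.queue[0][0] <= now:
            out.append(heapq.heappop(self.queue)[1])
        return out
```

A sender would stamp messages slightly in the future; as long as every packet arrives before its tag, the synthesizer executes each change at exactly the intended moment.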

OpenSound Control is bidirectional, with synthesizers able to send messages back to controlling processes. An address that ends with a trailing slash is a query to the synthesis system asking for the list of addresses underneath the given node. Other messages query objects for information about their current state, such as "how much memory is free?" Some queries are requests for human-readable documentation about a particular object or feature, and return the URL of a WWW page with the appropriate information. This allows "hot-plugging" of extensible synthesis resources and internetworked multimedia applications.
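The trailing-slash query can be illustrated against a namespace stored as a nested tree (the tree below is a hypothetical example and the query function is our own sketch, not a specified interface):

```python
# Hypothetical namespace tree: interior nodes are dicts, leaves are None.
namespace = {
    "voices": {
        "drone-a": {"resonators": {"1": None, "2": None}},
        "drone-b": {"resonators": {"1": None, "2": None}},
    },
}

def query(address):
    """For an address ending in '/', list the names directly below it."""
    node = namespace
    for part in address.strip("/").split("/"):
        if part:  # skip the empty segment produced by the root address "/"
            node = node[part]
    return sorted(node)
```

A controller receiving such listings can discover an unfamiliar synthesizer's capabilities at run time, which is what makes the "hot-plugging" described above possible.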

We deliberately avoid prescribing a pitch model or note model, so the protocol may therefore be considered musically neutral. We do, however, propose a standard set of perceptually motivated names and units for uncontroversial control parameters.

The paper will conclude with a detailed description of protocol implementations over ethernet, bus, and operating-system transports with a high-performance real-time software synthesizer.