OpenSoundControl Application Areas

Matt Wright, 7/6/4, updated 10/8/4

Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology.
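For concreteness, here is a minimal sketch of what an OSC message looks like on the wire: a null-terminated, 4-byte-padded address string, a type-tag string, then big-endian binary arguments. The address `/synth/freq` is an invented example, not a standard name.

```python
import struct

def osc_string(s: str) -> bytes:
    """Null-terminate a string and pad it to a 4-byte boundary, as OSC requires."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode an OSC message: padded address, type-tag string, then arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian 32-bit float
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian 32-bit int
        else:
            tags += "s"
            payload += osc_string(str(a))
    return osc_string(address) + osc_string(tags) + payload

msg = osc_message("/synth/freq", 440.0)
```

Everything in the packet falls on a 4-byte boundary, which keeps parsing simple and fast on the receiving end.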

This page lists some of the ways in which OSC has been used, organized into "application area" categories, with examples. Please let us know of other projects that we can list here.

Network Architectures

In the application areas described below, collections of devices are connected with OSC in the following ways:

Heterogeneous Distributed Multiprocessing on Local Area Networks ("LAN"s)
Machines in the same location cooperate to accomplish a single task jointly with division of labor
Peer-to-peer LANs
Machines in the same location operate independently while communicating with each other
Wide-Area Networks (WANs)
Machines in different locations operate independently while communicating with each other
Single Machine
Software components (such as processes, threads, plugins, subpatches...) within a single machine communicate internally with OSC
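Common to all four arrangements is the underlying transport, most often UDP. As a small illustration, the sketch below sends a pre-encoded OSC message over UDP on the loopback interface (the single-machine case); on a LAN the destination would simply be another machine's IP address. The message contents are invented for illustration.

```python
import socket
import struct

# A pre-encoded OSC message: address "/synth/freq" with one float32 argument.
msg = b"/synth/freq\x00" + b",f\x00\x00" + struct.pack(">f", 440.0)

# Receiver: in the single-machine case this would be another process or thread.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS assign a free port
port = receiver.getsockname()[1]

# Sender: on a LAN, replace "127.0.0.1" with the other machine's address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(msg, ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
```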

Application Area: Sensor/Gesture-Based Electronic Musical Instruments

A human musician interacts with sensor(s) that detect physical activity such as motion, acceleration, pressure, displacement, flexion, keypresses, switch closures, etc. The data from the sensor(s) are processed in real time and mapped to control of electronic sound synthesis and processing.
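The mapping stage of this pipeline can be as simple as a pure function from raw sensor readings to (address, value) pairs that are then encoded and sent as OSC. The sketch below assumes a hypothetical 10-bit pressure sensor and invented address names:

```python
def map_pressure(raw: int) -> list:
    """Map a raw 10-bit sensor reading (0-1023) to synth control messages.

    The addresses and mapping curves here are illustrative, not standard.
    """
    norm = raw / 1023.0
    freq = 110.0 * (2 ** (norm * 3))   # spread the reading over ~3 octaves
    amp = norm ** 2                    # squared curve for a gentler low end
    return [("/synth/freq", freq), ("/synth/amp", amp)]
```

In a real instrument this function would run once per sensor frame, with its output encoded into OSC packets and sent to the synthesis process.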


Diagram of processes (ovals) and data (rectangles) flow in a sensor-based musical instrument.

This kind of application is often realized with Heterogeneous Distributed Multiprocessing on Local Area Networks, e.g., with the synth control parameters sent over the LAN to a dedicated "synthesis server," or with the sensor measurements sent over the LAN from a dedicated "sensor server". There have also been many realizations of this paradigm using OSC within a single machine.
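On the receiving side, a synthesis server typically decodes each incoming packet's address and dispatches it to a handler. A minimal sketch, assuming messages that carry a single float32 argument and using hypothetical address names:

```python
import struct

def parse_osc_float(packet: bytes):
    """Decode an OSC message that carries a single float32 argument."""
    addr_end = packet.index(b"\x00")
    address = packet[:addr_end].decode("ascii")
    tag_start = (addr_end // 4 + 1) * 4       # skip the address padding
    assert packet[tag_start:tag_start + 2] == b",f", "expected one float arg"
    (value,) = struct.unpack(">f", packet[tag_start + 4:tag_start + 8])
    return address, value

# A dispatch table mapping OSC addresses to handlers (names are invented).
state = {}
handlers = {
    "/synth/freq": lambda v: state.update(freq=v),
    "/synth/amp":  lambda v: state.update(amp=v),
}

def dispatch(packet: bytes):
    address, value = parse_osc_float(packet)
    handlers[address](value)

dispatch(b"/synth/freq\x00,f\x00\x00" + struct.pack(">f", 440.0))
```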

Examples:

Application Area: Mapping nonmusical data to sound

This is almost the same as the "Sensor/Gesture-Based Electronic Musical Instruments" application area above, except that the intended user isn't necessarily a musician (though the end result may be intended to be musical). Therefore the focus tends to be more on fun and experimentation than on musical expression, and the user often interacts directly with the computer's user interface instead of special-purpose sensor hardware.
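One simple sonification strategy is to map a data series linearly onto a pitch range, then send each resulting frequency as an OSC control message. A sketch, with an invented function name and an arbitrary default range:

```python
def sonify(values, low=220.0, high=880.0):
    """Linearly map a data series onto a pitch range (in Hz).

    A minimal sonification sketch: the lowest datum maps to `low`,
    the highest to `high`. Range defaults are arbitrary.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0               # avoid dividing by zero
    return [low + (v - lo) / span * (high - low) for v in values]
```

Each returned frequency could then be sent as, say, a `/synth/freq` message, one per time step.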

Examples:

Application Area: Multiple-User Shared Musical Control

A group of human players (not necessarily skilled musicians) each interact with an interface (e.g., via a web browser) in real time to control some aspect(s) of a single shared sonic environment. This could be thought of as a client/server model in which multiple clients interact with a single sound server.
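On the server side, one plausible (hypothetical) aggregation scheme keeps the latest value from each client and combines them, e.g., by averaging, so that every player has some influence over the shared output:

```python
class SharedControl:
    """Merge one shared parameter from many players (a hypothetical scheme)."""

    def __init__(self):
        self.levels = {}          # latest value received from each client

    def handle(self, client_id: str, value: float):
        """Called for each incoming client message, e.g. a /mix/level update."""
        self.levels[client_id] = value

    def output(self) -> float:
        """The value actually sent to the synthesizer: the players' average."""
        return sum(self.levels.values()) / len(self.levels)
```

Averaging is only one choice; a server might instead take the most recent value, the median, or a weighted mix, depending on how much influence each player should have.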


Multiple players influence a common synthetic sound output

Examples:

Application Area: Web interface to Sound Synthesis

[write me]

Examples:

Application Area: Networked LAN Musical Performance

A group of musicians operate a group of computers that are connected on a LAN. Each computer is somewhat independent (e.g., it produces sound in response to local input), yet the computers control each other in some ways (e.g., by sharing a global tempo clock or by controlling some of each other's parameters). This is somewhat analogous to multi-player gaming.
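A shared tempo clock, for example, needs only a start time and a tempo that all machines agree on (announced once over OSC, say); each machine can then compute the current beat locally rather than waiting for per-beat messages. A sketch:

```python
def beat_at(now: float, start: float, bpm: float) -> int:
    """Beat number at time `now`, given a shared start time and tempo.

    If every machine on the LAN agrees on `start` and `bpm` (e.g., announced
    once in an OSC message), each can compute the same beat independently,
    assuming their clocks are synchronized.
    """
    return int((now - start) * bpm / 60.0)
```

This only works as well as the machines' clock synchronization; in practice NTP-style synchronization or OSC time tags address that problem.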


Each player can control some of the parameters of every other player


Examples:

Application Area: WAN performance / "Telepresence"

A group of musicians in different physical locations play together as a sort of "musical conference call". Control messages and/or audio from each player go out to all the other sites. Sound is produced at each site to represent the activities of each participant.

Examples:

Application Area: Virtual Reality

Examples:

Enabling Technology: Wrapping Other Protocols Inside OSC

People often convert data from other protocols into OSC for reasons including easier network transport, homogeneity of message formats, compatibility with existing OSC servers, and the possibility of self-documenting symbolic parameter names.
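For example, a raw MIDI channel message can be rewrapped as an OSC-style message with a symbolic, self-documenting address. The address scheme below is invented for illustration:

```python
def midi_to_osc(status: int, data1: int, data2: int):
    """Translate a raw 3-byte MIDI channel message into a symbolic
    OSC-style (address, args...) tuple. The address scheme is invented."""
    kind = status & 0xF0
    channel = (status & 0x0F) + 1         # MIDI channels are 1-16
    if kind == 0x90 and data2 > 0:
        return (f"/midi/{channel}/noteon", data1, data2)
    if kind == 0x80 or (kind == 0x90 and data2 == 0):
        return (f"/midi/{channel}/noteoff", data1)   # velocity-0 note-on = off
    if kind == 0xB0:
        return (f"/midi/{channel}/cc", data1, data2)
    return (f"/midi/{channel}/other", status, data1, data2)
```

The symbolic names make message logs readable and let existing OSC servers pattern-match on addresses like `/midi/*/noteon`.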

Examples:
