Let's Develop a Common Language for Synth Programming

The time has come to establish synthesizer standards that go beyond MIDI

David Wessel

These pages have seen considerable discussion of the programming (and non-programming) of synthesizers. The sad truth is, many musicians never go beyond the factory presets. But there are many synth programmers who strive for new sounds with more expressive control.

These programmers must struggle with various idiosyncratic and awkward front-panel programming systems. Patch editors help, but the whole enterprise lacks coherence, consistency, and expressive power.

The time has come for a common programming language to describe the behavior of our synths. What might such a language allow us to do? At least two things. First, it should let us describe the way MIDI events such as note and controller data influence synthesis or processing. Second, it should let us describe the flow of sampled data and thus characterize the audio signal-processing patch.
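
To make the first of these concrete, here is a minimal sketch in C (the parameter names are hypothetical, not drawn from any particular instrument) of the kind of mapping such a language should let us state directly: incoming note data sets synthesis parameters.

    #include <math.h>

    /* Hypothetical synthesis parameters, for illustration only. */
    static float osc_pitch_hz;   /* oscillator frequency */
    static float amp_level;      /* amplifier level, 0..1 */

    void handle_note_on(int note, int velocity)
    {
        /* MIDI note number -> frequency in Hz (note 69 = A440) */
        osc_pitch_hz = 440.0f * powf(2.0f, (note - 69) / 12.0f);
        /* key velocity -> normalized amplitude */
        amp_level = velocity / 127.0f;
    }

The point is not the C itself; it is that the mapping from MIDI data to synthesis parameters is stated explicitly, in one place, in a form any synth could interpret.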

This language should allow us to insert general arithmetic operations into the data-flow description. For example, we might want to scale the depth of modulation applied by one controller with the value of another, such as the Mod Wheel. Such interrelationships are at the heart of expressive control.
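
A minimal sketch of that example in C, again with hypothetical names: an LFO's pitch-modulation depth is itself scaled by the current Mod Wheel value, so one controller shapes the effect of another.

    #define MAX_DEPTH_HZ 12.0f   /* maximum vibrato depth in Hz (arbitrary) */

    /* Hypothetical routine: one controller (the Mod Wheel, 0..127)
       scales the effect of another (the LFO output, -1..1). */
    float vibrato_offset(float lfo_out, int mod_wheel)
    {
        float wheel = mod_wheel / 127.0f;       /* normalize to 0..1 */
        return lfo_out * wheel * MAX_DEPTH_HZ;  /* scaled modulation depth */
    }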

The computer music community has seen the development of several sound synthesis languages such as MUSIC 4, MUSIC V, and CSOUND. A close look at these languages might be suggestive, but they were not designed with real-time performance in mind. They lack a real-time scheduling mechanism to manage the temporal behavior of the program. The language concept I propose relies heavily on a well-behaved scheduling mechanism.
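
To suggest what such a mechanism involves, here is a rough C sketch (the types are hypothetical): events carry time tags and sit in a queue sorted by time, and the scheduler dispatches each one when its time arrives.

    /* Hypothetical event record for a time-tagged scheduler. */
    typedef struct event {
        unsigned long when;        /* time tag, in scheduler ticks */
        void (*action)(void *);    /* routine to run at that time */
        void *arg;
        struct event *next;        /* queue kept sorted by 'when' */
    } Event;

    /* Called every tick: fire all events whose time has come. */
    void scheduler_tick(Event **queue, unsigned long now)
    {
        while (*queue && (*queue)->when <= now) {
            Event *e = *queue;
            *queue = e->next;
            e->action(e->arg);
        }
    }

Everything the synth does on time (note-offs, LFO updates, sequenced events) would rest on a core loop of roughly this shape.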

This proposition might seem far-fetched given the current state of synth technology. I'd argue that we are not far from affordable architectures that make this notion of a common synthesis and control language possible.

If we look inside the current generation of synths and samplers, we see "real" computers, such as those of the Motorola 68000 family, used as embedded controllers. These computers operate in tandem with the audio processing hardware. The controller chips handle MIDI input and output, front-panel display, mapping of controllers to synthesis parameters, voice allocation, and a host of other synth behaviors.
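
Voice allocation gives the flavor of what these controller chips do. A simplified sketch in C, with hypothetical names:

    #define NUM_VOICES 8   /* hypothetical polyphony */

    typedef struct { int active; int note; } Voice;
    static Voice voices[NUM_VOICES];

    /* On note-on, claim a free voice; return its index, or -1 to
       signal that a voice must be stolen or the note dropped. */
    int allocate_voice(int note)
    {
        int i;
        for (i = 0; i < NUM_VOICES; i++) {
            if (!voices[i].active) {
                voices[i].active = 1;
                voices[i].note = note;
                return i;
            }
        }
        return -1;
    }

Today each manufacturer buries logic like this in firmware; a common language would let the musician inspect and change it.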

There are real computer processors in our synths, and the trend is toward ever more powerful chips. Why, then, do we not have a common language for programming them, as we do for personal computers? A little thought suggests that such control computers can run a real-time operating system with real-time scheduling, along with a high-level language designed to simplify programming the synth's behavior.

What about programming at the level of the audio samples? Synths will continue to use specialized DSP processors that are not easy to program. Some are exploring the use of general-purpose DSP chips, but the algorithms must be hand-programmed in machine code for reasons of efficiency.

Is there hope for a high-level programming language at the audio-signal level? I think so. Most processing algorithms can be described in terms of modules such as filters, oscillators, reverb units, and so forth. The language would describe the flow of audio samples between these modules. Different synths and effects processors would have different signal processing modules, but we could hope for a common language for patching them together.
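
A C sketch suggests the flavor (the module names are hypothetical): the common language need say no more than which modules are in the patch and how samples flow between them.

    /* Each module transforms one block of samples. */
    typedef void (*Module)(const float *in, float *out, int nframes);

    /* Run a patch: an ordered chain of modules, ping-ponging between
       two buffers so each module's output feeds the next one's input.
       After the final swap, the most recent output is in 'a'. */
    void run_patch(Module *chain, int nmods,
                   float *a, float *b, int nframes)
    {
        float *tmp;
        int i;
        for (i = 0; i < nmods; i++) {
            chain[i](a, b, nframes);
            tmp = a; a = b; b = tmp;
        }
    }

    /* A patch might then read:
       Module chain[] = { oscillator, filter, reverb };         */

The modules themselves would remain hand-tuned DSP code supplied by the manufacturer; only the patching between them need be expressed in the common language.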

There is a model for this sort of thing in the desktop publishing world. Adobe developed the PostScript language, and a number of different manufacturers have made it run on the embedded controllers in their printers.

This standardization has had important consequences, most notably that diverse hardware platforms (from Linotronics to Brand X laser printers) respond to PostScript programs in a consistent manner.

The music industry's next standardization effort should not stop at a communications protocol such as the proposed extensions to MIDI. Rather, we should seek a common language for music control and synthesis.