In the early to mid 1980s I was involved in a set of activities rooted in the development and use of computer applications in the contemporary classical music field. As a composer I often envisioned sounds and musical processes that were impractical or even impossible to achieve with existing instruments, and I was drawn to the potential of computer-based resources to realize my musical voice. At the time I was undertaking graduate studies at the Eastman School of Music, the first professional music conservatory in North America to integrate a computer music studio into its facilities and programs. This satisfied my need to explore the potential of computer music resources to continue the development of my musical language, in a context where superb professional musicians were actively making music on traditional instruments.

Significant research and innovation in computer music was being carried out in the mid to late 1970s by William Buxton and others at the Dynamic Graphics Project at the University of Toronto, where I did my undergraduate degree in music composition, yet there was no connection with the Faculty of Music directly across the quad. The computer music field was also evolving in computer science and engineering departments in the United States, but Eastman was the first music conservatory in North America to develop a software synthesis-based computer music studio. In the second year of my graduate program I stepped into the role of computer music studio systems administrator, an opportunity to gain critical expertise while learning to use the systems to create my own music.

Several factors contributed to the emergence of small but powerful computer music workstations, whether as single units in a larger network of computers or as standalone computer music systems. Hardware and software development had reached a level of sophistication that made it possible to access computing power and programs previously available only on larger computers, for a fraction of the price.

Major sound synthesis software systems had been rewritten in the C programming language, contributing to their portability and extensibility. This affected systems built within the UNIX environment as well as those built on other operating systems, since C had become a significant development language across operating system boundaries. Developments that allowed standard operating system soundfiles to be written and read made it virtually unnecessary to design and maintain separate soundfile management systems. The implementation of a variety of user-interface styles in computer music applications made the creation environment much more accessible to musicians as a whole, and to composers in particular. These factors, combined with decreasing prices for powerful microcomputers, larger and faster disk drives, and the availability of high-quality, lower-cost sound conversion systems, pointed in a new direction: powerful, dedicated machines for individual artists, smaller computer music facilities, and larger systems based on networks of semi-independent computer music workstations.
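To make the point about standard soundfiles concrete, here is a minimal C sketch in the spirit of those systems: it synthesizes a sine tone sample by sample and writes the result to an ordinary operating system file using nothing but the standard I/O library, with no separate soundfile management layer. The filename, sampling rate, and raw headerless 16-bit PCM format are assumptions chosen for illustration, not details of the actual studio systems described here.

```c
/* Minimal sketch: direct software synthesis written to a plain OS file.
   Illustrative assumptions: 44.1 kHz rate, mono, raw 16-bit PCM. */
#include <stdio.h>
#include <math.h>

#define SRATE   44100                 /* sampling rate in Hz (assumed) */
#define SECONDS 2
#define PI      3.14159265358979

int main(void)
{
    FILE *fp = fopen("tone.raw", "wb");   /* an ordinary file; no special
                                             soundfile system required */
    if (fp == NULL) {
        perror("tone.raw");
        return 1;
    }

    long   nsamps = (long)SRATE * SECONDS;
    double freq   = 440.0;                /* A above middle C */

    for (long i = 0; i < nsamps; i++) {
        double val  = sin(2.0 * PI * freq * (double)i / SRATE);
        short  samp = (short)(val * 32767.0);  /* scale to 16-bit range */
        fwrite(&samp, sizeof samp, 1, fp);     /* standard library write */
    }

    fclose(fp);
    return 0;
}
```

Because the output is just an ordinary file of samples, any other standard tool or program on the system can read, copy, or post-process it, which is precisely what made separate soundfile management schemes unnecessary.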

In order to serve my own artistic needs, and to help advance development in the computer music field, I developed a personal computer music workstation, concentrating on building the tools and environment for music creation using “delayed performance”: direct synthesis techniques for music generation and sound processing.

The decision to use software sample synthesis as the primary means of generating and manipulating materials was made because of the extreme flexibility it provides in designing, controlling, and adapting sound. Once the requirement of real-time synthesis or control is removed, it is possible to be as particular about details as the compositional context requires in generating, mixing, and processing sound as the work evolves. Working in this manner with the rich sound realm these techniques make available is compositionally and aurally compelling, and not attainable using an event-based communications scheme such as MIDI (Musical Instrument Digital Interface).
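As one small illustration of the difference between sample-level and event-level control, the following C sketch computes a crossfade between two source buffers individually for every output sample, something a delayed-performance system can do at arbitrary precision, while an event-based protocol like MIDI could only approximate it as a coarse stream of controller messages. The buffer layout and floating-point sample format are assumptions made for the example.

```c
/* Sketch of sample-accurate processing in a non-real-time setting.
   Every output sample is computed individually; no event granularity. */
#include <stddef.h>

/* Mix two equal-length source buffers into out, applying a linear
   crossfade whose gain is recomputed per sample, not per note event. */
void mix_crossfade(const float *a, const float *b, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float t = (n > 1) ? (float)i / (float)(n - 1) : 0.0f;
        out[i] = (1.0f - t) * a[i] + t * b[i];  /* sample-accurate blend */
    }
}
```

With the real-time constraint removed, the per-sample computation can be as expensive as the compositional context demands, which is the flexibility described above.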

Now, 35 years later, many of the sounds and compositional processes that we achieved in a non-real-time setting are available in “real time” live musical contexts, many at very low or no cost. This article, published in the Computer Music Journal in 1987 (CMJ Vol. 11, No. 3), describes the personal computer music system that I developed.

CMJ-Composers-Computer-Music-System-Article