The technology used to generate the required sounds for simulation and training applications has improved greatly in the last 20 years, transitioning from simple analog electronic circuits generating tones and noise signals, through crude digital sample replay, to sophisticated digital samplers borrowed from the music industry, capable of triggering many sounds simultaneously. Today's state of the art is represented by the ASTi Telestra architecture: a dedicated Linux™ realtime audio framework runs on one core of a dual-core processor while the other core handles non-realtime support tasks, resulting in an audio processing system that delivers previously unobtainable levels of capability.
Perhaps more importantly, the effort to create a simulation model utilizing these new features is easier and more intuitive than any previous method provided in a sound system fielded to the simulation and training industry.
One area that has not changed significantly in the simulation and training field is how these sounds are delivered to the trainee, and more importantly, how direction is imparted to the sound. Amplifiers and loudspeakers are, of course, used to generate the sound waves that reach the listener, and this continues to be true; however, only passing attention has been paid to where the sound is perceived to come from, and to the mechanisms used to impart that direction.
Conventionally, two approaches have been used to implement sound directionality. The simplest is to place a loudspeaker in the apparent source direction and output the sound only through that one loudspeaker, thereby implementing a point source. More commonly, a number of loudspeakers are placed around the simulated area in some regular pattern, all sounds are replayed through several (or all) of the loudspeakers at once, and the gain of each sound in each loudspeaker channel is adjusted to impart position. This latter technique is known as gain-panning.
The point-source approach provides realistic directionality; however, it is very unlikely that enough loudspeakers could be driven as point sources in a practical system, and other constraints make it impractical: we can hardly locate a speaker 20 feet off the left rear of the simulator to represent the left engine. The majority of simulators are therefore fitted with what is best described as a gain-panned system. Although gain-panning does produce a sense of sound position, the premises underlying the technique have a significant number of shortcomings.
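To make the gain-panning idea concrete, the sketch below computes per-loudspeaker gains for a source at a given azimuth by panning between the two loudspeakers that bracket it. The function name, the speaker layout, and the constant-power (sine/cosine) panning law are illustrative assumptions for this sketch, not the algorithm of any particular fielded system.

```python
import math

def pan_gains(source_az, speaker_az):
    """Constant-power gain panning between the two loudspeakers
    that bracket the source azimuth (all angles in degrees).
    Returns one gain per loudspeaker; speakers outside the
    bracketing pair receive zero gain.
    Hypothetical illustration of the gain-panning technique."""
    n = len(speaker_az)
    gains = [0.0] * n
    order = sorted(range(n), key=lambda i: speaker_az[i])
    for k in range(n):
        a = speaker_az[order[k]]
        b = speaker_az[order[(k + 1) % n]]
        span = (b - a) % 360          # angular width of this speaker pair
        offset = (source_az - a) % 360
        if span > 0 and offset <= span:
            frac = offset / span
            # sine/cosine law keeps the summed power constant as the
            # source moves between the two speakers
            gains[order[k]] = math.cos(frac * math.pi / 2)
            gains[order[(k + 1) % n]] = math.sin(frac * math.pi / 2)
            break
    return gains

# A source at 45 degrees in a square four-speaker layout splits
# equally between the front (0) and right (90) speakers.
g = pan_gains(45, [0, 90, 180, 270])
```

Note that the total power (the sum of squared gains) stays constant as the source moves, which is the usual reason for choosing a sine/cosine law over simple linear cross-fading.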
The answer is, as in most areas of advancement in the simulation industry, that the customer has a requirement, and technology has advanced to provide the capability to meet it.
The principal customer requirement derives from a number of training scenarios that place operators in a virtual environment representing a position on a battlefield, with a requirement to immerse the trainee in a full 360-degree soundfield augmenting a 360-degree half-dome visual display. In many of these scenarios, the direction of a sound cue is the first indication of the arrival of what may be friend or foe, with the enemy forces most often appearing from the front! What is perhaps less apparent is that in these situations sound is a 3D entity, requiring a much better representation of the elevation of the sound source. Almost any other sound simulation or training requirement is some subset of this one, and the approach is therefore equally applicable to conventional flight simulator applications and the like.