Delay is defined as the splitting of a signal into separate components, the slowing of one of these split signals, and its subsequent re-introduction into the original signal.
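The split/slow/re-mix description above can be sketched in a few lines. This is a minimal illustration only; the function name, the NumPy representation, and the 50/50 mix are my own choices, not anything from the text:

```python
import numpy as np

def delay_effect(dry, delay_samples, mix=0.5):
    """Split the signal, delay one copy, and mix it back into the original."""
    wet = np.zeros_like(dry)
    wet[delay_samples:] = dry[:-delay_samples]  # the slowed (delayed) copy
    return (1 - mix) * dry + mix * wet          # re-introduce it into the source

# A single click at sample 0, run through a 100-sample delay
x = np.zeros(1000)
x[0] = 1.0
y = delay_effect(x, 100)
```

With the 50/50 mix, the single click comes out as two half-height clicks 100 samples apart: the direct sound and its delayed copy.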
Echo is defined as a delay of greater than approximately 35 ms in which the regenerations are evenly spaced and in which the release portion of the sound envelope is even as well. This is also sometimes referred to as Early Reflections.
Reverb is defined as a delay of less than approximately 35 ms in which the regenerations are randomly dispersed and in which the release portion of the sound envelope is random as well. This is also sometimes referred to as Late Reflections.
After magnetic recording had been introduced, Les Paul realized that the space between the record and playback heads of a tape recorder could be used to create a Tape Delay. Later, to increase the delay time, he tied two tape recorders together and, with the advent of variable speed playback decks, he could control the actual delay time by slowing or speeding up the second deck. But Les Paul was faced with a dilemma every time the tape on the second deck ended. The answer came a few years later with a new form of tape delay (new back then) called Echoplexing. This utilized a continuous tape loop which allowed for continuous delay without running out of tape. Echoplexers were used up into the seventies and may still be found in some studios today (usually on a dusty back shelf though).
The mid seventies brought the beginning of the digital age, and with it, the first digital delay lines, or DDLs. However, due to the cost of digital technology back then, DDLs generally only had about a 10-15 kHz bandwidth. Their primary use was to keep the signal in particularly large venues in sync between the front-of-house and rear speakers.
In the late seventies, Analog Delay became popular because of its reduced price. Many musicians still claim that analog delays were much fatter and warmer sounding than digital delays; however, anyone who has used one knows they are subject to extraneous noise from EMF and other sources.
Echo and Reverb, in pre-digital days (was there really such a time?), were created by playing an audio signal in a room with very hard surfaces and recording the reflections. These types of rooms, known as Live Rooms, offered engineers a chance to control the parameters of the reverb simply by changing the placement of the microphone capturing the reflections. If you needed more reverb, you simply moved the mic further from the sound source; less reverb was achieved by moving the mic closer to it. Extravagant ways of changing the envelope of the reverb were soon devised, such as hanging sound-absorbing mats on some walls, the use of moveable baffles, etc.
Today, engineers for the most part record signals dry (without any effect) trying to get the best quality of signal, and then add these types of effects when making the final mix of a recording. This has led to a definite change in the structure of recording studios. Where once only the very rich could afford rooms that offered high quality reverberation, now anyone with a relatively quiet recording area can manipulate sounds after the fact with high quality digital processing. Digital processing has become so standard in fact that many home stereos offer some type of reverb for mic inputs.
The advantage of an acoustic chamber is the naturalness of the reverb sound. The downside, however, is that they are extremely expensive to build, largely because of the surface treatment requirements and the fact that they must be at least 2,000 sq. ft. so as not to lose bass information. They are also very time consuming to set up, which results in higher studio construction costs as well.
Foil Reverbs use principles found in plate reverbs and ribbon microphones. The signal is introduced into a thin piece of gold foil which acts similarly to the plate. However, the transduction of vibration into electrical signal takes place on a much smaller scale.
Digital Doubling splits the signal into two components and delays one portion of it randomly over a period of several milliseconds (ms). This can be quite advantageous in expensive studios where every second of session time counts. Its main drawback is its inability to anticipate the next note as a musician might. In normal doubling, the musician may play one note a few milliseconds before the prerecorded one and the next note a few milliseconds behind it. This gives the naturalness that normal doubling is famous for. With digital doubling, the processor is forced to always double the notes behind the prerecorded ones, since the electronics have no capability to predict an upcoming note. This may change in the future as electronic processing becomes more powerful and memory cheaper; don't expect it on your doorstep any time soon, though. The other drawback to digital doubling is, as always, a slight addition of noise beyond that introduced by the tape machine itself.
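As a rough sketch of the idea (not any particular unit's algorithm), a digital doubler can be modeled as an always-positive delay that wanders over several milliseconds. The smoothing scheme, the parameter names, and the defaults here are all illustrative assumptions:

```python
import numpy as np

def digital_double(dry, sr=44100, min_ms=10.0, max_ms=30.0, mix=0.5):
    """Mix in a copy of the track whose delay wanders randomly.

    The delay is always positive: as the text notes, the processor can
    only lag behind the source, never anticipate it as a musician might.
    """
    n = len(dry)
    rng = np.random.default_rng(0)
    # Smooth random delay curve: a few random control points (in samples),
    # linearly interpolated across the whole signal
    points = rng.uniform(min_ms, max_ms, size=8) * sr / 1000.0
    delay = np.interp(np.arange(n), np.linspace(0, n - 1, 8), points)
    # Fractional delay via linear interpolation of the read position
    read_pos = np.arange(n) - delay
    wet = np.interp(read_pos, np.arange(n), dry, left=0.0)
    return (1 - mix) * dry + mix * wet
```

Because the delay curve changes slowly, the doubled copy drifts ahead of and behind its average lag, which is what loosens up the mechanical feel of a fixed delay.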
The Mother of All Pitch Shifters is the Eventide Harmonizer. While it definitely does more than just shift pitch, a majority of its algorithms are based initially on this principle. Def Leppard's famous vocal sound came from such processing, and Alanis Morissette's new album is one of the better displays of this technique I've heard recently. You'll also hear it used a lot on rock guitars to make them sound fuller.
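The crudest form of the underlying principle is simply reading the signal back at a different rate. The sketch below shows only that naive resampling version, which shifts pitch but also shortens or lengthens the signal; a real unit like the Harmonizer splices the audio to preserve duration, and nothing here should be taken as its actual algorithm:

```python
import numpy as np

def pitch_shift_resample(x, semitones):
    """Naive pitch shift by resampling: reading faster raises the pitch,
    but also shortens the signal (the duration problem real pitch
    shifters exist to solve)."""
    ratio = 2.0 ** (semitones / 12.0)        # frequency ratio per semitone
    n_out = int(len(x) / ratio)
    read_pos = np.arange(n_out) * ratio      # faster read = higher pitch
    return np.interp(read_pos, np.arange(len(x)), x)
```

Shifting up one octave (12 semitones) doubles the read rate, so the output is half as long; that audible "chipmunk" tradeoff is exactly what grain-splicing pitch shifters avoid.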
This sets the amount of time between the introduction of the source signal and the reintroduction of the affected signal. This measurement is most often in milliseconds and ranges anywhere from a few ms to several seconds.
Pre-Delay Time is the amount of time before an affected signal's regenerations are reintroduced back into the source signal. In nature, the direct sound reaches the listener before the reflected sounds. The closer the source, the longer the delay between the direct and reflected sounds. Conversely, the further the sound source, the shorter the delay between the direct and reflected sounds. By changing the Pre-Delay parameter, the engineer can cause the listener to perceive a difference in the spatial depth of the sound. It also lends a perceived crispness to the attack of the sound envelope by allowing the listener to hear the initial attack without effects. This is sometimes referred to as giving the sound more punch, since reflections tend to round out the attack of a sound.
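On the mixer side, the parameter amounts to holding back the wet (reverb) return before summing it with the dry signal, so the dry attack is heard on its own first. A minimal sketch, with illustrative names and defaults:

```python
import numpy as np

def apply_predelay(dry, wet, predelay_ms, sr=44100):
    """Delay the reverb return by the pre-delay time, then sum with
    the dry signal so the initial attack arrives effect-free."""
    n = int(sr * predelay_ms / 1000.0)
    delayed = np.concatenate([np.zeros(n), wet])[:len(wet)]
    return dry + delayed
```

Raising `predelay_ms` pushes the reflections later, which (per the paragraph above) reads to the ear as a closer source and a punchier attack.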
When the direct sound and reflected sound reach the listener at the same level, the source is said to be at the Critical Distance from the listener.
A high-pass filter (HPF) is often made available to help reproduce more of nature's tendencies with sound. It takes great amounts of energy to keep high-frequency vibrations moving, and as a result high-frequency information is the first to dissipate over distance. According to one statistic, 30% of high-frequency information is absorbed upon contact with the first surface, though this of course depends directly on the material the surface is made of. Vidiot's Rule of Thumb is to generally set the HPF at around 110 Hz so that bass frequencies (in particular, those which are perceived as mono) do not get any reverberations. This keeps the low bass clean and tight.
A low-pass filter (LPF) is the opposite of the HPF and is generally set at 8 kHz for vocals. This also allows the engineer to create realistic sounding reverb, since reverb in the 10-20 kHz range is rarely perceived in a real setting. However, allowing reverb in this range does give the signal a crispness which is sometimes desirable.
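The two rules of thumb above (HPF around 110 Hz, LPF around 8 kHz on the reverb send) can be sketched with simple one-pole filters. Real units would use steeper filters; the function names and cutoff defaults here just follow the text:

```python
import numpy as np

def one_pole_lowpass(x, fc, sr=44100):
    """First-order low-pass filter (gentle 6 dB/octave rolloff)."""
    a = 1.0 - np.exp(-2.0 * np.pi * fc / sr)
    y = np.zeros_like(x)
    state = 0.0
    for i, s in enumerate(x):
        state += a * (s - state)
        y[i] = state
    return y

def band_limit_send(x, sr=44100, hpf_hz=110.0, lpf_hz=8000.0):
    """Keep low bass out of the reverb (HPF ~110 Hz) and roll off
    unnatural highs (LPF ~8 kHz) before the signal hits the reverb."""
    highpassed = x - one_pole_lowpass(x, hpf_hz, sr)  # HPF = input minus its low-pass
    return one_pole_lowpass(highpassed, lpf_hz, sr)
```

Feeding the reverb from `band_limit_send(x)` instead of `x` keeps the sub-bass tight and avoids a reverb tail full of 10-20 kHz energy that rarely occurs in a real room.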
The time it takes the affected portion of the signal to drop 60 dB below the source. As explained earlier, the time between regenerations in a reverb is random; however, the time before the reverb signal is released can be set by this parameter.
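Although the text doesn't give the formula, the standard relationship for a recirculating delay is worth showing: each trip around a loop of length D seconds multiplies the signal by the feedback gain g, so decaying 60 dB in RT60 seconds requires g = 10^(-3·D/RT60). A small sketch (the function name and example values are mine):

```python
import math

def feedback_gain(delay_s, rt60_s):
    """Feedback gain that makes a recirculating delay decay by 60 dB
    in rt60_s seconds (each trip through the loop loses
    20*log10(g) dB, and there are rt60_s/delay_s trips)."""
    return 10.0 ** (-3.0 * delay_s / rt60_s)

g = feedback_gain(0.030, 2.0)  # 30 ms loop, 2 s decay time
```

Shorter loops need a gain closer to 1 to sustain the same decay time, since the signal passes through them more often per second.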