original location of this document: http://w3.one.net/~smadigan/vidiot/effects.htm

The College Papers



Delay is defined as the splitting of a signal into separate components, the delaying of one of these split signals, and its subsequent re-introduction into the original signal.

Echo is defined as a delay of approximately 35 ms or more, in which the regenerations are evenly spaced and in which the release portion of the sound envelope is even as well. This is also sometimes referred to as Early Reflections.

Reverb is defined as a delay of less than 35 ms in which the regenerations are randomly dispersed and in which the release portion of the sound envelope is random as well. This is also sometimes referred to as Later Reflections.
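The definitions above can be sketched as a simple digital delay line: split the signal, hold one copy back by some number of samples, feed its regenerations back into the delay buffer, and mix the result into the source. This is a minimal illustration only; the function name, parameters, and values below are my own, not from the text.

```python
def delay_effect(signal, delay_samples, feedback=0.5, mix=0.5):
    """Split a signal, delay one copy with feedback regenerations,
    and re-introduce it into the original (dry) signal."""
    n = len(signal) + delay_samples * 4   # leave room for the tail
    buf = [0.0] * n                       # the delayed (wet) path
    out = [0.0] * n
    for i in range(n):
        dry = signal[i] if i < len(signal) else 0.0
        delayed = buf[i - delay_samples] if i >= delay_samples else 0.0
        buf[i] = dry + delayed * feedback  # regenerations feed back in
        out[i] = dry + delayed * mix       # re-introduce into the source
    return out
```

Feeding a single impulse through this loop shows the evenly spaced, evenly decaying repeats that the echo definition describes: each pass around the feedback loop scales the repeat by the feedback amount.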


The first delay effect predates magnetic recording and was used in radio broadcasting. It was created by sending the audio signal across telephone lines to a city hundreds of miles away and then bringing it back. The time it took the signal to travel the distance across the phone lines caused it to arrive somewhat later than the source signal.

After magnetic recording had been introduced, Les Paul realized that the space between the record and playback heads of a tape recorder could be used to create a Tape Delay. Later, to increase the delay time, he tied two tape recorders together and, with the advent of variable speed playback decks, he could control the actual delay time by slowing or speeding up the second deck. But Les Paul was faced with a dilemma every time the tape on the second deck ended. The answer came a few years later with a new form of tape delay (new back then) called Echoplexing. This utilized a continuous tape loop which allowed for continuous delay without running out of tape. Echoplexers were used up into the seventies and may still be found in some studios today (usually on a dusty back shelf though).

The mid seventies brought the beginning of the digital age, and with it, the first digital delay lines, or DDLs. However, due to the cost of digital technology back then, DDLs generally only had about a 10 - 15 kHz bandwidth. Their primary use was to keep the signal in particularly large venues in sync between the front and back of house speakers.

In the late seventies, Analog Delay became popular because of its reduced price. Many musicians still claim that analog delays were much fatter and warmer sounding than digital delays; however, anyone who has used one knows they are subject to extraneous noise from EMF and other sources.

Echo and Reverb, in pre-digital days (was there really such a time?), were created by playing an audio signal in a room with very hard surfaces and recording the reflections. These types of rooms, known as Live Rooms, offered engineers a chance to control the parameters of the reverb simply by changing the placement of the microphone capturing the reflections. If you needed more reverb, you simply moved the mic further from the sound source; less reverb was achieved by moving the mic closer to it. Extravagant ways of changing the envelope of the reverb were soon devised, such as hanging sound-absorbing mats on some walls, the use of moveable baffles, etc.

Today, engineers for the most part record signals dry (without any effect) trying to get the best quality of signal, and then add these types of effects when making the final mix of a recording. This has led to a definite change in the structure of recording studios. Where once only the very rich could afford rooms that offered high quality reverberation, now anyone with a relatively quiet recording area can manipulate sounds after the fact with high quality digital processing. Digital processing has become so standard in fact that many home stereos offer some type of reverb for mic inputs.


Acoustic Chambers are rooms with highly reflective walls and movable baffles to allow the engineer to change the shape of the room. The reverb is created by the reflections from the walls and either the reflection or absorption of sound by these baffles. Microphones are placed in the room so that they receive minimum direct sound and maximum reflected sound.

The advantage of an acoustic chamber is the naturalness of the reverb sound. The downside, however, is that they are extremely expensive to build, largely because of the surface treatment requirements and the fact that they must be at least 2,000 sq. ft. so as not to lose bass information. They are also very time consuming to set up, which raises studio costs as well.


If you have ever listened to a recording made prior to the digital revolution, chances are you have heard the effects of Plate Reverb. Plate Reverb uses a steel plate suspended inside a frame. Vibrations are introduced into the plate by a driver similar to those used in speakers. Contact pickups mounted on the plate capture these vibrations, which are then transduced into electrical signals. Plate reverbs create a sound which is very pleasing to the human ear and are still the favored form of reverb among many professional musicians. The main drawbacks of the plate reverb are its inherent cost and size. Most plate reverbs come in sizes from 6 ft. to as large as 18 ft. and must be isolated in rooms by themselves. This is not very practical for most studios these days, and certainly out of the question when it comes to Live Sound Reinforcement.

Foil Reverbs use principles found in plate reverbs and ribbon microphones. The signal is introduced into a thin piece of gold foil which acts similarly to the plate. However, the transduction of vibration into electrical signal takes place on a much smaller scale.


Spring Reverbs use the motion of a spring's vibration to transduce the signal back into electrical energy. Spring reverbs are the cheapest of reverb units and can be found on virtually all guitar amplifiers with a reverb knob. The knob simply adjusts the amount of reverb fed back into the source signal. The problem with spring reverb is the added noise and the possibility of severe overloads. If an amp using spring reverb receives a strong jolt, such as being accidentally kicked on stage, the extreme vibrations of the spring can generate large amounts of current within the amp, resulting in blown tubes. (This is not the case with solid state amps, but most solid state amps are moving to some form of cheap digital reverb anyway.) If you're a guitar player, you're most likely aware of the effects of spring reverb. If at all possible, avoid spring reverb like the plague. Only a few were ever created that were worth the problems associated with them (I personally like the sound of an old Fender Twin Reverb, but again the potential for problems makes it a tough choice on live gigs).


Until the advent of digital reverb, most engineers worried little about its effects on the sound envelope. For the most part it either sounded the way they wanted or it didn't. With the digital revolution, sound engineers were forced to see the relationship between their dry sound and the effect reflective surfaces have on it. Suddenly regenerations and release time were at the whim of the engineer. On today's high-end reverb units, an engineer has the option to program in the exact dimensions of the "room" he wishes to use for digital reverb, and microchips work out the mathematics in complex algorithms to produce somewhat accurate reverberation for any environment.
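The algorithms inside these units are proprietary, but one classic formula relating a room's dimensions to its reverberation time is Sabine's equation: RT60 = 0.161 * V / A, where V is the room volume in cubic meters and A is the total absorption (each surface's area times its absorption coefficient). The sketch below is just an illustration of that formula; the function name and example numbers are my own.

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverb time (seconds for a 60 dB decay) with Sabine's
    equation. surfaces is a list of (area_m2, absorption_coeff) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption
```

A small, fairly dead room gives a short RT60; doubling the volume while keeping the same absorption doubles the reverb time, which matches the intuition that bigger rooms sound wetter.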


One effect used often by engineers is Doubling. After a musician lays an initial track to tape, he then plays the same thing on a different track while listening to the first track. The small variances in timing between the two recorded tracks create a fuller sound. In a studio session where costs permit it, this style of doubling is preferred.

Digital Doubling splits the signal into two components and delays one portion of it randomly over a period of several milliseconds (ms.). This can be quite advantageous in expensive studios where every second of session time counts. Its main drawback is its inability to anticipate the next note as a musician might. In normal doubling, the musician may play one note a few milliseconds before the prerecorded one, the next note he may play a few milliseconds behind the prerecorded one. This gives the naturalness that normal doubling is famous for. With digital doubling, the processor is forced to always double the notes behind the prerecorded ones, since the electronics have no capability to predict an upcoming note. This may however change in the future as electronic processing becomes more powerful and memory cheaper. Don't expect it on your doorstep any time soon though. The other drawback to digital doubling is, as always, a slight addition of noise beyond that introduced by the tape machine itself.
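A crude digital doubler along the lines described above can be sketched in a few lines: pick a random delay of a few milliseconds and mix the lagging copy back in. As the text notes, the copy can only ever trail the original, which is exactly the limitation shown here. Function name, level, and ranges are illustrative assumptions, not from the text.

```python
import random

def digital_double(signal, sr=44100, max_ms=25.0):
    """Delay a copy of the signal by a random few milliseconds and mix
    it back in. The doubled copy always lags the original; unlike a
    human player, it can never anticipate the next note."""
    d = random.randint(1, int(sr * max_ms / 1000))  # random lag in samples
    out = list(signal) + [0.0] * d                  # room for the lagging copy
    for i, x in enumerate(signal):
        out[i + d] += x * 0.7                       # doubled voice a bit quieter
    return out
```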


Whereas an echo can be defined as a delayed signal with several regenerations, Slapback Echo is generally categorized as having only one distinct repetition. The Time of Distinction, or the point at which a distinct repetition can be heard, is usually between 30 and 50 milliseconds, depending on the listener. Trained studio engineers can generally hear a distinct repetition at around 35 ms.


Phasing is created by splitting a signal into two components, delaying one portion of it by a few milliseconds, and then reintroducing it back into the source at the same amplitude as the source. This causes the two signals to sometimes be in phase and other times out of phase with each other, causing a distinctive swirling sound. The device used to perform this operation is most commonly called a Phase Shifter but may also be referred to as simply a phaser (Star Trek fans always get such a kick out of this). An older effects pedal known as an Enveloper did much the same thing though it allowed for some user changes of the bandwidth affected.
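The in-phase/out-of-phase behavior is easy to demonstrate numerically: mix a sine wave with an equal-amplitude copy delayed by half its period and the two cancel completely; delay by a whole period and they reinforce. The sketch below (names and numbers are mine, purely illustrative) shows both cases for a 500 Hz tone.

```python
import math

def mix_with_delay(freq, delay_s, sr=48000, n=4800):
    """Sum a sine wave with an equal-amplitude copy of itself
    delayed by delay_s seconds."""
    out = []
    for i in range(n):
        t = i / sr
        out.append(math.sin(2 * math.pi * freq * t) +
                   math.sin(2 * math.pi * freq * (t - delay_s)))
    return out

# 500 Hz has a 2 ms period: a 1 ms delay puts the copy exactly out of
# phase (cancellation); a 2 ms delay puts it back in phase (reinforcement).
```

A phase shifter sweeps that delay over time, so different frequencies cancel and reinforce from moment to moment, producing the swirl.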


The name Flanging was taken from an effect created in the early studios, again by our hero Les Paul, who literally put pressure on the tape flange (the outer rim of the tape's reel) to slow it down. This caused the phase shifting described above, along with a slight shift in pitch due to the slowing of the tape movement across the record/playback heads. Today, flangers still use this shift in pitch, which is the primary distinction between a flanger and a phase shifter.


Pitch Shifters change the pitch of a signal without changing the time scheme of the recorded material. This is often used in a process known as Time Compression which causes the program material to speed up, thus compressing the amount of time taken for playback. Normally, increasing the speed of playback causes a shift in pitch of the recorded material, however, using a pitch shifter to "shift" the pitch down to its original level can remove any noticeable effects of the Time Compression. The result is a 4 minute recording playing back in 3 minutes 30 seconds, with no noticeable change in overall pitch. This effect is used most heavily in the commercial broadcast arena where strict guidelines on program time must be adhered to.
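The arithmetic behind the 4:00-to-3:30 example is straightforward: the speed factor is the ratio of the two lengths, and the pitch shifter must pull the pitch back down by 12 * log2(speed) semitones to cancel the speed-up. A quick sketch (function name is my own):

```python
import math

def pitch_correction_semitones(original_s, target_s):
    """Semitones of downward pitch shift needed to undo a speed-up
    that squeezes original_s seconds of program into target_s seconds."""
    speed = original_s / target_s       # e.g. 240 / 210 ~ 1.14x faster
    return 12 * math.log2(speed)        # ~2.3 semitones sharp to correct
```

So compressing a 4 minute recording into 3 minutes 30 seconds raises everything about 2.3 semitones, which the shifter then pulls back down.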

The Mother of All Pitch Shifters is the Eventide Harmonizer. While it definitely does more than just shift pitch, a majority of its algorithms are based initially on this principle. Def Leppard's famous vocal sound came from such processing, and Alanis Morissette's new album is one of the better displays of this technique I've heard recently. You'll also hear it used a lot on rock guitars to make them sound fuller.


The parameters available on a given effect unit are as numerous as the units themselves. However, these are some general parameters you can expect to see on most units:

Delay Time sets the amount of time between the introduction of the source signal and the reintroduction of the affected signal. This measurement is most often in milliseconds and ranges anywhere from a few ms to several seconds.

Pre-Delay Time is the amount of time before an affected signal's regenerations are reintroduced back into the source signal. In nature, the direct sound reaches the listener before the reflected sounds. The closer the source, the longer the delay between the direct and reflected sounds; conversely, the further the sound source, the shorter the delay between the direct and reflected sounds. By changing the Pre-Delay parameter, the engineer can cause the listener to perceive a difference in the spatial depth of the sound. It also lends a perceived crispness to the attack of the sound envelope by allowing the listener to hear the initial attack without effects. This is sometimes referred to as giving the sound more punch, since reflections tend to round out the attack of a sound.
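The natural pre-delay is just the difference between the reflected path and the direct path, divided by the speed of sound. A short sketch makes the close-source/long-pre-delay relationship concrete (names and distances are my own example values):

```python
SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def predelay_ms(direct_m, reflected_m):
    """Milliseconds between the direct sound's arrival and the first
    reflection, given the lengths of the two paths in meters."""
    return (reflected_m - direct_m) / SPEED_OF_SOUND * 1000.0
```

A source 2 m from the listener whose first reflection travels 12 m arrives about 29 ms ahead of the reflections; move the source further away and the two paths converge, shrinking the pre-delay.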

When the direct sound and the reflected sound reach the listener at the same level, the source is said to be at the Critical Distance from the listener.

A high-pass filter (HPF) keeps low frequencies out of the reverberated signal. Vidiot's Rule of Thumb is to generally set the HPF at around 110 Hz so that bass frequencies (in particular those which are perceived as mono) do not get any reverberations. This keeps the low bass clean and tight.

A low-pass filter (LPF) is the opposite of the HPF and helps reproduce more of nature's tendencies with sound. It takes great amounts of energy to keep high frequency vibrations moving, and as a result high frequency information is the first to dissipate over distance. According to one statistic, 30% of high frequency information is absorbed upon contact with the first surface, though this depends directly on the material the surface is made of. The LPF is generally set at 8 kHz for vocals, which allows the engineer to create realistic sounding reverb, since reverb in the 10-20 kHz range is rarely perceived in a real setting. However, allowing reverb in this range does give the signal a crispness which is sometimes desirable.
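The filters themselves need not be fancy; the classic building block is the one-pole filter, with the high-pass formed by subtracting the low-passed signal from the input. The sketch below is a bare-bones illustration of that idea (function names and the cutoff values are my own, not from any particular reverb unit):

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sr=48000):
    """One-pole low-pass: each output sample leans toward the input
    by an amount set by the cutoff frequency."""
    a = math.exp(-2 * math.pi * cutoff_hz / sr)  # smoothing coefficient
    y, out = 0.0, []
    for x in signal:
        y = (1 - a) * x + a * y
        out.append(y)
    return out

def one_pole_highpass(signal, cutoff_hz, sr=48000):
    """High-pass as input minus the low-passed copy."""
    lp = one_pole_lowpass(signal, cutoff_hz, sr)
    return [x - l for x, l in zip(signal, lp)]
```

Run a steady (DC) signal through the 110 Hz high-pass and it dies away to nothing, which is exactly the "keep the low bass out of the reverb" behavior described above.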

Decay Time is the time it takes the affected portion of the signal to drop 60 dB below the source level. As explained earlier, the time between regenerations in a reverb is random; however, the time before the reverb signal is released can be set by this parameter.
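For a feedback delay loop like the ones described earlier, the 60 dB decay time follows directly from the loop settings: each trip around the loop loses -20 * log10(feedback) dB, so the decay time is the delay time multiplied by however many trips it takes to lose 60 dB. A small sketch (name and values are my own illustration):

```python
import math

def rt60_from_feedback(delay_s, feedback):
    """Time for a feedback delay loop's output to fall 60 dB below the
    source, given the loop delay in seconds and feedback gain (0..1)."""
    db_lost_per_pass = -20 * math.log10(feedback)
    return delay_s * 60.0 / db_lost_per_pass
```

At 50% feedback each pass loses about 6 dB, so a 50 ms loop decays 60 dB in roughly half a second; raising the feedback stretches the tail.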