Dealing With Latency
Do you sometimes hear an unexplained slap echo or flanging effect in your monitors while recording? In a mix, do you sometimes find a part lagging behind the beat for no apparent reason? If the answer to either of these questions is “yes”, you may have a latency problem. Latency is a time delay that is inherent in digital processing. Usually this latency is not something you intended to have, and if you do not understand what it is and how it works, you can be confused or even frustrated when it becomes audible. If you know how to deal with latency, you can usually either reduce it or work around it, and I am going to tell you how.
Latency is a kind of delay, and this means it is a matter of timing. It takes time for both sounds in air and signals in wire to travel. This means that there is always some delay between the source and the destination. Signals in wire travel so fast that we do not notice this delay until the travel distance is many miles. Sounds in air, though, are a lot slower, which is why we can hear distinct echoes bouncing off walls that are about 50 feet (or more) away.
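To put a number on that, here is a quick sketch of the arithmetic (the speed-of-sound figure is approximate, and the function name is my own):

```python
# Rough sketch: why a wall about 50 feet away produces an audible echo.
SPEED_OF_SOUND_FT_PER_S = 1130.0  # approximate, at room temperature

def echo_delay_ms(distance_ft: float) -> float:
    """Round-trip delay for sound reflecting off a surface and coming back."""
    return (2 * distance_ft / SPEED_OF_SOUND_FT_PER_S) * 1000.0

print(round(echo_delay_ms(50)))  # about 88 ms -- long enough to hear as a distinct echo
```

By contrast, an electrical signal covers that same 100-foot round trip in well under a microsecond, which is why cable runs never produced audible delays.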
Back when all recording equipment was analog, we did not have to worry about delays in the electronic parts of the recording studio, because signals passed through the circuitry virtually instantaneously. Now that most recording setups are computer-based, however, that has changed. In a digital system, any process you do is broken down into a gazillion tiny steps, and any processor you use can only do one step at a time. Even when a computer is multi-tasking, it only seems as though everything is happening at once; in reality, all the different processes have to “take turns”, but the processor is able to jump around so fast that we don’t notice that this is happening. Now, each little step in a process takes time, and if there are enough steps you can have a noticeable delay.
In fact, there are all sorts of delays in a digital system. Even the conversions between analog and digital take time, although the delay involved is very short, so we usually don’t notice it. Incoming audio is converted into a stream of numbers, called samples, and those numbers are then processed by the system. In other words, it all comes down to math. There are really only three things that the system can do with these numbers:
- Add them together
- Multiply them
- Delay them
Everything that a digital system, whether it is a hardware mixing console or a personal computer, does to audio is a combination of just these three processes. Adding and multiplying can both be done at amazing speed, but without the use of delays you actually can’t do anything but control volume and combine signals. The minute you want to do anything else, delays have to be used. Not only reverb and echo effects, but equalizers, filters, even compressors and gates have to use delays. There is good news here, though: in many instances, the delays are not in the direct signal path, but rather are used to “re-circulate” some of the signal. In cases like that, little or no real latency may be caused by those processes.
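To make that concrete, here is a minimal sketch (not any real DAW’s code) of an echo effect built from nothing but those three operations. Notice that the delay sits in the feedback loop, not in the direct path, so the dry signal passes through with no latency at all:

```python
# Illustrative sketch: a re-circulating (feedback) echo built from only
# add, multiply, and delay. All names here are my own.

def echo(samples, delay_samples, feedback=0.5):
    """out[n] = in[n] + feedback * out[n - delay_samples]"""
    out = []
    for n, x in enumerate(samples):
        # "delay": look back into earlier output; zero before the loop fills
        delayed = out[n - delay_samples] if n >= delay_samples else 0.0
        out.append(x + feedback * delayed)  # one add, one multiply
    return out

# A single impulse produces a decaying train of echoes:
print(echo([1.0, 0, 0, 0, 0, 0, 0], delay_samples=2))
# [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.125]
```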
Some digital systems and equipment are able to keep latency low enough that you will never notice it. This is especially true of systems that use dedicated hardware DSP for audio processing. A stand-alone hardware digital console, for example, usually has multiple Digital Signal Processors, dividing the load up so that they can work faster. The top-tier Pro Tools systems do their audio processing with special cards that have multiple DSP chips, essentially providing a full hardware mixer that only uses the host computer for control, display, and storage.
The systems that are most vulnerable to latency problems are generally those that do everything using the CPU of the host computer. This is because the same processor has to do everything, including all the “internal housekeeping” of the machine in addition to all of the audio processing. Keeping all of this working smoothly is a major juggling act. The audio is not really processed “in real time”; the audio processing is actually done much faster than real time so that the processor can take regular “breaks” to do all of its other jobs. This trick is made possible by using chunks of memory called “buffers” to hold certain amounts of audio until it is needed.
There are often several sets of these buffers, with a set reserved for each of several purposes. Most of these are completely invisible to the user, but some adjustments may be made available. Depending on the system, you may have the option of setting the size of the buffers, and you may have a choice of how many of these buffers the system loads up and holds in reserve for processing. The size and number of these buffers determine the minimum delay of the system. Using smaller or fewer buffers shortens the latency, but limits the amount of processing that the system can keep up with. Using more or larger buffers allows the system to keep up with a heavier processing load or handle more channels/tracks before it chokes. Once you are fully into the mixing phase of a project, and you do not need to monitor input sources any more, there is no reason not to use higher buffer settings to allow you to use more processing.
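The relationship between buffer settings and latency is simple arithmetic. This sketch (the function name is mine, and real round-trip latency also includes converter and driver overhead on top of this) shows the minimum one-way delay contributed by the buffering alone:

```python
# Back-of-envelope sketch of how buffer settings translate into latency.
# Real-world figures will be higher once converters and drivers are included.

def buffer_latency_ms(buffer_size, sample_rate, num_buffers=1):
    """Minimum delay, in milliseconds, contributed by the buffering itself."""
    return num_buffers * buffer_size / sample_rate * 1000.0

for size in (64, 256, 1024):
    print(f"{size:5d} samples @ 44.1 kHz -> {buffer_latency_ms(size, 44100):.1f} ms")
# roughly 1.5 ms, 5.8 ms, and 23.2 ms respectively
```

At 64 samples the delay is well below anything you could hear; at 1024 samples you are into clearly audible slap-echo territory, which is why tracking and mixing call for different settings.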
The main reason for concern about latency, of course, is live input source monitoring or live input source mixing. I have, in fact, handled the live mixing for stage shows using my DAW. Naturally, I had to keep the buffer settings very low, and I had to limit my choices of plugins for that use, but I still had far better features than those available on the hardware mixers the theater had available. I did not always have this capability, though, and you may not want to work in the same way.
Many engineers are still more comfortable working with a regular mixing console and using a DAW as a sort of “glorified tape machine”. This is especially common among engineers who grew up with the old analog technology. Using a console for live input monitoring pretty much eliminates worries about latency. Lately, though, it is becoming more popular to do all mixing “in the box”, eliminating the need for a large console. I personally made that choice some time ago.
Some sound cards and interfaces offer a certain amount of hardware mixing capability, which allows input source monitoring without latency. These setups range from very simple mixing with just level controls to full DSP for EQ and effects as well. Either this option or the use of an external console will allow you to stick with high buffer settings without having to worry about source latency.
If you are going to do your input source monitoring through your DAW software, you need to use the smallest and fewest buffers that your system can tolerate, and you need to avoid plugins and processes that cause latency. You may also have to limit the number of tracks and inputs you are using. You may have to do some experimenting to find out what works and what doesn’t. If your buffers are too few or small, you may have glitches in the recording, which can sound like occasional ticks or pops in the file. In severe cases, the system may even freeze or halt. If the buffers are too many or large, the latency will become noticeable as anything from a “flanging” effect to a pronounced slap delay in the monitors.
Testing for input latency is simple. Put on monitor headphones, and listen to either a microphone or a direct input from an instrument. The toughest test is monitoring for vocals. Speak or sing into the mic and listen to yourself live in the headphones. Even at the shortest buffer settings, you may hear a slight sort of phasing or flanging effect, and at higher settings you will get a definite slap echo. You can also try playing an instrument through a DI into the system while listening with headphones. The delay on an instrument recorded this way will not be as noticeable as for a vocal. You may, in fact, find that monitoring instruments this way works fine while monitoring vocals is distracting or difficult. In that case, you may want to monitor instruments through the DAW mixer while monitoring vocals through either the hardware monitoring of the sound card or using an external mixer.
If you are developing a mix while you are still tracking, you need to be aware that not all processes and plugins will permit live monitoring without serious latency. There are two problems to watch out for here: first, plugins that cause significant latency, and second, plugins that change buffer sizes while working. Long latencies will impose a noticeable delay, often on the entire mix as the DAW applies latency compensation. Changing buffer sizes can cause stability problems. Plugins causing either problem should not be used while you are monitoring or mixing live inputs through your DAW. If you are not sure whether a plugin will cause trouble, try the input latency test with and without the plugin. If adding the plugin causes noticeable delay, don’t use it until you are done with monitoring inputs.
If your DAW applies automatic latency compensation for plugins, a latency-causing plugin applied to any channel, or anywhere else in the mix, will cause all inputs to be delayed.
That leads me to another topic: how plugin latency can affect mixing. The latency caused by a plugin delays everything that goes through that plugin, which can put that part out of time with other parts not processed with that plugin. Some DAW programs apply automatic latency compensation to “fix” this problem. A plugin is supposed to “report” its latency to the host program, so that the program can apply a matching delay to all paths not routed through that plugin. A plugin that does not report its latency, or that reports that latency incorrectly, can cause a timing error in the rest of the system, thus pulling one or more tracks of the mix out of sync. If you start hearing timing problems in a mix that weren’t there before, start checking to see what plugins you have added, and try removing them one at a time until you find the “faulty” plugin.
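The idea behind automatic latency compensation can be sketched in a few lines. This is my own simplified illustration (all names hypothetical), assuming every plugin reports its latency honestly:

```python
# Sketch of automatic plugin delay compensation. The host finds the
# worst-case reported latency and delays every other path to match it.

def compensation_delays(reported_latencies):
    """Given each track's total reported plugin latency (in samples),
    return the extra delay the host must add to each track to line them up."""
    worst = max(reported_latencies.values())
    return {track: worst - latency for track, latency in reported_latencies.items()}

tracks = {"vocal": 512, "guitar": 0, "drums": 64}
print(compensation_delays(tracks))  # {'vocal': 0, 'guitar': 512, 'drums': 448}
```

You can see from the math why a plugin that under-reports its latency breaks this scheme: the host’s matching delays come out too short, and that track lands late relative to everything else. It also shows why one slow plugin anywhere in the mix delays every input path.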
There is one more possible trouble spot for uncorrected latency, and that is in the use of parallel processing. Sometimes you may want to send a track, or even a submix, to two different paths, with each path processed differently. This is most commonly done with compression. I have done mixes where I have applied a multiband compressor in this way. I found that this particular plugin had some latency to it, and when my system did not automatically compensate for that latency (now it does), the difference in the delay time between the processed path and the “clean” path caused a flanging effect. If all else fails, you can put the offending plugin in both paths, with one instance of it set so that it does no processing. The “dummy” plugin provides a matching delay, which cures the flanging problem.
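Why does a small delay mismatch between parallel paths sound like flanging rather than an echo? Summing a signal with a slightly delayed copy of itself cancels some frequencies and reinforces others (comb filtering). Here is an illustrative sketch of that effect on pure tones; the numbers are mine, not from any particular plugin:

```python
# Sketch of comb filtering from an uncompensated parallel path.
# sin(x) + sin(x - phase) has peak amplitude 2*|cos(phase/2)|:
# 2.0 means full reinforcement, 0.0 means full cancellation.
import math

def combined_level(freq_hz, mismatch_samples, sample_rate=44100):
    """Peak amplitude of a tone summed with a delayed copy of itself."""
    phase = 2 * math.pi * freq_hz * mismatch_samples / sample_rate
    return abs(2 * math.cos(phase / 2))

mismatch = 64  # samples of uncompensated plugin latency
for f in (344, 689, 1033):  # near the first notch, first peak, second notch
    print(f"{f} Hz -> level {combined_level(f, mismatch):.2f}")
```

Even a 64-sample mismatch carves deep notches into the spectrum, which is exactly the hollow “flanging” sound described above. The “dummy” plugin trick works because matching the delays collapses the phase difference to zero at every frequency.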
Latency is one of those “hidden traps” that can cause you trouble if you don’t know what to look for. Once you understand it, though, it is usually not hard to find and work around the problem. It’s just one more aspect of your system and plugins that you have to test for, learn about, and be aware of. Yeah, we didn’t have to worry about it back in the bad old analog days, but then again, there was a lot of really cool stuff that we now take for granted that was difficult or impossible with the old tools. There’s more that we have to watch out for, but it’s a fair price to pay for all the neat things we can now do.