
Is the explanation that signals take longer to propagate through the digital equipment? For instance, software synthesis is very slow compared to hardware synthesis.

3 Answers

I assume you are not alluding to a deeper philosophical discussion about information, power and entropy, but you are just interested in the practical aspects.

Very simply put, digital circuits need to measure input, digitize it, run it through some kind of processing and then transform the output into an electrical signal again. Digital circuits cannot directly manipulate analogue electrical signals. You inherently have extra latency because of signal conversion.
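
To make that point concrete, here is a minimal sketch in C that just adds up a latency budget for such a chain; every timing figure is an assumption for illustration, not a number from this answer.

```c
/* Rough latency budget for a digital signal path.
 * All figures below are assumed, illustrative values. */
#include <stdio.h>

int main(void) {
    double adc_conversion_us = 10.0;  /* assumed ADC conversion time      */
    double processing_us     = 25.0;  /* assumed processing time          */
    double dac_settling_us   = 5.0;   /* assumed DAC output settling time */

    double total_us = adc_conversion_us + processing_us + dac_settling_us;
    printf("end-to-end latency: %.1f us\n", total_us);

    /* An analogue path has no conversion stages, so its latency is only
     * the propagation delay of the components themselves. */
    return 0;
}
```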

You can stop reading here if this answered your question.

From a more philosophical/physical point of view, in almost all circuits you are not actually trying to manipulate electrical energy (that is what power electronics does); you are trying to manipulate information. In that case, it is technically not at all true that analogue is faster than digital. Why? Well, analogue signal paths are non-orthogonal information processors: there is no such thing as a perfect opamp or a perfect buffer, and everything has parasitic effects that you need to filter or otherwise get rid of. Especially at very high speeds, it becomes a real problem even to build a wire that reliably transfers a voltage. Digital processing decouples the electrical aspect from the information: after it has digitized its inputs, the signal exists as a very pure form of information. You can then manipulate the information without having to think about its electrical nature, and only in the end stages do you need to convert it back to an analogue state.

Even though you are penalized with two conversion stages, between your ADC and DAC you can employ many processing tricks to speed things up and usually vastly surpass the performance of any purely analogue signal processor. A great example of this is the revolution of digital modems in cell phones, which now operate very near the theoretical limit of information processing (tens of pJ/bit energy requirements), whereas not very long ago purely analogue GSM modems required orders of magnitude more silicon area and, I think, 5 or 6 orders of magnitude more processing energy.


Digital processes inherently add a certain amount of latency, since an event that happens between two clock cycles can't be processed until the next one. To avoid problems with events that happen very close to clock-cycle boundaries, things are often designed so that events won't take effect until the second clock cycle after them: deciding quickly whether an event occurred before or after a clock-cycle boundary is often surprisingly difficult, even if close calls could safely be decided either way, and being able to postpone the decision for an extra clock cycle makes things much easier. That's usually only a small part of the latency observed in many digital systems, however.
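
As an illustration of that "second clock cycle" behaviour, here is a toy C model of the common two-register synchronizer pattern; the structure and names are my own, and real metastability is far messier than this sketch suggests.

```c
/* Toy model of a two-flip-flop synchronizer: an asynchronous input only
 * becomes visible to the rest of the logic on the second clock edge after
 * it changes. Purely illustrative. */
#include <stdio.h>

int main(void) {
    int async_in[8] = {0, 0, 1, 1, 1, 1, 1, 1}; /* input level seen at each clock edge */
    int ff1 = 0, ff2 = 0;                       /* the two synchronizer registers      */

    for (int edge = 0; edge < 8; edge++) {
        ff2 = ff1;            /* second stage takes the first stage's pre-edge value */
        ff1 = async_in[edge]; /* first stage samples the asynchronous input          */
        printf("edge %d: input=%d  synchronized output=%d\n",
               edge, async_in[edge], ff2);
    }
    /* The input changes between edges 1 and 2; ff1 captures it at edge 2,
     * and ff2 (what the rest of the logic sees) only goes high at edge 3,
     * i.e. the second clock edge after the event. */
    return 0;
}
```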

A bigger factor in digital-system latency revolves around the fact that, for a variety of reasons, many systems are able to process large chunks of data more efficiently than small ones. For example, while it would be possible to record a 44.1 kHz stereo audio data stream by interrupting the processor 88,200 times/second, that would require that the processor stop whatever it was doing 88,200 times/second, save all its registers, switch to the interrupt context, grab the sample, switch back, etc. Even if interrupt entry and exit take only a microsecond each, the system would be spending nearly 18% of its time entering and exiting the interrupt rather than doing anything useful. If the system were to instead use hardware to buffer groups of 512 samples (256 from each channel) and notify the processor when each group was ready, that overhead could be cut by more than 99.5%, a major savings.
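
The arithmetic behind that comparison can be checked with a short sketch, using the answer's assumed one microsecond each for interrupt entry and exit:

```c
/* Per-sample interrupts vs. block-buffered interrupts for 44.1 kHz stereo
 * audio (88,200 samples/s), assuming ~1 us each for interrupt entry and
 * exit and a 512-sample (256 per channel) hardware buffer. */
#include <stdio.h>

int main(void) {
    double samples_per_sec  = 88200.0;
    double overhead_per_irq = 2e-6;    /* 1 us entry + 1 us exit */

    double per_sample = samples_per_sec * overhead_per_irq;           /* fraction of CPU time */
    double per_block  = (samples_per_sec / 512.0) * overhead_per_irq; /* one IRQ per 512 samples */

    printf("per-sample interrupts: %.1f%% of CPU time\n", per_sample * 100.0);
    printf("per-block interrupts:  %.3f%% of CPU time\n", per_block * 100.0);
    printf("overhead reduction:    %.2f%%\n", (1.0 - per_block / per_sample) * 100.0);
    return 0;
}
```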

Note that while taking groups of 256 samples per channel might not sound like much of a delay (it's about 6 ms), if the signal passes through multiple devices and each one induces such a delay, the delays can add up. Further, if any of the stages the signal passes through use any sort of variable time-sharing, the delays may be variable. Passing real-time audio data through a channel that sometimes had a longer delay than at other times would cause a noticeable "warbling" or "garbling" every time the delay changed. To prevent that, some systems tag blocks of audio data with a timestamp indicating when they were captured, and have the final recipient of the digital data, which will convert it back to analog form, hold it until a certain amount of time has elapsed since capture. If the final recipient delays it until a second after its capture, then variations in the delay at different parts of the journey won't affect the output unless they total more than a second. If one figures that short random delays in transmission will be frequent but longer delays will be rare, increasing the delay before the final recipient outputs the audio will reduce the frequency of audible disruptions, but will also mean that the sound won't come out as soon as it otherwise could have.
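
Here is a minimal sketch of that timestamp-and-hold scheme; the structure, names and the fixed one-second delay are illustrative assumptions, not a specification.

```c
/* Playout (jitter) buffer sketch: each audio block carries its capture
 * timestamp, and the final stage refuses to play it until a fixed interval
 * has elapsed since capture, so any variable delays shorter than that
 * interval never disturb the output. Illustrative only. */
#include <stdio.h>

#define PLAYOUT_DELAY_MS 1000  /* the answer's example: hold for one second */

typedef struct {
    long capture_time_ms;  /* when the block was captured */
    /* audio samples would live here */
} audio_block;

/* Returns 1 if the block may be converted back to analog now. */
int ready_to_play(const audio_block *blk, long now_ms) {
    return now_ms - blk->capture_time_ms >= PLAYOUT_DELAY_MS;
}

int main(void) {
    audio_block blk = { .capture_time_ms = 0 };
    long check_times_ms[] = { 120, 600, 999, 1000 };  /* assumed arrival/check times */

    for (int i = 0; i < 4; i++) {
        printf("t=%4ld ms: %s\n", check_times_ms[i],
               ready_to_play(&blk, check_times_ms[i]) ? "play" : "hold");
    }
    return 0;
}
```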


In addition, digital systems tend to be clocked, in effect quantizing time: digital events don't propagate until the next clock edge.
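
A tiny numeric illustration of that quantization, with an arbitrarily chosen clock period:

```c
/* Time quantization by a clock: an event is not acted on until the next
 * clock edge, so it waits anywhere from zero up to one full period.
 * The 10 ns period and 23.7 ns event time are arbitrary example values. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double period_ns = 10.0;
    double event_ns  = 23.7;  /* event arrives between the 20 ns and 30 ns edges */

    double next_edge_ns = ceil(event_ns / period_ns) * period_ns;
    printf("event at %.1f ns is first seen at the %.1f ns edge (%.1f ns added latency)\n",
           event_ns, next_edge_ns, next_edge_ns - event_ns);
    return 0;
}
```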

  • Technically, and I'm really entering philosophical territory here, digital systems are just systems that represent data in a digital fashion and don't necessarily need to be clocked. All processors use clocking, but e.g. FPGAs can be configured as purely combinational (unclocked) ALUs that operate directly on their operands. The only reasons you can't pump data in at infinite speed are propagation delay, skew and transistor switching speed.
    – user36129
    Commented Jul 8, 2013 at 18:26
