Friday, June 8, 2012

Resolution needs more bits

Resolution is not "signal to noise ratio (SNR)."  Resolution refers to something we can't easily measure directly in analog systems, but can only infer.  Analog amplifiers have the potential to reproduce every voltage level from zero to maximum, subject to limitations (noise, distortion, etc.) which are additive in nature.  If the world were truly continuous, this would mean they could reproduce an infinite number of potential voltage levels, for between any two levels we could specify, there would be another.

It is already known that we can hear a signal even when it is embedded in noise that is equally loud, so long as there is some other characteristic we can use to distinguish the signal from the noise, such as its frequency.  If the noise were pure Gaussian noise and the signal a pure tone, it is easy to see how this could be done electronically as well as by ear.  I can't think of a general rule which would describe the limits of our ability to do this.  For quite some time, it has been claimed that we can still hear the signal even if the noise is 15dB louder, but if our hearing were as good as possible for any acoustic sensor, and we had some pre-knowledge of either the signal or the noise, the noise could be much louder still relative to the signal.
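As a rough illustration, here is a minimal sketch (assuming Python with numpy, and noise 15dB louder than the tone, per the figure above) of how a simple spectrum analysis digs a pure tone out of Gaussian noise: the tone's energy piles up in one frequency bin while the noise spreads across all of them.  This isn't meant as a model of hearing, just a demonstration that the electronic version of the trick is straightforward.

import numpy as np

fs = 48000                                        # sample rate, Hz
t = np.arange(fs) / fs                            # one second of samples
tone = np.sin(2 * np.pi * 1000 * t)               # unit-amplitude 1kHz tone
noise_rms = (1 / np.sqrt(2)) * 10 ** (15 / 20)    # noise RMS 15dB above the tone's RMS
x = tone + np.random.normal(0.0, noise_rms, fs)   # tone buried in the louder noise

spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(fs, d=1 / fs)
print("strongest bin at", freqs[np.argmax(spectrum)], "Hz")   # reliably ~1000 Hz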

Digital audio systems have been designed to have SNR nearly as good as the best possible analog equipment, and far better than most, with a potential 96dB SNR at 16 bits.  Perfectionists like me have sought out 24 bit digital audio systems with a potential 144dB SNR, which is better than the best available analog amplifiers.  That has to be good enough, right?
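For reference, the usual textbook figure for an ideal quantizer driven by a full-scale sine wave is about 6.02N + 1.76 dB; dropping the 1.76 dB term gives the rounder 6-dB-per-bit numbers (96dB, 144dB) quoted above:

def quantization_snr_db(bits, full_scale_sine=True):
    # 6.02 dB per bit, plus 1.76 dB if the reference is a full-scale sine wave
    return 6.02 * bits + (1.76 if full_scale_sine else 0.0)

for n in (16, 24):
    print(n, "bits:", round(quantization_snr_db(n, False), 1), "dB,",
          "or", round(quantization_snr_db(n), 1), "dB against a full-scale sine")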

Well, no, not if the goal is to have the same resolution as analog systems.  I can't prove that we need this extra resolution, but I have some suspicions that we do, and I think it's interesting to consider what that kind of performance would require.

The answer seems to be that we need enough bits to encode resolution down to some appropriate quantum level.  Assuming the world is like that described by quantum mechanics, and guessing an appropriate quantum level to be about 10^-33 volt, the number of bits required to cover 0-1V would be about 110 (roughly 3.32 bits per decimal digit x 33 digits).
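Here is the arithmetic behind that bit count, with the caveat that the 10^-33 volt "quantum level" is my own guess, not an established physical constant:

import math

full_scale = 1.0     # volts
quantum = 1e-33      # volts, the guessed smallest meaningful step
levels = full_scale / quantum
bits = math.log2(levels)                                    # = 33 * log2(10)
print(round(bits, 1), "bits")                               # about 109.6, call it 110
print(round(math.log2(10), 2), "bits per decimal digit")    # about 3.32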

I can imagine a 24 bit encoder expanded to full 110 bit capacity.  Alternatively, even with off-the-shelf parts, one could cascade multiple 24 bit encoders.  Suppose a successive approximation method is used; then we could simply make sure that 110 iterations of successive approximation are done.  Now we can't prove we have 110 bit accuracy, but I wasn't so much worried about accuracy as resolution.
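For concreteness, here is a minimal sketch of successive approximation with an idealized comparator; each iteration resolves one more bit, so the number of iterations sets the resolution of the code word (the function names and the perfect comparator are illustrative assumptions, not a converter design):

def successive_approximation(compare, bits, full_scale=1.0):
    """compare(level) returns True when the input voltage is at or above level."""
    code = 0
    for i in range(bits):
        trial = code | (1 << (bits - 1 - i))       # tentatively set the next bit
        level = trial * full_scale / (1 << bits)   # threshold that bit pattern represents
        if compare(level):
            code = trial                           # keep the bit, otherwise drop it
    return code

# a perfectly known "input voltage" stands in for the analog comparator here
vin = 0.6180339887
code = successive_approximation(lambda level: vin >= level, bits=24)
print(code / (1 << 24))    # reconstructed value, within one step of vin

Run with bits=110, the loop happily produces a 110 bit code word; whether the analog front end really resolves that finely is the accuracy question set aside above.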

Now such a digital encoder would probably not perform as well as an ordinary 24 bit one with regard to signal to noise ratio.  But analog systems often don't either.

I say not to worry if the SNR isn't improving, or even gets slightly worse.

So far I've only been considering quantization accuracy.  What if we consider time, making the sampling interval as small as possible?  Well, that gets us into trouble very quickly.  It is clear we have no hope of digitally encoding at something like a quantum rate (a sample every 10^-30 sec), and we would have no hope of storing so much data either.
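A quick back-of-the-envelope calculation shows why, assuming the 10^-30 second interval guessed above and 110 bit samples:

samples_per_second = 1e30          # one sample every 10^-30 sec
bits_per_sample = 110
channels = 2
bytes_per_second = samples_per_second * bits_per_sample * channels / 8
print(f"{bytes_per_second:.1e} bytes per second")   # about 2.8e31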

My guess is that, for us, amplitude accuracy is more important than timing.  I've generally felt that a 96kHz sampling rate is sufficient.  For what it's worth, audio research published in JAES has claimed that jitter, for example, isn't audible until it reaches hundreds of nanoseconds, about 1000 times worse than the level at which modern digital equipment performs. 

A 110bit/96kHz stereo datastream would require only about 2.3 times as much data as a currently used 24bit/192kHz datastream.  It would be quite feasible to implement, if you were making digital converters anyway.  A major problem would be fighting off the nags who say it isn't necessary and has no provable benefit.
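The data-rate comparison works out as follows (stereo in both cases):

proposed = 110 * 96_000 * 2    # bits per second, 110 bit stereo at 96kHz
current = 24 * 192_000 * 2     # bits per second, 24 bit stereo at 192kHz
print(proposed, "vs", current, "bits per second")
print(round(proposed / current, 2), "times as much data")   # about 2.29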

