This week, a more mathematical topic. Some time ago, we—friends and I—were discussing the fidelity of various signals, and how many bits were needed for an optimal digitization of a signal, given known characteristics such as spectrum and signal-to-noise ratio.

Indeed, at some point, when adding bits, you only add more precision to represent the noise in the signal. There's a rule of thumb that says that for every bit you add, you can represent a signal with about 6 dB more signal-to-noise ratio. Let me show you how to derive such a result.

The SNR, or signal-to-noise ratio, is a measure that compares the power of the signal to the power of the noise. Both amplitudes are squared (by the very definition of power). We therefore have:

$$\mathrm{SNR}=\frac{P_{\text{signal}}}{P_{\text{noise}}}=\left(\frac{A_{\text{signal}}}{A_{\text{noise}}}\right)^{2}$$

where $A$ stands for amplitude and $P$ for power. A frequent way to map the SNR onto a (somewhat linear) perceptual scale is the decibel, and the SNR is often expressed in decibels:

$$\mathrm{SNR}_{\mathrm{dB}}=10\log_{10}\frac{P_{\text{signal}}}{P_{\text{noise}}}=20\log_{10}\frac{A_{\text{signal}}}{A_{\text{noise}}}$$

I usually use $\ln$ for the natural, base $e$, logarithm, $\lg$ for base 2 logarithms, most often used in computer science, and $\log$ for base 10 logarithms, but to make sure it is unambiguous, I'll write $\log_{10}$ to make clear it is the base 10 logarithm. So, back to the SNR: when we measure the amplitude of the signal, we really mean its *average* amplitude, and likewise for the error. We can rewrite the SNR equation as:

$$\mathrm{SNR}_{\mathrm{dB}}=20\log_{10}\frac{E[A_{\text{signal}}]}{E[A_{\text{noise}}]}$$

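As a quick numeric sanity check (a Python sketch of my own, not from the original derivation), here are the two equivalent decibel formulas, one in terms of power and one in terms of amplitude:

```python
import math

def snr_db_from_power(p_signal: float, p_noise: float) -> float:
    """SNR in decibels, computed from signal and noise power."""
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal: float, a_noise: float) -> float:
    """SNR in decibels, computed from (average) amplitudes.

    Power is amplitude squared, so the exponent 2 inside the log
    becomes a factor of 2 in front of it: 10 -> 20.
    """
    return 20 * math.log10(a_signal / a_noise)

# A signal with 1000x the noise amplitude sits 60 dB above it:
print(snr_db_from_amplitude(1000.0, 1.0))    # 60.0
print(snr_db_from_power(1000.0**2, 1.0**2))  # 60.0 as well
```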
OK, let us derive that 1-bit result. Let us first suppose that we are interested in the PSNR (peak signal-to-noise ratio) of an $n$-bit signal, where only the last bit is corrupted. Considering the PSNR lets us throw away the expectations (the $E[\cdot]$ in the formulæ) and write:

$$\mathrm{PSNR}=20\log_{10}\frac{2^{n}-1}{1}=20\log_{10}(2^{n}-1)$$

because the maximum value of an $n$-bit integer is not $2^{n}$ but $2^{n}-1$, and because we suppose, as a simplification, that the last bit is always wrong, which contributes a (squared) error of $1^{2}=1$. So, simplifying the previous formula, we have:

$$20\log_{10}(2^{n}-1)=20\log_{10}\bigl(2^{n}(1-2^{-n})\bigr)=20\,n\log_{10}2+20\log_{10}(1-2^{-n})\approx 20\,n\log_{10}2\approx 6.02\,n$$

because the second term, $20\log_{10}(1-2^{-n})$, goes to zero *very* rapidly as $n$ grows. Already, $n=8$ gives $20\log_{10}(1-2^{-8})\approx -0.034$ dB. Therefore, it is true that adding a bit (growing $n$ by one) adds (at least) about $20\log_{10}2\approx 6.02$ dB to the signal. Using this relation, it is now easier to find how many bits you need to encode a signal with sufficient precision given the amount of noise embedded in it.
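A minimal Python check of the simplification above (my own verification, not part of the original post), tabulating the exact PSNR, the roughly 6.02 dB per bit approximation, and the rapidly vanishing residual term for a few common bit depths:

```python
import math

for n in (8, 12, 16, 24):
    exact  = 20 * math.log10(2**n - 1)   # exact PSNR of an n-bit signal
    approx = 20 * n * math.log10(2)      # ~6.02 dB per bit
    resid  = 20 * math.log10(1 - 2**-n)  # the second term, shrinking fast
    print(f"n={n:2d}  exact={exact:8.3f} dB  "
          f"approx={approx:8.3f} dB  residual={resid:.6f} dB")
```

Already at $n=8$ the residual is about $-0.034$ dB, and it only shrinks from there, so the approximation is excellent for any realistic bit depth.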

(Conversely, you now know that when a sound card touts a given SNR, dividing it by about 6 dB tells you how many bits of restitution it really offers, even if it claims to have 24-bit DACs.)
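Inverting the rule gives the effective resolution behind a quoted SNR figure. A small sketch (the 96 dB input is just an illustrative value, not a claim about any particular card):

```python
import math

def effective_bits(snr_db: float) -> float:
    """Number of bits implied by a quoted (P)SNR, at ~6.02 dB per bit."""
    return snr_db / (20 * math.log10(2))

# e.g. a card touting 96 dB of SNR delivers about 16 usable bits,
# regardless of the nominal width of its DACs:
print(effective_bits(96.0))
```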

However, that derivation holds only for the PSNR, which doesn't take all of the signal's characteristics into account. A much better approximate measure is the RMS, for *root mean square*—not for the eccentric hippie. The RMS is a more realistic approximate measure for undulatory phenomena, as it approximates the average power of the signal. The RMS of a function $f$ on the interval $[a,b]$ is given by:

$$\mathrm{RMS}(f)=\sqrt{\frac{1}{b-a}\int_{a}^{b}f(t)^{2}\,dt}$$

which takes different forms depending on the function $f$. Taking the sine as a “typical function” (which remains entirely debatable), on a complete period (that is, on the interval $[0,2\pi]$) we have:

$$\sqrt{\frac{1}{2\pi}\int_{0}^{2\pi}(a\sin t)^{2}\,dt}=\sqrt{\frac{a^{2}}{2\pi}\,\pi}=\frac{a}{\sqrt{2}}$$

so the RMS of a sine of amplitude $a$ is $a/\sqrt{2}$. Let us plug this value, with $a=2^{n}-1$, into the previous derivation:

$$\mathrm{SNR}=20\log_{10}\frac{2^{n}-1}{\sqrt{2}}=20\,n\log_{10}2+20\log_{10}(1-2^{-n})+20\log_{10}\frac{1}{\sqrt{2}}$$

which leads to the expected result. Since $20\log_{10}\frac{1}{\sqrt{2}}\approx -3.01$, the last term is only a constant offset of about $-3$ dB, so the final result is still that each additional bit yields an increase of (at least) $20\log_{10}2\approx 6.02$ dB.
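Both halves of the RMS derivation are easy to verify numerically. The sketch below (Python, my own check rather than the post's) confirms that the RMS of a unit sine is $1/\sqrt{2}$, and that the RMS-based SNR of an $n$-bit sine against a unit-RMS error lands near $6.02\,n-3.01$ dB:

```python
import math

# RMS of sin(t) over one full period, by brute-force sampling:
N = 100_000
rms = math.sqrt(sum(math.sin(2 * math.pi * k / N) ** 2 for k in range(N)) / N)
print(rms, 1 / math.sqrt(2))  # both ~0.70711

# RMS-based SNR of an n-bit sine against a unit-RMS error:
n = 16
snr = 20 * math.log10((2**n - 1) / math.sqrt(2))
print(snr, 6.02 * n - 3.01)   # ~93.32 dB either way
```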

This result bugged me for a long time until I figured out how to derive it by myself. As you can see, there's nothing to it: we start from hypotheses (such as the signal being sinusoidal) and apply the decibel formula quite mechanically.

Reference: http://hbfs.wordpress.com/2008/12/09/deriving-the-1-bit-6-db-rule-of-thumb/

Specialized material(?) that you would have to pay for in Korea is lying around everywhere on the foreign web.

It makes me feel the importance of English all over again, and reminds me that the world really is a big place.