Talk:Quantization error


"Many physical quantities are actually quantized by physical entities. Examples of fields where this limitation applies include electronics (due to electrons), optics (due to photons), and chemistry (due to molecules). This is sometimes known as the "quantum noise limit" of systems in those fields. This is a different manifestation of "quantization error," in which theoretical models may be analog but physics occurs digitally. Around the quantum limit, the distinction between analog and digital quantities vanishes."

Is that really true, though? Charge is quantized but voltage isn't. — Omegatron 15:53, 8 October 2005 (UTC)

Is there a significant difference between round-off error vs. quantization error, or should they be merged? As far as I know, "round-off error" is applied more to mathematical calculations that *could* be calculated to higher precision, but were rounded off to save time, while "quantization error" is usually applied to actual measurements of the real world (something that cannot possibly be calculated). I think this is only a minor difference that could be mentioned in a sentence or two, in a combined article. --DavidCary 03:54, 18 October 2005 (UTC)

Well, they are similar, but I don't think they're the same thing. Round-off error inherently uses rounding, while quantization error can be rounding, truncation, etc. (Not sure what it is for floating-point numbers.) Also quantization error is generally found in discussions about sampling theory and the error between internal representation and real-life signals, while round-off error is generally about computations on numbers.
In other words, in the signal processing realm, quantization error is added to a signal when it is first digitized, and round-off error is added as you do computations on that already-digital signal. Maybe they are the same thing with different names, but I think they are different enough to have different articles. Maybe, maybe not. — Omegatron 14:14, 18 October 2005 (UTC)
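A minimal numerical sketch of that distinction (my own illustration, not from the article; the step size and the computation are arbitrary):

```python
# Quantization error: introduced when a continuous value is first digitized.
# Round-off error: introduced when computations on the already-digital values
# are themselves forced back onto the same grid.
import numpy as np

delta = 0.1                                    # quantization step (arbitrary)
x = np.random.uniform(-1, 1, 10)               # "analog" samples

xq = delta * np.round(x / delta)               # digitization
quantization_error = xq - x

y_exact = 0.5 * xq                             # an exact computation on digital data
y_stored = delta * np.round(y_exact / delta)   # result stored back on the grid
roundoff_error = y_stored - y_exact

print("max |quantization error| =", np.abs(quantization_error).max())  # <= delta/2
print("max |round-off error|    =", np.abs(roundoff_error).max())      # <= delta/2
```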

merge complete


See discussion at Talk:Quantization_noise#Don.27t_Merge.

Merge complete.

Copying discussion from quantization noise article (now merged) to this talk page.


"The noise is additive and independent of the signal when the number of bits   is greater than 4, that is, more than 16 digitizing levels,  ."

Why?
Also, the letter L is used for both "load" and "number of levels" — Omegatron 14:07, August 30, 2005 (UTC)
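A rough numerical illustration of the quoted claim (my own sketch, not from the old article; the test tone and bit depths are arbitrary), showing how the quantization error decorrelates from the signal as the number of bits grows:

```python
import numpy as np

t = np.linspace(0, 1, 100_000, endpoint=False)
x = np.sin(2 * np.pi * 7.3 * t)            # full-scale test tone in [-1, 1]

for bits in range(2, 13):
    delta = 2.0 / 2 ** bits                # step size over the [-1, 1] range
    xq = delta * np.round(x / delta)       # uniform mid-tread quantizer
    err = xq - x
    corr = np.corrcoef(x, err)[0, 1]       # signal-error correlation
    print(f"{bits:2d} bits: |corr(signal, error)| = {abs(corr):.4f}")
```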

Derivation


Trying to find a derivation for the quantization noise power equations and subsequent SNR per bit resolution values. I can't find the type of derivation I am thinking of online. Some scribbled, incoherent notes from class (I was probably asleep, and can't find the source I copied them from):

Quantization error e = Q(x) − x, with quantization step Δ.

Error ranges:

                 Signed magnitude                             Two's complement
Truncation       −Δ < e ≤ 0 for x ≥ 0;  0 ≤ e < Δ for x < 0   −Δ < e ≤ 0
Round-off        −Δ/2 < e ≤ Δ/2                               −Δ/2 < e ≤ Δ/2

LSB: Δ = 2^−(b−1) for a b-bit signed fractional word.

Mean squared error or quantization noise (round-off, error uniform over a step):

σ² = (1/Δ) ∫[−Δ/2, Δ/2] e² de = Δ²/12

with a note making sure I remember that it's not always Δ²/12. The external link I included shows a more thorough derivation of q²/12. — Omegatron 06:04, 17 October 2005 (UTC)
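A quick numerical check of the Δ²/12 result (a sketch of my own; the step size and the uniform test signal are arbitrary, and a rounding quantizer is assumed):

```python
import numpy as np

delta = 0.25
x = np.random.uniform(-4, 4, 1_000_000)   # signal exercising many quantizer steps
e = delta * np.round(x / delta) - x       # round-off quantization error

print("measured mean squared error:", np.mean(e ** 2))
print("delta**2 / 12              :", delta ** 2 / 12)
```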


I keep forgetting that Google Print exists. [1]

Thesis


I have previously derived the SNR formulas in my thesis. Sorry for the unconventional symbols: I use Q for the LSB (in volts), N for the number of resolution bits, and m = 2^N for the number of levels. umax and umin are the max and min voltages corresponding to the quantized interval of Q·m.

The quantization error is assumed to be uniformly distributed over one step, giving the noise power:

Pnoise = (1/Q) ∫[−Q/2, +Q/2] u² du = Q²/12

where Q is the quantization step (often called LSB):

Q = (umax − umin) / m

In the case of a uniformly distributed signal (ramp, triangle, etc.) the signal power is

Psignal = (1/(umax − umin)) ∫[umin, umax] (u − (umax + umin)/2)² du = (umax − umin)²/12 = Q²·m²/12

where I use m = 2^N for the number of levels (it does not have to be a power of two!). And thus the SNR is

SNR = Psignal / Pnoise = (Q²·m²/12) / (Q²/12) = m² = 2^(2N)

In decibels this is

SNR(dB) = 10·log10(2^(2N)) = 20·N·log10(2) ≈ 6.02·N dB

Next case: a sine wave test tone. This one has more power than the ramp, namely:

Psignal = ((umax − umin)/2)² / 2 = (umax − umin)²/8 = Q²·m²/8

And the SNR becomes

SNR = (Q²·m²/8) / (Q²/12) = (3/2)·m² = (3/2)·2^(2N)

in decibels:

SNR(dB) = 10·log10((3/2)·2^(2N)) ≈ 6.02·N + 1.76 dB

/ Johan Stigwall

Wonderful. Thank you. — Omegatron 14:44, 6 December 2005 (UTC)

How about adding that to the page? This is a pretty fundamental and relevant equation to people who use ADCs. Guerberj (talk) 16:49, 14 May 2010 (UTC)
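A small numerical cross-check of the two SNR expressions derived above, in Johan's symbols (N bits, m = 2^N levels, step Q over [umin, umax]); the particular values of N, umin and umax are arbitrary and only for illustration:

```python
import numpy as np

N = 10
m = 2 ** N
umin, umax = -1.0, 1.0
Q = (umax - umin) / m                      # one LSB

def quantize(u):
    # uniform quantizer with m levels spanning [umin, umax]
    k = np.clip(np.floor((u - umin) / Q), 0, m - 1)
    return umin + (k + 0.5) * Q

def snr_db(u):
    e = quantize(u) - u
    return 10 * np.log10(np.var(u) / np.mean(e ** 2))

t = np.linspace(0, 1, 1_000_000, endpoint=False)
ramp = umin + (umax - umin) * t                              # uniformly distributed signal
sine = (umax - umin) / 2 * np.sin(2 * np.pi * 131.7 * t)     # full-scale sine, centred

print("ramp SNR:", round(snr_db(ramp), 2), "dB  vs 6.02*N        =", round(6.02 * N, 2))
print("sine SNR:", round(snr_db(sine), 2), "dB  vs 6.02*N + 1.76 =", round(6.02 * N + 1.76, 2))
```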

Don't Merge


Quantisation error is not Quantisation noise - it's a complicated topic and worthy of several pages to make the distinctions needed, especially when weighting and dither are brought in. --Lindosland 16:18, 5 February 2006 (UTC)

Well they're not even close to a page right now, so I figured the subjects were close enough to merge into one topic. — Omegatron 01:57, 13 March 2006 (UTC)

Since quantization noise is entirely caused by quantization error, I'm in favor of merging. How can you possibly describe quantization noise without first describing quantization error? And if this article describes both, what is the purpose of another description of either one alone? --68.0.120.35 01:05, 25 May 2007 (UTC)

Quantization noise is a misnomer, and really just a model. Quantization noise should redirect to quantization error, and the article on quantization error should explain the "noise" model, under what conditions it is useful, etc.

Error In Example?


I think there is a discrepancy between the formula for sawtooth SNR and the example given. The formula is approximated as 6.02 · n, where n = 4 for 16-bit audio. However, the example provided comes up with the answer 6.02 · 16 = 96.3 dB, which is not in agreement with the formula.

One stinky bum 01:35, 14 January 2007 (UTC)

the right formula


Which of the following 4 formulas is the right way to calculate the quantization noise relative to a full-scale sinewave?

The "quantization noise" article currently gives the formulas

  • 1 : SNR ≈ 6.02·N dB
  • 2 : SNR = 6.02·N + 1.76 dB

(Are those 2 formulas contradictory?)

The article "Creating the Digital World: Step One" by Rob Howald gives a formula with one additional term

  • 3 : SQNR (dB) = ( 6.02 N + 10 log [ fsample/ 2 fbandwidth] + 1.76 ) dB

(The formulas (2) and (3) would give the same answer in the case where fsample == 2 fbandwidth.)

Also, the SQNR page gives a formula that looks completely different:

  • 4 :  

Which is the right way to calculate the quantization noise relative to a full-scale sinewave?

--68.0.120.35 01:05, 25 May 2007 (UTC)
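As a quick sanity check on the statement above that formulas (2) and (3) agree when fsample == 2 fbandwidth, here is a short sketch (the bit depth and frequencies are arbitrary, and formula (2) is taken as the 6.02·N + 1.76 dB expression):

```python
import math

def sqnr_flat(N):
    # formula (2): full-scale sine, no oversampling term
    return 6.02 * N + 1.76

def sqnr_oversampled(N, fsample, fbandwidth):
    # formula (3): adds the processing gain from oversampling
    return 6.02 * N + 10 * math.log10(fsample / (2 * fbandwidth)) + 1.76

N = 12
print(sqnr_flat(N))                                         # 74.0 dB
print(sqnr_oversampled(N, fsample=96e3, fbandwidth=48e3))   # same: the log term is 0
print(sqnr_oversampled(N, fsample=384e3, fbandwidth=48e3))  # ~6 dB oversampling gain
```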


Why modulation?


I'm removing the first line of this article, which says that quantization error has to do with the conversion from PAM to PCM. If an analog signal is in a PAM form, there's already quantization error. --TedPavlic | talk 16:08, 18 February 2008 (UTC)

Spanish version


I would suggest translating the Spanish version: Ruido de cuantificación. --213.37.235.41 (talk) 19:12, 2 May 2008 (UTC)

Redirection


The link to quantization noise redirects to itself (= this page). Since I'm not a regular wiki user, I prefer just telling and letting someone "who knows" do it. 213.121.242.200 (talk) 22:46, 14 May 2008 (UTC)

Misleading figure


The graphical representation of quantization error does not take into account how the digital signal is reconstructed. Quantization error is the difference between the reconstructed signal and the band-limited original. --Kvng (talk) 19:20, 23 December 2010 (UTC)

You are wrong; only sampling-rate effects depend on signal reconstruction. The example seems to use an infinite sampling rate, so there is nothing to reconstruct. It is a bit misleading, however: such a signal is invalid; noise should be added prior to sampling. --90.179.235.249 (talk) 19:20, 19 February 2011 (UTC)
I did assume a finite sample rate. I can see how the figure is correct for a system with an infinite sample rate. An infinite sample rate is not practical, so I'm not convinced that this is a good illustration. --Kvng (talk) 03:12, 20 February 2011 (UTC)
It's an illustration of quantization error, not sample rate. --90.179.235.249 (talk) 23:21, 20 February 2011 (UTC)
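For what the figure is showing, a small sketch of my own (the step size and test tone are arbitrary): at a sample rate high enough to stand in for continuous time, the quantization error is simply the quantized waveform minus the original, bounded by ±Δ/2, and no reconstruction filter enters the picture.

```python
import numpy as np

fs = 1_000_000                       # very high sample rate, proxy for continuous time
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)      # original signal

delta = 2 ** -4                      # quantization step
xq = delta * np.round(x / delta)     # quantized waveform as drawn in the figure
err = xq - x                         # quantization error

print("max |error| =", np.abs(err).max(), "  delta/2 =", delta / 2)
```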