Talk:Additive synthesis/Archive 1


The latest "improvement" reverted.

I will certainly not be opposed to putting in some good references for the theory in the article, but I am just as certainly opposed to putting in incorrect mathematics. Clusternote, you made a common error. In the expression:

r(t)·cos(ω(t)·t + φ)

the parameter ω(t) is not the (angular) frequency merely because it is what multiplies t. A quantity is the frequency because it is the time rate of change of the internal phase. You take whatever is inside the cos() function and you differentiate it to get the instantaneous frequency. You do not take whatever is inside the cos() function and divide it by t to get the frequency. It might make no difference when ω is constant, but it makes a big difference when the instantaneous frequency varies in time. 70.109.180.242 (talk) 18:04, 7 January 2012 (UTC)
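A quick numeric illustration of this point (my own sketch, not from the original discussion; the sample rate and the linear frequency sweep are arbitrary assumptions): accumulating (summing) the instantaneous frequency gives the intended sweep, while multiplying the time-varying frequency by time does not.

```python
import numpy as np

Fs = 44100                      # sample rate in Hz (assumed)
n = np.arange(Fs)               # one second of samples
f = np.linspace(100, 200, Fs)   # intended instantaneous frequency: 100 Hz -> 200 Hz

# Correct: integrate (sum) the instantaneous frequency to build the phase.
phase = 2 * np.pi * np.cumsum(f) / Fs
x_good = np.cos(phase)

# Incorrect: multiply the time-varying frequency by time.
phase_bad = 2 * np.pi * f * n / Fs
x_bad = np.cos(phase_bad)

# The actual instantaneous frequency is the derivative of the phase argument.
f_good = np.diff(phase) * Fs / (2 * np.pi)      # stays between 100 and 200 Hz
f_bad = np.diff(phase_bad) * Fs / (2 * np.pi)   # ends near 300 Hz instead of 200 Hz
print(f_good[-1], f_bad[-1])
```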

I meant to add that some of the changes in prose that you made might very well be good. But removing the summations (which are the discrete-time way of integrating) and replacing them with the simpler multiplication was incorrect. (I think it was Einstein who said "A system should be as simple as possible, but no simpler.") 70.109.180.242 (talk) 18:10, 7 January 2012 (UTC)

Stop vandalism! This article does not contain the expression or formula modification you describe. Your story is not related to this article at all.--Clusternote (talk) 20:08, 7 January 2012 (UTC)

I've also made this error in the past, so I can understand how it slipped in by mistake. But we should keep the brackets that Clusternote added around the contents of the summations. Please assume good faith, both sides; I don't see anything to the contrary. Olli Niemitalo (talk) 16:45, 10 January 2012 (UTC)

Olli - can you clarify exactly which brackets you mean? Cheers. Chrisjohnson (talk) 17:04, 10 January 2012 (UTC)
 
as compared to
 
and
 
as compared to
 
Olli Niemitalo (talk) 17:40, 10 January 2012 (UTC)

Can this please be fixed (the Fourier series equation missing its brackets), as it is quite clearly wrong; in fact, the same, correct equation appears in the wiki article on Fourier series. You can also use   for clarity and to match the other article.

Done Olli Niemitalo (talk) 19:24, 12 January 2012 (UTC)

Stop reverting, 71.169.179.65

Stop reverting, 71.169.179.65. Reliable sources have already been added to most of this article, except for the statements tagged with [Citation needed].

I don't want to own this article. I only add reliable sources and correct various problems. You have no right to revert the article without providing any reliable sources. --Clusternote (talk) 17:48, 9 January 2012 (UTC)

Practice what you preach.
Even though I work for a living, I am willing to take a little time to discuss. But I will not let you (as best as I can) take over the article and turn it into your sole POV. 71.169.179.65 (talk) 17:57, 9 January 2012 (UTC)

Your opinions and POV always lack reliable sources. I always show and add reliable sources. You should not revert the article based on your POV.

Also, you should not revert any article without prior discussion. --Clusternote (talk) 18:03, 9 January 2012 (UTC)

Practice what you preach.
And I did add two cites from Julius Smith. But since you don't understand mathematics, you just dismiss them. 71.169.179.65 (talk) 18:06, 9 January 2012 (UTC)

Your repeated reverts, made without prior discussion or reliable sources, are equivalent to vandalism.

Practice what you preach.

If you want to keep your equations in the Additive synthesis#Theory section, you should show a reliable source for these very unusual equations. In particular, the equations in Additive synthesis#Non-harmonic signal seem inappropriate for Wikipedia, because their treatment of the time-varying frequency factor (in these equations it is time-averaged) lacks generality and accuracy. A private, unpublished algorithm is inappropriate for a Wikipedia article. --Clusternote (talk) 18:32, 9 January 2012 (UTC)

They're not my equations, but they are correct. I cited them with two articles from Julius Smith, and I told you why your equations (where you simply multiply the time-variant frequency by t) are wrong. You dismissed that with no understanding of what you are talking about. 71.169.179.65 (talk) 18:38, 9 January 2012 (UTC)

In the section Additive synthesis#Theory, some IP user posted incorrect equations in 2007. In his post in 2007, he introduced at least one inappropriate substitution:

  1. His stated, simple substitution was             k f0           →       fk[n],
     but his actual substitution was     (2 π k f0 / Fs)·n   →   (2 π / Fs) Σ_{i=1}^{n} fk[i]
     Note: In the discussion of the corresponding section, the context requires substituting the constant   k f0   with an inharmonic version (here, "inharmonic" means "arbitrary")   fk, or with a time-varying inharmonic version   fk[i].
     He declared a substitution of the constant   k f0   with the time-varying inharmonic version   fk[i].
     But his actual substitution,   (1/n) Σ_{i=1}^{n} fk[i],   implies a time average of the time-varying inharmonic version. (If he intended to extend the equations toward a "time-varying inharmonic" version, this averaging operation spoils their time-varying nature.)
     His stated substitution and his actual substitution contradict each other, and also contradict the context.

If YOU think it is correct and no further explanation is needed, probably YOU are the very person who posted these equations in 2007.

However, we don't need unpublished, unreliable personal opinion in the article. These equations should be deleted. --Clusternote (talk) 18:49, 9 January 2012 (UTC)

I took a quick look at the diff and that IP (207.something) is not me. They look similar but not identical to the current ones. And they are obviously correct. If you could do math, you would know this. They are precisely the discrete-time equivalent of what Julius Smith has published. Would you like me to give you another reason why your equations (I can get the diff, but we both know you tried to add them in the last couple of days) are incorrect? Consider what happens a second or two after the note begins and n = 44100 or 88200 or some big number like that. Then consider what happens to the phase when the instantaneous frequency changes a little. 71.169.179.65 (talk) 18:56, 9 January 2012 (UTC)

We don't need time-average here.

Red herring. The only sense of "time averaging" is that of the instantaneous frequency over one sample period. It doesn't change anything.

Also, I checked the cited sources; however, what you are saying is not found anywhere. Verification failed, and therefore these expressions should be deleted. --Clusternote (talk) 19:03, 9 January 2012 (UTC)

No you didn't. It's clearly referenced at [1] and [2]. And the latter reference is discrete-time. But the former reference is continuous-time and it translates directly to the discrete-time version when you substitute t = nT = n(1/Fs).
As a result, on those pages there is at least no clear explanation of the time-averaging operation on the frequency. If you think "it's clearly referenced", just quote it here.
Also, it should be explained in the article; however, you have not done so, even after repeatedly reverting the article. --Clusternote (talk) 20:39, 9 January 2012 (UTC)


You're not very good at telling the truth, are you Cluster? 71.169.179.65 (talk) 19:19, 9 January 2012 (UTC)

Your reply is a falsehood. "Time-average ... over one sample period" means Σ_{i=1}^{1} fk[i] = fk[1]. In such a case, time-averaging is not needed. --Clusternote (talk) 19:37, 9 January 2012 (UTC)

Falsehood by User:71.169.179.65

User:71.169.179.65 posted a false statement on this talk page:

  1. His/her expression, '"time averaging" is that of the instantaneous frequency over one sample period', means
     Σ_{i=1}^{1} fk[i] = fk[1],
     because there is only one "instantaneous frequency" in "one sample period".
     In that case, "time-averaging" is not needed at all. --Clusternote (talk) 19:52, 9 January 2012 (UTC)

Page protected

I've fully protected the article for three days. Please continue the discussion on the talk page and if you can't come to an agreement, ask for advice from an editor experienced in the area of mathematics. —Tom Morris (talk) 19:06, 9 January 2012 (UTC)

Thanks for your kind protection. --Clusternote (talk) 19:08, 9 January 2012 (UTC)

Can I step in and make a few comments? Firstly, "real-time sound synthesis" is definitely an application of "real-time computing". In the particular case of sound synthesis, the time limitation is usually the sample interval, or a block of them. Given that samples go out at (for example) 48 kHz, I have only a limited amount of time to calculate the next sample, about 21 µs in this case. This is the "real-time constraint" or "operational deadline" that the Realtime computing article talks about. If the sample isn't ready by then, you get a glitch. I don't know exactly what caused the row about that, but it's simple to clear up.
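For concreteness, the deadline arithmetic looks like this (my own back-of-the-envelope sketch; the 64-sample block size is an arbitrary assumption):

```python
Fs = 48000                    # output sample rate in Hz
per_sample = 1.0 / Fs         # ~20.8 microseconds to produce each sample
block = 64                    # a typical processing block size (assumed)
per_block = block / Fs        # ~1.33 milliseconds per block

print(f"{per_sample * 1e6:.1f} us per sample, {per_block * 1e3:.2f} ms per {block}-sample block")
```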

The equations given seem like pretty standard Fourier stuff. The first one given in each case expresses each harmonic as a sum of a sine and cosine, whereas the second expresses it with a variable phase. These are equivalent. There are probably a lot of very-slightly-different-but-equivalent ways of expressing these, but again, it shouldn't be causing this kind of conflict. It's simple to look up.

There seem to be a lot of demands for credentials too, so here goes; I am active in SDIY (http://www.electricdruid.com) and work writing firmware for synthesis applications (Example: http://www.modcan.com/bseries/ahdbdsr.html ) Electricdruid (talk) 20:31, 9 January 2012 (UTC)

The only person demanding credentials was Cluster, when he insisted that I create an account and that he wouldn't "waste time" with me without it. I have only been demanding competence and accuracy. There is a substantive difference when Cluster tried to "simplify" the equations for non-harmonic additive synthesis, where the instantaneous frequency of the partials may be time-varying, and it is a difference where we both have insisted the other is clearly wrong. There is no way we can both be correct about that. I have spelled it out to Cluster (and everyone else to see), both in what the general idea is which he dismissed as "Your story is not related to this article at all." (just like "real-time additive synthesis is not related to real-time computing", boy is this guy ignorant) and I also spelled out how his "simplified" equations are in error.
If you want, I can try to elucidate, but I really am tired of talking. I just don't want this article to decline further, which is why I took some effort to challenge an ignorant editor who fancies himself an expert when there were clear errors. I don't mind other people editing, just don't screw it up. And if you don't know something, leave it alone. 71.169.179.65 (talk) 06:57, 10 January 2012 (UTC)
Thanks for your help. Right now we are discussing the equations in the sub-section "Non-harmonic signal". There are basically two versions:
  1. a new, simple version
  2. the previous version posted in 2007
Which version do you think is more appropriate for Wikipedia? And how could this sub-section be improved? I look forward to your honest opinion... --Clusternote (talk) 21:19, 9 January 2012 (UTC)

My views on the "Non-harmonic signal" section:

  1. I believe the previous version (edit 470480517) to be mathematically correct, and that the representation of the phase term (2π/Fs) Σ_{i=1}^{n} fk[i] used in this version is supported by references [2] and [3].
  2. The equations in the new simple version (edit 470480676) are incorrect, since they use the form of equation for a constant-frequency sine wave, cos(2π f n/Fs + φ) (which is valid only if the frequency is constant in time), to describe a sine wave with a time-varying frequency. I do not believe that references [2] and [3] support this form of the equations.

This is not to say that the previous version (edit 470480517) is perfect; I think that it could profitably be shortened (the equivalent forms of the equation could perhaps be stated without showing exactly how each is derived from the others). Also, given the section title of "non-harmonic signal", the distinction between constant non-integer multiples of a constant fundamental frequency (partials as opposed to harmonics, in the sense of [[3]]) and signals with time-varying frequency could be made clearer. Mathematically though, the previous version (edit 470480517) is correct and 'new, simple version' is incorrect. Credentials: (since Electricdruid put his down...) Ph.D in mathematics, currently employed as a researcher in mathematics. Chrisjohnson (talk) 22:39, 9 January 2012 (UTC)

Hi, Chris. Thanks for your comment.
However, your long reply lacks mathematical proof. Mathematical correctness cannot be guaranteed by words alone, without proofs.

Regarding your observation:
  • What is the continuous version of these discrete equations?
  • And how is your continuous version converted to the discrete equations?
Please show your mathematical equations.

--Clusternote (talk) 03:12, 10 January 2012 (UTC)

See my comments in the 'Question to mathematicians' section Chrisjohnson (talk) 12:01, 10 January 2012 (UTC)

Basic mathematical foundation of additive synthesis, from Fourier series to non-harmonic synthesis

Harmonic tone

y[n] = a0[n]/2 + Σ_{k=1}^{K} [ ak[n]·cos(2π k f0 n / Fs) + bk[n]·sin(2π k f0 n / Fs) ]

or

y[n] = a0[n]/2 + Σ_{k=1}^{K} rk[n]·cos(2π k f0 n / Fs − φk[n])

where

y[n] is the output sample at discrete time n,
ak[n] = rk[n] cos(φk[n])
bk[n] = rk[n] sin(φk[n])
rk[n] = (ak[n]^2 + bk[n]^2)^(1/2) is the amplitude envelope of the k-th harmonic at discrete time n,
φk[n] is the instantaneous phase function of the k-th harmonic at discrete time n,
Fs is the sampling frequency,
f0 is the fundamental frequency of the waveform or the note frequency,
k f0 is the frequency of the k-th harmonic,
K is the number of harmonics; K < floor(Fs/(2 f0)),
K f0 is the frequency of the highest harmonic and is below the Nyquist frequency, Fs/2.

The DC term is normally not used in audio synthesis, so the a0[n] term can be removed. Introducing time-varying coefficients rk[n] allows the dynamic use of envelopes to modulate oscillators, creating a "quasi-periodic" waveform (one that is periodic over the short term but changes its waveform shape over the longer term).
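As a minimal sketch of the harmonic case just described (my own illustration; the fundamental, duration and envelope shape are arbitrary assumptions), each harmonic k·f0 gets its own slowly varying amplitude envelope rk[n]:

```python
import numpy as np

Fs = 44100                          # sampling frequency
f0 = 220.0                          # fundamental frequency in Hz
N = Fs                              # one second of output
n = np.arange(N)
K = int(Fs / (2 * f0))              # keep the highest harmonic below Nyquist

y = np.zeros(N)
for k in range(1, K + 1):
    r_k = np.exp(-3.0 * k * n / Fs) / k              # per-harmonic decaying envelope (arbitrary)
    y += r_k * np.cos(2 * np.pi * k * f0 * n / Fs)   # harmonic at k * f0
```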

Non-harmonic tone

Additive synthesis can also create non-harmonic sounds (which appear to be non-periodic waveforms within the time-frame given by the fundamental frequency) if the individual partials do not all have a frequency that is an integer multiple of the fundamental frequency. By replacing the k-th harmonic frequency, k f0, with a time-varying and general (not necessarily harmonic) frequency, fk[n] (the instantaneous frequency of the k-th partial at the time of sample n), the definition (removing the DC term) of the synthesized output would be:

y[n] = Σ_{k=1}^{K} [ ak[n]·cos((2π/Fs) Σ_{i=1}^{n} fk[i]) + bk[n]·sin((2π/Fs) Σ_{i=1}^{n} fk[i]) ]

or

y[n] = Σ_{k=1}^{K} rk[n]·cos((2π/Fs) Σ_{i=1}^{n} fk[i] − φk[n])

where

fk[i] is the instantaneous frequency of the k-th partial at the time of sample i.

If fk[n] = k f0, with constant f0, all partials are harmonic, the synthesized waveform is quasi-periodic, and the more general equations above reduce to the simpler equations at the top. For each non-harmonic partial, the phase term φk[n] can be absorbed into the instantaneous frequency term fk[n] by the substitution:

fk[i]   →   fk[i] − (Fs/2π)·(φk[i] − φk[i−1])

If that substitution is made, all of the φk[n] phase terms can be set to zero with no loss of generality (retaining the initial phase value at y[0]), and the expressions of non-harmonic additive synthesis can be simplified to

y[n] = Σ_{k=1}^{K} rk[n]·cos((2π/Fs) Σ_{i=1}^{n} fk[i] − φk[0]) .

If this constant phase term (at time n = 0) is expressed as

−φk[0] = (2π/Fs)·fk[0]

the general expression of additive synthesis can be further simplified:

y[n] = Σ_{k=1}^{K} rk[n]·cos(θk[n])

where

θk[n] = (2π/Fs) Σ_{i=0}^{n} fk[i] is the instantaneous phase of the k-th partial at discrete time n,

and

θk[n] = θk[n−1] + (2π/Fs)·fk[n] is the equivalent recursive form (a running phase accumulator).


The two citations in the first paragraph of "Inharmonic Partials" do not link to the citations that they name. It is unclear to me whether it is the name or the link that is incorrect.

Ross bencina (talk) 17:03, 13 January 2012 (UTC)

Question to mathematicians

Thanks for your review.
Regarding your observation:
  • What is the continuous version of these discrete equations?
  • And how is your continuous version converted to the discrete equations?
Please show your mathematical equations. --Clusternote (talk) 10:07, 10 January 2012 (UTC)
P.S. In my view, without a clear explanation of these points in the article, the complicated equations above (which use a time-averaging operation to erase the phase term) are not justified. --Clusternote (talk) 11:01, 10 January 2012 (UTC)
Clusternote: The sum is not a time-averaging operation -- it's a time-integrating operation. The disputed term is: (1/Fs) Σ_{i=1}^{n} ωk[i]. This is the sum from i = 1 (the start time) to i = n (the current time) of ωk[i] (the angular frequency at each sample) divided by the sampling rate Fs. This sum is the discrete version of the integral term in the third equation in reference 3 [[4]], which reads ∫_0^t ωk(τ) dτ. We can prove that the discrete form of the equation follows from the continuous form in reference 3 by defining the discrete angular frequency at sample i, ωk[i], to be the mean of the continuous angular frequency over one sample, i.e. the integral with respect to time of the continuous angular frequency ωk(t) over the duration of sample i, divided by the length of a sample. Chrisjohnson (talk) 12:01, 10 January 2012 (UTC)
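A quick numeric check of this correspondence (my own sketch; the particular frequency trajectory is an arbitrary assumption): the per-sample sum of the angular frequency divided by Fs tracks the continuous integral closely.

```python
import numpy as np

Fs = 44100
t = np.arange(int(0.5 * Fs)) / Fs                 # half a second of sample times

# Continuous angular frequency: a linear sweep (assumed), omega(t) = 2*pi*(100 + 400 t) rad/s.
omega = 2 * np.pi * (100 + 400 * t)

discrete_phase = np.cumsum(omega) / Fs            # (1/Fs) * sum of omega at each sample
exact_phase = 2 * np.pi * (100 * t + 200 * t**2)  # exact integral of omega(tau) from 0 to t

# The worst-case difference is on the order of one sample's phase increment,
# tiny compared with the hundreds of radians accumulated.
print(np.max(np.abs(discrete_phase - exact_phase)))
```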
Thank you for the kind explanation. I should probably check the derivation of the equations written in that source. I'll ask you about further details later. --Clusternote (talk) 12:51, 10 January 2012 (UTC)
  • I am sorry to trouble you. In my eyes, in the section "Non-harmonic signal", at least short explanations of the following points are necessary to clarify the definition. Without these, the intention of the definition is hard to understand.
  • The phase term φk[n] is discarded (or redefined as follows); when needed, it is redefined as the integral of the instantaneous frequency [1]
[incorrect]:  [2] 
[ERRATA]:     [2] 
(Note: the roles of   and   seem to be inverted in the 2nd reference,[2] (wow!) so corrected)
  • [incorrect]: In the discrete definition, the discrete form of the above integral form   is used, instead of the original time-varying form, because ...
  • [ERRATA]: In the discrete definition, the discrete form of the above integral form,
      +       (or,   +   (?))
    is used, instead of the original time-varying form, because ...
    Question: Why is the discrete form of the integral used?
    It is because the given discrete formulas describe an implementation. With a typical sampling frequency of 44100 Hz, the synthesis parameters (frequency and amplitude) change so slowly that they can be approximated as constant over each sampling period, with little loss in audio quality. Integrating over the piecewise-constant function can be written as a sum. Olli Niemitalo (talk) 10:02, 13 January 2012 (UTC)
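A minimal per-sample implementation sketch of what is described here (my own illustration; the partial frequencies and envelope are arbitrary assumptions): each partial keeps a running phase that is advanced by 2π·fk[n]/Fs every sample, which is exactly the discrete sum in the formulas, and fk may change from sample to sample.

```python
import numpy as np

Fs = 44100
N = Fs                                            # one second of samples
partials = [220.0, 220.0 * 2.76, 220.0 * 5.40]    # inharmonic (bell-like) partials, assumed

y = np.zeros(N)
theta = np.zeros(len(partials))                   # one phase accumulator per partial
for n in range(N):
    env = np.exp(-4.0 * n / Fs)                   # shared decaying envelope (arbitrary)
    for k, f_k in enumerate(partials):
        # f_k could itself be a function of n; the accumulator handles that correctly.
        theta[k] += 2 * np.pi * f_k / Fs
        y[n] += env * np.cos(theta[k]) / len(partials)
```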
    Thanks, but really? In that case, isn't it something like a     (where the sampling interval is  )   ?! ... I should personally verify it, and also think more about it ... --Clusternote (talk) 03:39, 14 January 2012 (UTC)
    The derivation will be easier if the intervals start at sample points. Olli Niemitalo (talk) 07:40, 14 January 2012 (UTC)
    P.S.   Do you possibly mean "efficiency of code using a constant (a pre-calculated finite integral) instead of an array inside the summation"? If so, I understand it, apart from the integration range. (I didn't expect "efficiency of implementation" rather than "theoretical possibility of the mathematical model" in the "Theory" section.) best regards, --Clusternote (talk) 05:37, 14 January 2012 (UTC)
    There's both an efficiency aspect and a simplicity aspect. Synth programmers do not necessarily even think of what happens between the sample points for the slowly-varying control signals, but operate directly on the discrete signals. But you are right in that the introductory theory section should not be concerned with such matters. That's why it would be better to write the continuous formulas there and have the discrete ones in an implementation section. Olli Niemitalo (talk) 07:40, 14 January 2012 (UTC)
    I agree with you. (I also welcome your criticism of it, whether by editing, discussing, or erasing ...) --Clusternote (talk) 08:16, 14 January 2012 (UTC)
    I erased the first footnote (going to have a look at the second one next) as it dealt with instantaneous phase which is an analysis concept not needed in synthesis. The phase of cosine or sine is its argument, so the route via analytic signals is unnecessary. There is a concept in the current theory called phase offset (phase term). Depending on how the synthesis equation is written, it can be constant or time-dependent. Discretization is what has to do with efficiency, not instantaneous phase which is a concept that is not required here. Olli Niemitalo (talk) 11:01, 14 January 2012 (UTC)
    I think so. I felt "[note 1]" was slightly inconsistent with the latest version of the partials section. --Clusternote (talk) 15:18, 14 January 2012 (UTC)
    Sorry, but I've slightly changed my opinion on this. Some mention of instantaneous phase still seems to be needed in the supplemental note using the continuous form for the inharmonic section. Without it, the supplemental note cannot simply explain the intention of the equations. --Clusternote (talk) 03:25, 16 January 2012 (UTC)
  • I erased the second footnote, because it was also based on an analysis-related concept, instantaneous frequency, and I rewrote the definition of time-dependent frequency in the inharmonic partials section, because it appeared erroneous. Olli Niemitalo (talk) 14:09, 14 January 2012 (UTC)
    Thanks for your comment. Several mistakes can be corrected as shown at the tail of this message.
    I think the latest version (except for the expression shown in bold italics below) (withdrawing the proposal inside parentheses due to my mistake (it must be instantaneous phase × constant), plus the additional clarification made to the article (15 Jan. 09:45 (UTC))) is better for engineers who prefer the discrete form; however, the concepts behind the equations should also be described for readers more familiar with the continuous form. Possibly other readers will also be convinced when they find that the equivalent "generic expression" is reached by both descriptions (article body and footnote). --Clusternote (talk) 15:18, 14 January 2012 (UTC)

...
<Inharmonic partials section end>
Note for readers: a supplemental note for this section may be found in the footnotes.

Footnotes

Supplemental note for section "Inharmonic partials" using continuous form
The section "Inharmonic partials" was described with discrete equations, with implementation in mind. Those equations were expressed using
  • the instantaneous phase θk[n],  instead of a phase term φk,  and
  • the instantaneous frequency in discrete form fk[i] (inside a summation),  instead of the time-varying frequency fk(t).
For readers not familiar with the above notations, descriptions using them are possibly not easy to understand at a glance. The following is an attempt to provide supplemental explanations for these readers.

In continuous form, a wave (corresponding to a partial in the 2nd equation) is expressed as

x(t) = r(t)·cos(ω t + φ)

using angular frequency ω and phase φ.
Instantaneous phase
In the above case, the instantaneous phase θ(t) is given by

θ(t) = arg[ xa(t) ]

where xa(t) denotes the analytic representation of x(t), and is expressed as   xa(t) = x(t) + j·H{x(t)},   with H{·} the Hilbert transform.

Instantaneous frequency
The instantaneous frequency f(t) is defined using the instantaneous phase θ(t), as follows:

f(t) = (1/2π)·dθ(t)/dt

Additionally, the instantaneous phase θ(t) can be redefined using the angular frequency ω(t), as follows:[1]

θ(t) = ∫_0^t ω(τ) dτ + φ      (in continuous form)

and the above integral form can be rewritten into its discrete form by substituting   t → n/Fs   and   ∫_0^t ω(τ) dτ → (1/Fs) Σ_{i=1}^{n} ω[i] :

θ[n] = (1/Fs) Σ_{i=1}^{n} ω[i] + φ      (in discrete form)

General expression of inharmonic additive synthesis
The second equation shown at the top of the section can be rewritten using the above result, as follows:

y[n] = Σ_{k=1}^{K} rk[n]·cos((1/Fs) Σ_{i=1}^{n} ωk[i] + φk)
     = Σ_{k=1}^{K} rk[n]·cos(θk[n])

The last form matches the "general expression" shown at the tail of the section.

--Clusternote (talk) 19:56, 14 January 2012 (UTC)
I added the above supplemental note in the "Footnotes" section. It is necessary to verify the derivation written in discrete form, at least for readers who are familiar with the continuous form. --Clusternote (talk) 15:44, 15 January 2012 (UTC)



  • Why is the absorption of the phase term φk[n] into the instantaneous frequency fk[n] needed? (Probably to be consistent with the above phase redefinition? That is also a question.)
sincerely, --Clusternote (talk) 11:34, 11 January 2012 (UTC) [errata]Clusternote (talk) 03:39, 14 January 2012 (UTC)
  • P.S. Moreover, this specific definition clearly seems to eliminate the more generic explanation, such as "summation of sinusoidal waves" or "explicitly adding sinusoidal overtones together", written in the lead section of the article.
According to Smith III 2011, this specific definition was called and defined as:
Additive Synthesis as Early Sinusoidal Modeling:[2]
 
and a more generic definition was, for example:
Spectral Modeling Synthesis:[3]
 
The former, more specific definition seems slightly too strict for this article, especially without enough justification of the phase redefinition (which causes the integration) mentioned in my previous post. I would be glad to read your opinion on it as well. sincerely, --Clusternote (talk) 06:48, 12 January 2012 (UTC)
Personally, I would be happy to have the equations written in a continuous-time form, as they are in Smith III 2011. The continuous-time form is probably more natural for most people that are not DSP specialists (but otoh the discrete-time form is going to be of more use to anyone wanting to code up the algorithm on a computer). As for absorbing the phase term, I think a bit too much is made of this point at the moment, and the emphasis should remain on the key process of additive synthesis. On that note, I would favour getting rid of the ak[n]·cos + bk[n]·sin form in the time-varying/inharmonic section and leaving just the rk[n]·cos form. It seems to me that the reason to use the ak, bk form is largely to emphasise the link with a Fourier series / FFT, which is really relevant only for harmonic additive synthesis. Chrisjohnson (talk) 18:08, 12 January 2012 (UTC)
I agree with all of Chrisjohnson's points. The discrete equations could still be left in concise form in an Implementation section closer to the end of the article. Olli Niemitalo (talk) 18:58, 12 January 2012 (UTC)
I agree, too. I just don't want them changed to incorrect equations. Being an IP, I cannot import an image, but could someone draw out a simple diagram with a big summation in the middle and a bunch of oscillators, with frequency and amplitude control, going into that summer? Maybe we could do the theory at the beginning with continuous-time equations, and then in an implementation section, show how the theoretical continuous-time equations get sampled and turned into the simplest and most general equations that currently live at the bottom of the Theory section. 70.109.183.99 (talk) 21:18, 12 January 2012 (UTC)
Something like this? wikibooks:File:SSadditiveblock.png Need to import to Commons first I guess. Olli Niemitalo (talk) 09:27, 13 January 2012 (UTC)
 
Image:SSadditiveblock.png
I've transferred File:SSadditiveblock.png from wikibooks to Wikimedia/Commons, with the help of CommonsHelper 2 on toolserver. (At the moment it is temporarily marked as "insufficient license" due to my small mistranslation; however, it should be kept after review, because it has already been properly fixed. (According to w:commons:Commons:Copyright_tags#General, I should select {{PD-user-w}} when transferring a PD-user image from wikibooks (another project).) --Clusternote (talk) 05:20, 14 January 2012 (UTC)
  • I'm glad for your comment, Chris. I also think the discrete expressions should be kept, and additionally, the supplemental explanations mentioned in my post above are needed to clarify the intention of the definition for readers.
Since 2007, these equations had been almost unverifiable due to a lack of sources.   A few days ago, with two additional citations[2][1], the intention of the equations seems to have been partially clarified, at least in my eyes.   Now it should be explained more clearly as a definition with some rationale, rather than as inviolable theory. Also, whether this specific definition is the only available one or not seems to need verification.
(Possibly a few people in this field claim their theory or definitions don't need any verifiability or further supplemental explanation; however, that may not meet Wikipedia standards.)
By the way: procedurally, should I re-post my proposal above more formally? (Also in order to clarify possibly uncertain points in my proposal.) --Clusternote (talk) 07:45, 13 January 2012 (UTC)
Yes please, if you have a rewrite of the harmonic & inharmonic theory section or parts of it, please post. Currently it is hard to see the proposal from the comments. Olli Niemitalo (talk) 10:59, 13 January 2012 (UTC)
 Pardon, do you mean a rewritten version of the section? Please wait for a while; I need to re-prepare it ... --Clusternote (talk) 11:13, 13 January 2012 (UTC)
Yes, or a part of it. Olli Niemitalo (talk) 11:22, 13 January 2012 (UTC)
  • Sorry for my slow hand. I've updated the subsection "Harmonic tone". Now preparing the rest. --Clusternote (talk) 15:04, 13 January 2012 (UTC)
  • Done. As a result, the derivation of the generic form (absorbing the phase φk[n] (n>0) into the frequency term fk[n] in the last half of the section) now seems slightly redundant. --Clusternote (talk) 08:05, 14 January 2012 (UTC)
P.S. As for the questions in my proposal above: the superficial short answers may be "by definition" and "to get the 'generic expression' often used/required elsewhere" (although these superficial answers don't meet my essential question — the rationale of the definition and description). The essential answers may be, for example, "efficiency of implementations in DSP hardware/code", "understandability" (without interpretation of the continuous formula), or "ease of development" (the discrete formula is a convenient basis for programming), as you said. I expect several of these can be verified in reliable sources.
In the definition of a mathematical model, too long a chain of formula manipulation without enough explanation ("where are we and where are we going?") tends to cause unnecessary difficulty and to mislead. In this case, the preconditions of the definition, the intention of the definition and its manipulations, and the merits of the generic expression should be explained as supplements in the theory section, in my opinion. --Clusternote (talk) 10:24, 13 January 2012 (UTC)

Opinion of other users

Because, with the exception of organ pipes or Hammond B3 tonewheels or a collection of analog oscillators (in the 70s, I did a simple 3-harmonic additive synthesizer with a patchboard Moog synthesizer), when additive synthesis is normally done nowadays, it's done with computers or DSP chips or related chips (like an ASIC or FPGA). These digital synthesizers (or synthesizer software) compute and output discrete-time samples, not a continuous-time signal. So the mathematics of what is really being done is not going to be:
y(t) = Σ_{k=1}^{K} rk(t)·cos(2π ∫_0^t fk(τ) dτ + φk)
but you substitute t = n/Fs, which is what we do when we sample a continuous-time signal. Now, can you show us that when you make that substitution, you will get an expression that looks like:
y[n] = Σ_{k=1}^{K} rk[n]·cos((2π/Fs) Σ_{i=1}^{n} fk[i] + φk)   ?
There will be a slight change in definition for rk and fk, but the form is that.
Now, there is a reason for you to show us that you can do that. You should show us that you know the least bit about mathematics so that we don't think we're wasting our time trying to inform you of an arcane technical concept that you cannot understand (at least understand in a ready amount of time). You cannot demand that your declaration (that previous math is wrong) be respected just because you don't understand it. And you cannot expect to change the article from correct equations that you cannot understand to incorrect equations that you do understand. That's a ridiculous demand. It's a ridiculous fallacy to assert an argument that "whatever I can't understand must be incorrect." 70.109.183.99 (talk) 05:09, 12 January 2012 (UTC)
Maybe we'll answer this latter question once you've demonstrated that you're worthy (by showing us you can make that substitution and get the discrete-time result). Our time is too valuable to waste. If you cannot understand even the simplest fundamental concepts, what point is it in trying to teach you the somewhat less simple concepts. And your understanding is not the criterion for inclusion. You do not own the article. 70.109.183.99 (talk) 05:09, 12 January 2012 (UTC)
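For reference, the substitution being asked for can be sketched as follows (my own write-up of the standard sampling step, using a generic constant phase offset φk):

```latex
y(t) = \sum_{k=1}^{K} r_k(t)\,\cos\!\left( 2\pi \int_0^{t} f_k(\tau)\, d\tau + \phi_k \right)
% substitute t = nT = n / F_s and treat f_k as constant over each sample period:
\int_0^{nT} f_k(\tau)\, d\tau \;\approx\; \sum_{i=1}^{n} f_k[i]\, T \;=\; \frac{1}{F_s} \sum_{i=1}^{n} f_k[i]
% which gives the discrete-time form
y[n] = \sum_{k=1}^{K} r_k[n]\,\cos\!\left( \frac{2\pi}{F_s} \sum_{i=1}^{n} f_k[i] + \phi_k \right)
```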

Personal attack from IP user (moved from Mathematician section)

Thanks for not doing this to the article, Clusternote. It's good that you want to check out and understand the mathematics. It's calculus-level math and it shouldn't be so hard to personally think about and confirm the math. But if you cannot do that yourself, do not conclude that you are the authority on the facts and change it in the article, or crap up the article with strikeouts and the like. The main namespace Additive synthesis page is not a sandbox.

Theory section needs a drawing like in the JOS article.

Clusternote, this pair of edits was useful and good, which is why they remain. Please consider straightening out your own understanding about what is happening regarding the argument of a sin() or cos() function and what instantaneous frequency is, before attempting to transmit that understanding to others via a publicly-consumed article in the WP:main namespace. The talk page is good for that, but you'll have to try to form well-defined questions for the mathematicians and engineers to answer, and then you need to listen. You can make contributions to the article that have value, but they won't be of value if they promulgate misunderstanding rather than understanding.

Now, what this article really needs, in my opinion, is a simple drawing that shows a few simple oscillators, each with independent control of frequency and amplitude, and the outputs of each of those oscillators summed into a big summer that has output y(t). I cannot upload this graphic. Is there someone who would be willing to do it? I asked Clusternote if he/she might want to contribute to the article in that positive manner, but I suspect that this request was declined. 71.169.180.195 (talk) 19:07, 15 January 2012 (UTC)


As already discussed, we need appropriate supplemental explanations of the discrete equations shown in the Theory section (whose explanation is still inconsistent and lacks a rationale), for "general readers" who have studied mathematics in continuous form.

If you find any defects in the supplemental notes (also shown in the next section), they should be improved rather than reverted. --Clusternote (talk) 06:15, 16 January 2012 (UTC)

Rephrase

Erm, the unique tone of an instrument is formed more by how those harmonics *change* over time, and by transient bits of noise and non-harmonic frequencies. Additive tries to emulate that by having a different envelope on each individual harmonic. I don't know how best to re-phrase it.

Sorry: I've reworded it a bit further. Hope all is well now. Dysprosia 09:53, 7 Sep 2003 (UTC)
I understand that additive is equivalent to wavetable if partials are harmonic. What if partials aren't harmonic? Which allows me to do this, and which does not? Petwil 06:02, 21 October 2006 (UTC)
wavetable synthesis (not to be confused with basic PCM sample playback) cannot do inharmonic partials unless you were to detune the partials from their harmonic frequency by constantly moving the phase of the partial, requiring a lot of wavetable updates. in additive synthesis, the partials (sine waves) are synthesized separately, then added. in their separate synthesis, there is no need that they be at harmonic frequencies. the frequencies of partials in additive synthesis can be whatever is specified. r b-j 04:13, 22 October 2006 (UTC)


The Synclavier was a sampler and a programmable FM synthesizer with definable harmonic wavetables. It was NOT a real additive synth: you can construct a patch defining 24 fixed partials per voice and apply dynamic enveloping and a very simple FM modulator with an envelope; only with the partial timbre upgrade can you specify several harmonic spectra and fade between them in time. I think the article meant to refer to machines such as the Kurzweil K150 or Kawai K5/K5000 and, more remotely, the Digital Keyboards Synergy, all of them the first generation of additive hardware synths. The K150 is a REAL additive engine (and a compromise between the number of oscillators and polyphony) where you can program each partial individually with envelopes (it's a shame that the programming is only possible using an old Apple computer; it can't be done from the front panel). The K5 does the same but is a simplification, being able to control only 4 groups of harmonics and not each one: practice shows that individual control is desirable up to the 16th partial... The K5000 is the classic additive synth, but combined with samples: it's quite powerful but clumsy to work with compared with software synthesis.

The truth about the Synergy: the Synergy is a user-definable PM (like FM), semi-algorithmic, 32-digital-oscillator synth with additive capabilities. This means that you could use it as a fully additive synth with 16 partials and two-voice polyphony (with limited timbral results), or in the more usual way: complex 8-voice-polyphony, Yamaha FM-style synthesis. You can think of it as a much more flexible DX7-style synth in terms of algorithms, envelopes (for frequency and amplitude of each oscillator) and filter equalization. In fact, you can come very close to the original patches using a soft synth such as FM7: you cannot do the best patches (such as Wendy Carlos's collection) on a DX7 because of the limited envelopes and fixed operator output curves, not to mention the somewhat "metallic" quality of sound that all the DXs have. In comparison, the Synergy is really warm. That is all, and it is no small thing. — Preceding unsigned comment was added by r b-j at 08:17, 13 May 2007 (UTC), and edited by 190.190.31.69 at 20:46, 23 August 2010 (UTC))

(Preceding [unsigned comment] line itself was complemented by) 122.17.104.157 (talk) 00:08, 31 October 2011 (UTC)

Harmonics or inharmonics

I can't figure out from this article whether additive synthesis involves harmonic partials only, or if inharmonics can be used as well. For example, an early section reads: Additive synthesis ...[combines] waveforms pitched to different harmonics, with a different amplitude envelope on each, along with inharmonic artifacts. Usually, this involves a bank of oscillators tuned to multiples of the base frequency. The term "inharmonic artifacts" implies that they are not deliberate but faults of the technology somehow. The general idea I get here is that additive synthesis is about combining harmonic partials of the fundamental frequency. But further down we get: Additive synthesis can also create non-harmonic sounds if the individual partials are not all having a frequency that is an integer multiple of the same fundamental frequency. Finally, another section says: ...wavetable synthesis is equivalent to additive synthesis in the case that all partials or overtones are harmonic (that is all overtones are at frequencies that are an integer multiple of a fundamental frequency...).

So I'm confused. If I combine a bunch of waves, some of which are not harmonics of the fundamental, is this additive synthesis or not? I'd always assumed it was. Another sentence on this page says: Not all musical sounds have harmonic partials (e.g., bells), but many do. In these cases, an efficient implementation of additive synthesis can be accomplished with wavetable synthesis [instead of additive synthesis]. Yet I've spent time using what I thought was additive synthesis to create bell-like tones, by incorporating various harmonic and inharmonic partials (using sine waves only). It seems like additive synthesis is the right term, since a bunch of waveforms are being "added" together, whether or not they are harmonic. But then again, that's just my uninformed sense. Is there a definitive definition one way or the other? If so, let's edit the page to make that clear. If not, ....let's edit the page to make that clear! Pfly (talk) 09:51, 18 November 2007 (UTC)

Good point. I fixed it. Some copy edit can improve the prose, but I guarantee that the math is correct. 207.190.198.130 (talk) 06:00, 19 November 2007 (UTC)

Additional citations

Why and where does this article need additional citations for verification? What references does it need and how should they be added? Hyacinth (talk) 03:42, 30 December 2011 (UTC)

Personally, I think it's fine, but I'm not gonna de-tag it. I'll leave that to someone else. 71.169.185.162 (talk) 06:21, 30 December 2011 (UTC)
It seems a slightly strange question. Until December 2010, this article lacked citations entirely; since then, I have been expanding this article and adding most of the citations in the "implementations" section. However, the other sections — the lead section (definition of the notion) and the resynthesis section (the most interesting part) — still lack any citations at all. Your contributions are welcome! --Clusternote (talk) 09:12, 30 December 2011 (UTC)
After digging through several related references, I felt again that the descriptions in the lead section and the re-synthesis section seem too naive (not practical), and are possibly not based on any reliable sources (apart from simple articles for beginners).
Although the descriptions are not incorrect, it seems hard to relate them to existing reliable research, and they are too abstract as a foundation for adding extended results researched several decades ago.
In my opinion, these sections should eventually be totally rewritten.
--Clusternote (talk) 05:54, 3 January 2012 (UTC)

Acoustic instruments and electronic additive synthesizers

This section is a bit of a mess. It delves too deep into the features of a few digital synthesizers and makes quite strongly biased claims about them ("it's quite powerful but clumsy to work", "In comparison, the Synergy is really warm. That is all and is not small thing", "it's a shame that the programming is only possible using an old apple computer"), is ambiguous and jargony at times and generally quite poor in grammar and style. Partially it feels like an advertisement for a synth. Some examples of additive synthesizers would be welcome, but I think this section needs a complete rewrite. Jakce (talk) 11:28, 22 September 2010 (UTC)

Agreed. I think there needs to be some consensus about whether an acoustic instrument is in any real sense an "additive synthesiser" or just a "Historical precursor." Is it enough to be "a mixture of many sources" to be additive? Does a symphony orchestra count? Is a sum of sinusoids required (as the Theory section suggests)? Is a Hammond tone wheel organ really an additive synthesizer? -- certainly not in the usual sense of a set of time-varying sinusoidal partials. I would be inclined to add a "Historical precursors" section for the pipe organ and hammond. Maybe some of the other stuff belongs there too. Ross bencina (talk) 07:48, 12 January 2012 (UTC)
I renamed the Implementations section as Timeline of additive synthesizers and moved Wavetable synthesis and Speech synthesis away from there. Organs are now listed as historical precursors. This sectioning should give a better separation between synthesis theory and synthesizer history, so that both can be expanded without compromising the other. I suppose history of advances in theory should be embedded in the theory sections rather than put on the timeline. Olli Niemitalo (talk) 01:30, 13 January 2012 (UTC)

Okay Cluster, we need to talk about what is meant by "realtime" or "real-time".

I don't think you have the common use or meaning of "time-variant" down. A "... time-varying transient wave" is time-variant or, if you're more of a hardcore techie, nonstationary. What "real-time" means in any context is that the production of whatever is done at the time of consumption, not in advance. It is food that is not pre-cooked or pre-processed but cooked when it is eaten. As far as music synthesis or music processing is concerned, it means that the music is synthesized or processed at the time that it is heard. It means that it was not synthesized in advance and written to a soundfile to be played back later (usually because the computational cost of synthesis exceeded the time duration of the sound).

Real-time and time-variant really are different concepts. In music synthesis, one situation I can think of where they are related is in the cranking of a knob (or bending a pitch wheel or mod wheel) during a note. If the synthesis is not real-time and the sound is outputted from a sound file, you cannot do that in playback unless the processing to change the pitch or modulation (whatever the mod is) can be done to the sound playback via real-time post-processing.

I think you need to come up with good references that support your notion that "real-time" means "time-variant". It doesn't. But both concepts have much to do with sound synthesis. 70.109.177.113 (talk) 05:53, 3 January 2012 (UTC)

Hi, 70.109.177.113. Please create an account before discussing, and show sources for your opinion. I'm not interested in a time-wasting discussion with an unknown person who has no reliable sources. --Clusternote (talk) 05:58, 3 January 2012 (UTC)
Well, that's kinda a cop-out, Cluster. Consider the merit of the content of the text that appears before you, not whatever disembodied being placed it there. Why, and from what source, did you come up with the notion that in any context, let alone the context of music synthesis, "real-time" means "time-varying"?
BTW there are some very good reasons I am posting as an IP and to itemize them here would obviate those reasons. I may edit pages that are not protected or semi-protected, so I have just as much "authority" (such as it is here in Wikipedia) as the next schlub. Just deal with the content rather than worry about who I am or may be. 70.109.177.113 (talk) 06:09, 3 January 2012 (UTC)

(reset indent)
If you want a meaningful discussion, please show your reliable sources on additive synthesis first, then briefly explain your opinion. I can't understand your previous, complicated posts.

Note that additive synthesis for dynamic, time-varying waveform generation seems to have historically often been called "realtime additive synthesis", in the sense of both "realtime change of waveform, harmonics or timbre" and "realtime implementation, processing or computation". You can find several examples in a Google Scholar search. Or, more briefly, Sound on Sound's article of Oct. 1997 shows both meanings of the term "real time". --Clusternote (talk) 13:32, 3 January 2012 (UTC)

Again, I have removed this content change from Clusternote that some might call OR but I would just call an error of category. Clusternote, you are mistaken with your assumption that they didn't mean real-time when they wrote real-time synthesis. What is real-time computing is precisely what is meant in real-time synthesis and your own cites make that connection ("STONE-AGE ADDITIVE"). It's your original contribution to the article that is required to be defended with citations that actually support the addition. 70.109.177.113 (talk) 06:01, 4 January 2012 (UTC)
Clusternote, I think you're getting confused between what is often implemented at the same time. "additive synthesis for dynamic, time-varying waveform generation" implemented as "realtime additive synthesis" does not mean the two terms are equivalent in meaning. They have distinct meanings as 70.109.177.113 has explained. Ross bencina (talk) 07:53, 12 January 2012 (UTC)
Dear Ross, thanks for your advice. It is probably as you said. Most of our difficulty is a lack of reliable sources and, moreover, a lack of a widely accepted common taxonomy of additive synthesis, both of which are required as a basis for discussing and improving the article. If you could kindly provide several sources, or a list of widely accepted terminology and taxonomy in this field, the time-wasting discussions without reliable sources by several IP users would be drastically reduced, and it would be a great help in improving the article. (A discussion on categorization may also be found at #Lack of categorization of various .22additive_synthesis.22.)
As for "real-time ...": several years ago, some article also edited by similar authors denoted "time-varying ..." as "real-time ...", so I had come to regard it as a kind of unique jargon accepted on Wikipedia. However, that article has now been completely cleaned up, and that unique expression seems to be gone. I should have checked the reliability of that expression when I first saw it.
Anyway, I look forward to your contributions to this article (it lacked any reliable sources until last year). best regards, --Clusternote (talk) 21:54, 12 January 2012 (UTC)

70.109.177.113, please show your reliable sources before editing the article. I have already shown several sources on this issue on this page. You don't understand the situation yet. Almost all citations on the article page were added by me. However, you have not yet shown any source supporting your opinion. Please don't revert the article until you can find sources supporting your opinion. --Clusternote (talk) 06:10, 4 January 2012 (UTC)

You are the editor adding content without sourcing it. The sources you cite actually disprove the claim you make (that "real-time" in synthesis is not the same as real-time computing). You are totally mistaken and you need to do some studying. Start with the sources you cited above. 70.109.177.113 (talk) 06:21, 4 January 2012 (UTC)

False statements by the User:70.109.177.113

User:70.109.177.113 posted a false statement in this thread.[5]

  • This user is confusing "realtime sound synthesis"[6] with Real-time computing:
    In several of his/her edits, he/she repeatedly added a link to real-time computing for the ANS synthesizer [7] [8].
    However, in truth, the ANS synthesizer was implemented using electronic-optical technology, not any real-time computing technology.
    These incorrect edits are one piece of evidence of his/her misunderstanding.
  • While this user repeatedly misunderstands as above, he/she also claims it is another user's misunderstanding. [9]

It seems to lack soundness. --Clusternote (talk) 08:00, 4 January 2012 (UTC)

Please create account before further discussion

Please create an account before further discussion. A person with neither an account nor reliable sources is not worthy of trust. --Clusternote (talk) 07:04, 3 January 2012 (UTC)

Sorry, but I don't think it's appropriate to discriminate against an anonymous IP, even in a talk page. IPs have most of the rights of editing, discussion, consensus, etc. that registered accounts do. There are legitimate reasons to contribute via an IP and it's a core Wikipedia principle to embrace them. Verifiable content is what matters, not who contributes it. Users never need to be trusted, whether they are a named account or an IP. Enough philosophy... back to your argument!  :) --Ds13 (talk) 09:14, 3 January 2012 (UTC)
Thanks for your comment. However, the most problematic thing in this issue is that, on the article and talk pages, the other users including this user haven't shown any citations or sources, and discuss only from their uncertain memories or original research. The discussion above essentially lacks sources, and discussion without sources tends to be subjectively biased. We need reliable sources and responsible discussion. --Clusternote (talk) 10:21, 3 January 2012 (UTC)
Agree completely. This article needs more citations and less subjectivity. I'm going to make a pass through the article now to neutralize a few things. --Ds13 (talk) 18:28, 3 January 2012 (UTC)


By the way: if someone understands the exact meaning of what this IP user wrote in this thread, please briefly explain it to me.

For my part, I could not understand this IP user's unclear English well. And in my eyes, his/her vague, uncooperative, offensive words seem somewhat inappropriate for a discussion on Wikipedia. --Clusternote (talk) 14:44, 4 January 2012 (UTC)

Writing style

I'm not a subject-matter expert here, but I do know how to write for Wikipedia. This diff reverts typical changes to clean up language in articles that obscure matters for the "general reader". Simpler language that does the job concisely is actually the house style here. Charles Matthews (talk) 19:28, 12 January 2012 (UTC)

I agree that the diff introduces increasingly obscure language; however, it also corrects falsehoods. Cleaning up the language to be black-and-white can only be appropriate if the facts are black and white. In this case I don't think they are. Ross bencina (talk) 19:54, 12 January 2012 (UTC)
We could cut out all the stuff in the middle (but I wouldn't recommend it). It exists to generalize from the Fourier series representation used for harmonic tones and to take it to the simplest, most general expression of additive synthesis (adding sinusoids with arbitrary frequency and amplitude). 70.109.183.99 (talk) 20:55, 12 January 2012 (UTC)

Could you explain the intended meaning of "quasi-periodic" there? In mathematics it is ambiguous, but a straight traditional Fourier series does not represent anything quasi-periodic, for sure. Charles Matthews (talk) 21:29, 12 January 2012 (UTC)

There is a problem with nomenclature overlapping with other disciplines. It is most closely related to Almost periodic function. It is a function where some periodicity is evident; one period or cycle looks nearly indistinguishable from the adjacent periods but may look very different from periods that are far away. Other terms used have been "quasi-harmonic", but that alludes more to the frequency-domain nature of the tone rather than the time-domain. Many musical instruments have notes that, at least after the initial onset, are quasi-periodic or quasi-harmonic. As far as the Fourier series, it means that the ak and bk coefficients are not constant (if they were constant, the result would be perfectly periodic, of course) but are functions of time that are relatively slowly varying, like a synthesizer ADSR envelope but more general than ADSR. 70.109.183.99 (talk) 21:52, 12 January 2012 (UTC)

Fixed-waveform vs time-varying

There has been some confusion between "real-time" (i.e. something capable of generating sound in real time, such as a computer program) and "time-varying", which in the context of additive synthesis means that the partials change amplitude and/or frequency over time. Clearly, given these definitions, it is possible to have both time-varying non-real-time synthesis and also non-time-varying real-time synthesis. I hope this entry provides sufficient explanation as to why I have replaced real-time with time-varying in almost all cases in this article. It should be noted that whether or not something is real-time, while of technical interest, is of little substantial relevance to the consideration of a synthesis technique on its merits to produce particular timbres.

The fixed waveform / time-varying classification is supported by Roads 1996:

Additive synthesis is a class of sound synthesis techniques based on the summation of elementary waveforms to create a more complex waveform. Additive synthesis is one of the oldest and most heavily researched synthesis techniques. This section starts with a brief history of additive synthesis and explains its fixed-waveform and time-varying manifestations. (Curtis Roads, The Computer Music Tutorial, p134. MIT Press 1996)

"fixed waveform" is equivalent to "static" and "non-time-varying" which may also be used in this article. Ross bencina (talk) 19:50, 12 January 2012 (UTC)


You're completely correct, Ross, but there is some history of the first real-time additive synthesis machine (the Bell-Labs machine?) and there is a concept of real-time synthesis as opposed to creating a sound file in non-real-time and playing that back later (at the proper speed). This is part of the early history of additive synthesis. At first most of us couldn't do it with a real-time device where you would hit a key and out would come the sound. So it is appropriate to have some reference to the concept and I wouldn't even object to calling a Hammond B3 a sorta crude "real-time" additive device and I wouldn't object to calling an organ a precursor to additive synthesis.
Essentially, the only difference between "static" and "time-varying" is that, in the static case, the rk and φk are constant (for harmonic tones) and the rk and fk are constant (for non-harmonic tones).
I believe that one thing that may have been hanging Clusternote up is that if any of these parameters vary because someone turns a knob (in response to a MIDI Control message) and if the synthesis responds immediately, that does have to be real-time. But the concepts are different. 70.109.183.99 (talk) 20:46, 12 January 2012 (UTC)

Speech synthesis

Hi Clusternote. I understand the point of your recent edit, replacing text I deleted. Here's my perspective: until some reliable sources verify the relevance of the speech synthesis premise (intro sentence of that section?) then I struggle with any and all of its content being there. Will wait for more info. --Ds13 (talk) 02:33, 4 January 2012 (UTC)

The similarity between "speech synthesis" (which implies analysis and re-synthesis) and "additive synthesis" (including resynthesis and the analysis needed for it) is just what I already wrote in the article. Historically, speech analysis and speech re-synthesis were implemented by extracting the peak frequencies of formants, and reproducing those peak frequencies using oscillators (in the case of sinewave synthesis, adding sinusoidal waves as in normal additive synthesis) or filters.
These methods did not directly implement harmonic analysis (and resynthesis based on it); however, the definition of additive synthesis should not be limited to harmonic analysis/resynthesis. A special case of additive synthesis based on harmonic analysis seems to be simply called "Spectral modeling synthesis" (or "Sinusoidal modeling" for the particular case using sinusoidal waves as the basis functions).[a 1]
  1. ^ Julius O. Smith III (2011), "Additive Synthesis", Spectral Audio Signal Processing, Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, ISBN 978-0-9745607-3-1
    — See also the section "Spectral modeling synthesis", which links to the term "Sinusoidal modeling" on the above page.
--Clusternote (talk) 05:35, 4 January 2012 (UTC)
P.S. Anyway, thanks for your suggestion. The formal name and categorization of the kind of non-harmonic additive synthesis used especially in speech synthesis is an interesting topic that could improve the article significantly. I think it requires further verification. --Clusternote (talk) 08:59, 4 January 2012 (UTC)

Yesterday I stripped all material that was clearly not related to additive synthesis from the speech synthesis section. There is no need to describe non-additive techniques, formants, CELP etc in this article. On the other hand I can see the relevance of the content that I've left. Possibly it would be better to replace with a related techniques section that includes the current content of the speech synthesis section. After all, additive synthesis is related to many other areas, not just speech synthesis research. --Ross bencina (talk) 05:45, 14 January 2012 (UTC)

Other defects in the article

In the latest version of the article, other defects remain. --Clusternote (talk) 14:07, 10 January 2012 (UTC)

Ignore of "analysis" phase as Analysis / Resynthesis-style synthesis

[Diagram: Overview of Additive Re-synthesis, based on (Horner & Beauchamp 1995), (Yee-King 2007, p. 19)]
[I think this diagram specialises too much in analysis/resynthesis to be described as an 'overview' of additive synthesis. For example, Yee-King 2007, p. 18, mentions using ADSR envelopes rather than analysis to generate the amplitude of the partials. Perhaps the diagram can be expanded to include several possible sources of control data? Chrisjohnson (talk) 18:20, 10 January 2012 (UTC) ] [I think this diagram would work well floated right in the Analysis/Synthesis section, with the new title "Overview of Additive Analysis/Resynthesis". Ross bencina (talk) 16:25, 13 January 2012 (UTC)]
[Diagram: Overview of Timbral Specification Techniques for additive synthesis (was: Overview of Additive Re-synthesis), based on (Horner & Beauchamp 1995), (Yee-King 2007, p. 19), and Smith III 2011]

Hi, Chris. Here is an alternate version. As for the ADSR envelope, I interpret it as a kind of mathematical model using an analog computer. BTW: the whole image seems slightly too complex, and possibly does not reach the quality standard? --Clusternote (talk) 15:22, 13 January 2012 (UTC)

As written in (Horner & Beauchamp 1995), (Yee-King 2007, p. 19), etc., additive synthesis consists of the following phases (a rough code sketch of this pipeline is given after this comment):

  1. "Analysis" phase to extract parameters of partials, and possibly estimate trajectories from these[1]
  2. "Modification" (or "Decomposition" ?[citation needed]) phase to alter parameters with several intention,[4], and
  3. "Re-synthesis" phase to produce sound from parameters.

And sometimes this aspect is called Analysis / Resynthesis-style synthesis. This aspect is slightly similar to

However, the latest version of the article ignores this aspect without any rationale. It is a clear defect in the article.
--Clusternote (talk) 14:07, 10 January 2012 (UTC)
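As a very rough sketch of the three-phase pipeline described above (assuming naive STFT peak-picking for the analysis and an interpolating oscillator bank for the resynthesis; all names are illustrative and no partial-tracking or phase matching is attempted):

  import numpy as np

  def analyze(signal, sr, n_partials=16, frame=2048, hop=512):
      # Analysis: take the n strongest spectral peaks in each windowed frame.
      window = np.hanning(frame)
      freqs, amps = [], []
      for start in range(0, len(signal) - frame, hop):
          spectrum = np.fft.rfft(signal[start:start + frame] * window)
          mags = np.abs(spectrum)
          peaks = np.argsort(mags)[-n_partials:]        # strongest bins (no tracking)
          freqs.append(peaks * sr / frame)              # bin index -> Hz
          amps.append(mags[peaks] * 2.0 / frame)        # rough amplitude estimate
      return np.array(freqs), np.array(amps)

  def modify(freqs, amps, pitch=1.0, gain=1.0):
      # Modification: alter the extracted parameters before resynthesis.
      return freqs * pitch, amps * gain

  def resynthesize(freqs, amps, sr, hop=512):
      # Re-synthesis: drive a bank of sinusoidal oscillators with the
      # interpolated frequency/amplitude trajectories.
      n_frames, n_partials = freqs.shape
      n_samples = n_frames * hop
      frame_times = np.arange(n_frames) * hop
      t = np.arange(n_samples)
      out = np.zeros(n_samples)
      for k in range(n_partials):
          f = np.interp(t, frame_times, freqs[:, k])
          a = np.interp(t, frame_times, amps[:, k])
          phase = 2 * np.pi * np.cumsum(f) / sr         # integrate frequency to get phase
          out += a * np.sin(phase)
      return out

A real analysis/resynthesis system would additionally track peaks between frames and handle phase continuity; this is only meant to make the "analysis, modification, re-synthesis" division concrete.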

The "Analysis" phase is not an essential part of additive synthesis. It might be an essential part of additive *re*synthesis, but not additive synthesis. Almost any synthesis technique could be used for resynthesis of some existing sound (with varying degrees of success, obviously!) and the analysis stage would be crucial in that process. But the analysis is not part of the synthesis. Additive synthesis can be used for abstract sound synthesis like any other technique, in which case no analysis is required. Leave it out, in my view. Electricdruid (talk) 14:47, 10 January 2012 (UTC)

Thank you for the rapid comment. I know several people often oversimplify the definition; however, I can't find enough reliable sources supporting your opinion disregarding the analysis phase. Where are your reliable sources?
In the definition written at the top, "analysis" is not a part of "re-synthesis" but a part of "Analysis/Resynthesis"-style synthesis. --Clusternote (talk) 15:12, 10 January 2012 (UTC)
There's a nice piece about additive synthesis at http://music.columbia.edu/cmc/MusicAndComputers/chapter4/04_02.php
As you can see, there's nothing about analysis at all. Columbia Uni is a reliable source, I'd have thought. Where and how you generate control data for your additive synthesis is entirely up to you - most of the instruments you've included as examples allow various sources for the data, and many don't include any analysis features.
Indeed, the JOS reference you give even says "In order to reproduce a given signal, we must first analyze it..." (https://ccrma.stanford.edu/~jos/sasp/Additive_Synthesis_Analysis.html) - the point being that the analysis step is only necessary if we're trying to reproduce a given signal, not just to make a sound. — Preceding unsigned comment added by Electricdruid (talkcontribs) 11:08, 11 January 2012 (UTC)

I agree with Electricdruid that the analysis phase is not an essential part of additive synthesis. A reliable source for this is 'The Computer Music Tutorial' by Curtis Roads (MIT Press 1996). Many of the examples of additive synthesis on the page, such as the Hammond organ, clearly have no analysis phase -- and surely these should be classified as additive synthesis? Of course, analysis of the sort described above is sometimes used to generate envelopes for each partial for subsequent additive synthesis, but the references Horner & Beauchamp and Yee-King do not support the view that it is the only way to generate such control data. (An example: the THX sound is a frequently-heard instance of inharmonic time-varying additive synthesis being used with control data generated entirely algorithmically, rather than through analysis of existing sounds). Chrisjohnson (talk) 15:34, 10 January 2012 (UTC)

Thanks for your comment and reference. That difference may be between a wider meaning (as Analysis/Resynthesis-style synthesis) and a narrower meaning (concentrating on the form of the "Fourier series" originally used in the re-synthesis phase). The narrower usage is often used for consumer models; however, research seems to be carried out from the wider viewpoint. For example, (Smith III & Serra 2005, Additive Synthesis) defines "additive synthesis" with reference to a "Speech analysis/synthesis" paper (McAulay & Quatieri, 1986). --Clusternote (talk) 15:58, 10 January 2012 (UTC)
I take the view that the 'wider meaning' of additive synthesis is the summation of sinusoidal waves (a Fourier series or otherwise) with amplitudes and frequencies produced by any method, and that it is a 'narrower view' to limit attention to the situation where the amplitudes and frequencies of the partials are generated by some form of analysis of other sounds. In an article titled Additive Synthesis we should not restrict attention only to systems where additive synthesis is combined with analysis. It is misleading to regard additive synthesis as being something "originally used on re-synthesis phase" [of an analysis/re-synthesis system], since additive synthesis without any analysis pre-dates any form of analysis/re-synthesis by a considerable period (e.g. the Telharmonium of 1897). (By the way, Smith & Serra cite McAulay and Quatieri only as a reference for cubic polynomial phase interpolation -- I'm not sure this is a relevant source?). Chrisjohnson (talk) 16:39, 10 January 2012 (UTC)
My selection of the words "wider" and "narrower" is possibly the opposite.
The notion of "Harmonic additive synthesis (time-invariant)"[6] (or "Steady-state additive synthesis"[7] using only harmonics) probably goes back to the pipe organ era. However, explanations using the term "additive synthesis" probably date from after the 1950s, when the term "sound synthesizer" came into use following the RCA Mark II Sound Synthesizer. And the earliest "additive synthesizers" in computer music may imply Analysis/Resynthesis, as written in Smith III 2011, Additive Synthesis (Early Sinusoidal Modeling).
By the way, I updated the categorization of various kinds of "additive synthesis" in the next sub-section. I would be glad to read your comments on it. --Clusternote (talk) 18:30, 10 January 2012 (UTC)
  • On (McAulay and Quatieri 1986), your point seems reasonable. What I had to point out was that, at least, several researchers in that field seem to be interested in Analysis/Resynthesis and Speech synthesis (= the wider viewpoint); for example, the title of Smith III & Serra 2005, "PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation", shows it. And now, I think further verification is needed on this issue, including Analysis/Resynthesis in earlier research on additive synthesis, and also (possibly) interaction with research on Speech Synthesis, etc. --Clusternote (talk) 19:05, 11 January 2012 (UTC)

I think the subject of analysis merits inclusion in the article, but it should be made clear that it is not a part of all additive synthesis methods. Also a note on the use of the word "decomposition": In this context it is synonymous to "analysis", not to "modification" (a use not supported by the given references). The idea is that the signal is thought to be composed of (typically sinusoidal) components, and can thus be decomposed back into those components. See for example the abstract of Simmons 2002. Olli Niemitalo (talk) 16:14, 10 January 2012 (UTC)

Thanks for your comment. I think so. As for the term "Decomposition", it may well be my mistake, or a non-standard use of terminology from somewhere. I have temporarily lost the reference to it. When I find the URL, of course, I will want to verify it. --Clusternote (talk) 17:14, 10 January 2012 (UTC)

I've cleaned up the Additive analysis/re-synthesis section based on the above discussion. Can we remove the cleanup tag now? Perhaps the illustration above could be included as "A schematic illustration of the additive analysis/resynthesis process". Ross bencina (talk) 12:22, 13 January 2012 (UTC)

Removed cleanup tag Olli Niemitalo (talk) 14:29, 13 January 2012 (UTC)

Clusternote, Chris, others: I think there is no such thing as "resynthesis" unless there is a preceding *analysis*. Any other form of control can only lead to "synthesis" (controlled synthesis, if you will). For this reason I think that while Clusternote's "new simplified" diagram is good and correct as an "overview of timbral specification techniques for additive synthesis", I don't think it belongs with "resynthesis" or "Analysis/resynthesis". Perhaps we should create a new "Control Strategies" section. I don't know when the current section was originally titled "Additive resynthesis" but I have now renamed it "Additive analysis/resynthesis" since that made the most sense based on incorporating all of the discussion in this section. Of course there is still a question about whether analysis/resynthesis should be mentioned at all. I think it is a significant enough area to mention -- so long as it isn't presented as the whole story, which it isn't. Ross bencina (talk) 05:05, 14 January 2012 (UTC)

Lack of categorization of various "additive synthesis"

As shown in several references in the previous version (search "terminologies"), varieties of additive synthesis are often sorted into several categories. For example (a brief symbolic sketch of the harmonic/inharmonic distinction is given after this comment):

  1. Harmonic or Inharmonic
    • Harmonic additive synthesis, which uses only harmonic partials [6]
    • Inharmonic additive synthesis, which also uses inharmonic partials [8]
  2. Steady-state or Transient-state
    • Steady-state additive synthesis, which handles only steady-state waves.[7]
    • Time-varying additive synthesis, which also handles transient waves. [7][9]
  3. Source of control data (or control signal): analysis-based (using control data) or non-analysis-based
    • Mathematically generated
    • Human generated
    • Analysis generated
    For example, for "Time-varying additive synthesis", at least the following two types exist:
    • Progressions driven by control data[7] (analysis-based)
    • Progressions generated mathematically (real-time or pre-calculated)[7] (non-analysis-based)
    Note: The latest version of the article seems to be based on the "non-analysis" version.
  4. (Real-time processing or pre-processed (non-real-time))
  5. Real-time control or not

Similar categorizations and explanations are needed in the article, in an appropriate section; the current version instead implicitly defines additive synthesis beforehand, without enough reliable sources.
--Clusternote (talk) 14:07, 10 January 2012 (UTC) [updated]--Clusternote (talk) 18:08, 11 January 2012 (UTC)
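Purely as an illustration of the first distinction above (the notation and the stiff-string example are mine, not taken from the cited references): in harmonic additive synthesis the partial frequencies are integer multiples of a fundamental,

  f_k = k f_0, \quad k = 1, 2, 3, \ldots

whereas in inharmonic additive synthesis the f_k need not follow that pattern; one textbook example is the stretched partials of a stiff string,

  f_k = k f_0 \sqrt{1 + B k^2}

where B is a small inharmonicity coefficient. The steady-state/time-varying distinction is then simply whether the amplitudes (and frequencies) of those partials are held constant or vary over time.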

Comments on the proposed classification: I think the distinctions of harmonic/inharmonic, steady-state/time-varying and mathematically-generated/human-generated/analysis-generated control data for the partials are important, and need to be clear from the article. However, I think the article initially needs to focus on what is common to these different approaches (the summation of sinusoidal waves), before distinguishing the various types of additive synthesis. I don't think the 'real-time' classification is required - the realtime issue is only relevant in that the high computational cost of generating and summing many sinusoidal partials on a digital computer/synth motivated the development of a number of related techniques that are less computationally expensive. Chrisjohnson (talk) 19:08, 10 January 2012 (UTC)
I'm glad for your comment. As for the place to put the categorization, at least the position before the Implementation section should be kept, to describe these things as in the new proposal on "historical trends of notions". As for "real-time processing ...", it may only apply in a few cases (the RMI Harmonic Synthesizer, and describing the relation to "Wavetable synthesis"), so an explicit explanation is possibly not needed. I've reflected your opinion in the above categorization. --Clusternote (talk) 18:08, 11 January 2012 (UTC)
I don't think categorization is that important, especially if there is no common agreement on the names of the categories. As a workaround, one could section the article into the problems/questions that would have led to the categorizations, and discuss the alternative approaches there (not necessarily naming the categories): Frequencies of the partials, Source of control signals or something to that effect. Real-time control is something that can be mentioned, if needed, in plain speak for each synthesizer model listed. It's more like a practical point rather than anything fundamental in the theory: Will the sound change as I turn the knob? Olli Niemitalo (talk) 20:30, 10 January 2012 (UTC)
I'm glad for your comment. Really, the lack of common agreement on the categorization of additive synthesis seems an important issue. The rough notion behind each categorization seems to be largely shared; however, the details and naming vary. On this issue, probably another, more generic discussion of alternative approaches may be needed to form a consensus, as you said. I would be glad if someone did it ... however, it may be my task, of course.
As for "real-time control", it may also be another important aspect of additive synthesis (possibly related to research on "Real-time additive synthesis"). However, the number of controllers that can be simultaneously operated by a human is far fewer than the number of parameters in additive synthesis. Probably some grouping of parameters or partials (as on the Kawai K5, or Group additive synthesis[10][11][12]) may be needed, and the abilities of individual implementations may vary. It also requires other sources and verification. Anyway, I added it to the above categorization. thanks --Clusternote (talk) 18:08, 11 January 2012 (UTC)


Upper categorization

[new addition at 2012-01-11 18:00 (GMT)]
In addition to the above internal categorizations, an upper categorization within a widely accepted, more generic categorization system may also be needed to clarify the scope of the article.

  • Linear synthesis
  • Analysis/Resynthesis-style synthesis
    • Additive synthesis as Analysis/Resynthesis-style synthesis
  • Hybrid synthesis
    A combination of multiple synthesis techniques. For example, "Sines + Noise + Transients Models" also uses Sinusoidal modeling[15]

--Clusternote (talk) 18:08, 11 January 2012 (UTC)

Keep in mind that trying to create strictly hierarchical taxonomies is inconsistent with the nature of human activity. For example: additive synthesis was one inspiration for Spectral Modelling, or you could say Spectral Modelling *uses* additive synthesis or *is a form of* it, or *is an elaboration of* it. However, to flip the hierarchy and say that additive synthesis is a subset or (degenerate) form of SMS unnecessarily privileges SMS as some kind of super technique that "owns" additive synthesis (indeed some might like it that way, including your reference -- one of the inventors of SMS). The neutral way, in my view, is to avoid imposing unnecessary taxonomies and hierarchies and prefer enumerating relationships, common usage, and chronologies. --Ross bencina (talk) 06:05, 14 January 2012 (UTC)

Thanks for your information. As for the above upper categorization, I have not yet added exact semantics such as you mention ("super", "subset", etc.), because they may vary for each tree. If SMS was POV, it may be ignorable because it was not a "widely accepted, more generic categorization". I also think that "imposing unnecessary taxonomies" should be avoided, as you said. --Clusternote (talk) 08:26, 15 January 2012 (UTC)
P.S. Also, you should keep in mind that a taxonomy is only a tree-like, hierarchical representation or approximation of more realistic, arbitrary relationships between things, often represented by network structures. In practice, the latter is modeled using, for example, relational modeling as used in databases, object modeling as used in programming, or ontology engineering as referred to in the semantic web. Based on the latter, more accurate models, the former, as an approximation, can be more rationally designed and expressed.
If a taxonomy lacks the basis of more precise modeling, it might become merely a territorial dispute based on the subjectivity of each stakeholder. However, on Wikipedia, everyone is always required to take a neutral viewpoint, and also to ensure that no preference is given to the interests of a few. --Clusternote (talk) 23:12, 17 January 2012 (UTC)

Historical trends of notions

[new addition at 2012-01-11 18:00 (GMT)]
Related to the definition and categorization, the historical transition (or trend) of notions (including precedents) may also be needed to clarify the story. The "Implementations" section is not yet well organized, and so may fit this purpose. However, I have no intention of forming a rigid structure that eliminates exceptions.

The following is my outline, as a starting point:

  • medieval (12th century) to early 19th century: prior to Fourier's theorem, an early empirical notion (timbral addition?) similar to "steady-state, harmonic additive synthesis" was seen in pipe organs with multiple ranks (multiple sets of pipes, each set (rank) with a different timbre, formed in the 12th century).
  • 19th to early 20th century: Fourier's theorem became the basis of the earlier empirically known notion. The Telharmonium (c.1897) and Hammond organ (1934), using quasi-sinusoidal waves, may be implementations of it.
  • 1910s–1930s: The early notion of "time-varying additive synthesis" was implemented by several Russians (and probably others; see 120years.net) as manually drawn summations of sinusoidal waves on film. Under their influence, the later ANS synthesizer (1937-1957) was developed.
  • late 1950s–1960s: the notions of "sound synthesis" and "computer music" were established. Early research on "additive synthesis" for computer music was probably also established in this era. (ToDo: citations for earlier research)
  • 1970s–early 1980s: With the evolution of digital microelectronics, including microprocessors, several additive synthesizers were implemented as digital hardware synthesizers. The Bell Labs Digital Synthesizer (Alles Machine) was developed following earlier software experiments at Bell Labs (possibly non-analysis-based). Several later models, including the Fairlight CMI and Synclavier, had an additive resynthesis feature for sampled sound (analysis-based).
  • mid 1980s–mid 1990s: most of the consumer models without analysis were released in this era.
  • mid 1990s–current: software synthesizers on PCs, which appeared in the late 1980s–early 1990s, came into the mainstream.

--Clusternote (talk) 18:08, 11 January 2012 (UTC)

Questionable implicit definition of "additive synthesis" in the "Theory" section

On "Theory" section, equations implicitly describes additive synthesis as

synthesis based on Fourier series (or IFFT),

instead of the generic Fourier theorem[2]. This implicit description also ignores the preceding "analysis" phase, in which a Fourier transform (FFT or STFT) or an "analysis filter bank" is used to extract parameters from a given audio signal. It should be improved (a symbolic sketch of the more generic form is given below).
--Clusternote (talk) 14:07, 10 January 2012 (UTC)
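Just to make the contrast concrete (the notation here is mine): one common way to write the generic form of additive synthesis is as a sum of sinusoids whose amplitudes r_k(t) and frequencies f_k(t) are free functions of time,

  y(t) = \sum_{k=1}^{K} r_k(t)\cos\big(\phi_k(t)\big), \qquad \phi_k(t) = \phi_k(0) + 2\pi \int_0^t f_k(\tau)\,d\tau

and the Fourier-series (or IFFT-based) description is the special case in which f_k(t) = k f_0 for a fixed fundamental f_0. Whether the r_k(t) and f_k(t) come from an analysis stage or from any other source is a separate question.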

Moreover, in my eyes, the section title "Theory" seems slightly too grandiose and possibly misleading. If the section mainly discusses the definition and does not mention any theoretical predictions of cause and effect (explained by a theory), it should be named "Definition". --Clusternote (talk) 18:59, 11 January 2012 (UTC)

Several misuses of terminology

  1. The term "stationary wave" should be "steady-state wave" which means non-"transient wave".
    The former expression seems too closely related to physical oscillation phenomena.

  2. The term "real-time" on ANS synthesizer shouldn't be linked to "real-time computing".
    The ANS synthesizer is an "optoelectronic" musical instrument implemented with optical and electronic circuits. It doesn't use any computing power in the normal sense, just like an old fax machine without a microprocessor. Thus, the link should be deleted.

--Clusternote (talk) 14:07, 10 January 2012 (UTC)

I reviewed the text. These changes have now been made. --Ross bencina (talk) 06:08, 14 January 2012 (UTC)

Rough draft solving these issues

A rough draft solving these issues (and several other issues) has been posted as revision 470472820 by me.
--Clusternote (talk) 14:07, 10 January 2012 (UTC)

References of this thread

  • Smith III, Julius O. (2011), Spectral Audio Signal Processing, Center for Computer Research in Music and Acoustics (CCRMA), Department of Music, Stanford University, ISBN 978-0-9745607-3-1
  • Smith III, Julius O.; Serra, Xavier (2005), "PARSHL: An Analysis/Synthesis Program for Non-Harmonic Sounds Based on a Sinusoidal Representation", Proceedings of the International Computer Music Conference (ICMC-87, Tokyo), Computer Music Association, 1987, Center for Computer Research in Music and Acoustics (CCRMA), Stanford University (online reprint)
  • Horner, Andrew; Beauchamp, James (1995), "Synthesis of Trumpet Tones Using a Wavetable and a Dynamic Filter", Journal of the Audio Engineering Society (JAES), 43 (10): 799–812 (online reprint)
  • Yee-King, Matthew (2007-01-15), "Music Technology Lecture 3, Sound Synthesis 2" (PDF), Lecture slides, Department of Informatics, University of Sussex
      Terminology used in this article: "Harmonic Additive Synthesis (time invariant)", "Partial Addition", "Inharmonic Additive Synthesis (time invariant)", and "Time varying Additive Synthesis".
  • Interactive Swarm Orchestra (n.d.), Basic Synthesis Techniques, Interactive Swarm Orchestra / Interactive Swarm Space (a project realized as a collaboration between the ICST of Zurich University of the Arts and the AILab of the University of Zurich)
      Terminology used in this article: "Steady-State Additive Synthesis" and "Time-varying Additive Synthesis".

--Clusternote (talk)

References

  1. ^ a b c d Smith III & Serra 2005, Additive Synthesis
  2. ^ a b c d e f g Smith III 2011, Additive Synthesis (Early Sinusoidal Modeling)
  3. ^ a b Smith III 2011, Spectral Modeling Synthesis
  4. ^ Yee-King 2007, p. 20 Data modification
  5. ^ Smith III 2011, Further Reading on Vocoders
  6. ^ a b Yee-King 2007, p. 12-13
  7. ^ a b c d e Interactive Swarm Orchestra n.d.
  8. ^ Yee-King 2007, p. 14-15
  9. ^ Yee-King 2007, p. 16-17
  10. ^ a b Julius O. Smith III, Group Additive Synthesis, CCRMA, Stanford University, retrieved 2011-05-12
  11. ^ a b P. Kleczkowski (1989). "Group additive synthesis". Computer Music Journal. 13 (1): 12–20.
  12. ^ a b B. Eaglestone and S. Oates (1990). Proceedings of the 1990 International Computer Music Conference, Glasgow. Computer Music Association.
  13. ^ Horner & Beauchamp 1995, p. 803
  14. ^ Kahrs, Mark; Brandenburg, Karlheinz (1998), Applications of digital signal processing to audio and acoustics, Kluwer international series in engineering and computer science: VLSI, Computer architecture and digital signal processing vol.437, Springer, p. 208, ISBN 9780792381303
  15. ^ Smith III 2011, Sines + Noise + Transients Models

Another review from an interested party

I would like to offer a brief review of the page as of January 12, 2012. I admit I have not digested the totality of this talk page. But the fate of the article as a whole is important and I think there needs to be some structural change and clarification of priorities.

I think a good way forward for this article would be to introduce an improved structure which asserts a hierarchy of information. Once an appropriate hierarchy is established I may have more to say on the fine detail. Certainly I agree that "time varying" is a widely understood term that is independent of and unrelated to "real-time computing" (you can have a time-varying signal generated by a non-real-time system, and vice versa).

Most importantly, the first section after the introduction should be "Theory". "Additive resynthesis" is a subtopic and best introduced later. Alternatively there could be a general overview section that includes reference to Analysis/resynthesis but I can't help but feel that Analysis/resynthesis should be a separate page -- since the idea of analysis/resynthesis goes well beyond Additive *synthesis* which should properly be defined as the combination of sinusoids, possibly with mention of some historical precursors.

With regards to the theory section, my opinion is that harmonic additive synthesis more properly originates in Pythagoras' purported experiments with harmonic (integer) subdivision of strings. And it could also be considered in the context of inharmonic modes/resonances. Note that the presence of sinusoidal resonances ("modes") in a sounding body -- which is the basis of being able to synthesize these sounds using sums of sinusoids -- has, in general, nothing to do with the harmonic series, and is more related to modal synthesis and modal decomposition of a sounding body. Certainly this kind of understanding of the harmonic series, modes, and of sound as a sum of sinusoids, pre-dates J. Fourier's work and is arguably more relevant.

Therefore I suggest that "Theory" should begin not the Fourier Series but by discussing the harmonic series [10], and perhaps of modes in non-harmonic sounding bodes. Given that this is a sound synthesis article, begining by relating it to acoustic theory rather than Fourier theory would make sense.

Further, with regard to Fourier theory: additive *synthesis* is not necessarily to be understood as being the same as Fourier Resynthesis (although the former may be implemented in terms of the latter, i.e. Rodet's FFT^-1). A resynthesis using the DFT is often a *phase vocoder* resynthesis.

I agree that the "Implementations" section is not neutral. I think precedence should be given to systems that mix sine waves. Combining arbitrary sounds in an additive manner (i.e. mixing) is in some sense analogous to additive synthesis but unless the process yields a single coherent fused timbre in is just mixing (plenty of synthesiser have multi-layer mixed timbres, and no one confuses these as additive). Pipe organs are not strictly additive synthesis -- the "Implementations" section appears to favor what I would call "Historical precursors" as if they truly are additive synthesis. I recommend that real additive synthesis devices are listed, and then perhaps have a section on "Historical precursors".

There should be a section that maps out the chronology of sinusoidal additive synthesis: from adding individual sine waves, to hybrid analysis-resynthesis approaches such as Spectral Modelling Synthesis (perhaps listed as a "related technique"). Some thought needs to be given to whether and how to make reference to the phase vocoder.

A nit pick: I would question whether the introduction should say "sinusoidal overtones"; the usual general term is "partial", since overtone implies a fundamental frequency -- and in general, additive synthesis is just as applicable to synthesizing inharmonic timbres (e.g. gongs) as it is to synthesising harmonic tones. In any case, the introduction could just say "that creates timbre by explicitly adding sine waves (or sinusoids) together". --Ross Bencina (talk)


— Preceding unsigned comment added by Ross bencina (talkcontribs) 07:34, 12 January 2012 (UTC)

I'm glad for your excellent review. Although several points are painful for me, your fourth paragraph mentioning Pythagoras and harmony is very interesting. For describing precedents of additive synthesis prior to Fourier's theorem, several mentions of the historical studies of harmonics, and a clarification of the relation between harmony and additive synthesis, seem to be necessary.
This article (Additive synthesis), in general, tends to be biased towards the implementation side (especially by an electrical engineer), and seems to lack more essential explanations from broader viewpoints, including acoustic engineering (sound waves as a physical phenomenon) and psychoacoustics (harmonies and timbres as a mental phenomenon). These broader viewpoints must provide both the rationale for additive synthesis and criticism of its potential and limits, in my opinion.
I look forward to your great contributions, especially on these aspects. sincerely, --Clusternote (talk) 05:00, 16 January 2012 (UTC)

(please add your overall opinions under here, if possible)