Talk:Structure tensor


Error on 3D structure tensor image

It seems that there is some confusion between the images for the 3D structure tensor. The description for the surfel image seems wrong, with the eigenvalue relation being the one for the line, and conversely for the line below. The ellipsoid images on the right also seem mixed up, or am I missing something? — Preceding unsigned comment added by 185.116.129.142 (talk) 14:59, 24 March 2023 (UTC)
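For reference, the eigenvalue relations usually quoted for the 3D case (with λ1 ≥ λ2 ≥ λ3) are
<math display="block">
\text{surface-like (surfel) neighbourhood: } \lambda_1 \gg \lambda_2 \approx \lambda_3 \approx 0, \qquad
\text{line-like neighbourhood: } \lambda_1 \approx \lambda_2 \gg \lambda_3 \approx 0,
</math>
since the gradients across a surface patch all point along one normal direction, while the gradients around a line span the whole plane orthogonal to it. This may help when checking which caption belongs to which image.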

Conceptual Explanation Request

IMO this article (like many others in its class) is in desperate need of a more in-depth conceptual summary. It's wonderful that we have these exact mathematical descriptions, but the concepts for understanding how some of these things work do not require a degree in math. However, *reading* about those concepts in these articles *does*. --Andy (talk) 21:26, 24 June 2011 (UTC)

paper?

  • This article seems to be written like an academic paper, and is therefore not very encyclopedic. The original author or some other party should attempt to modify the article to make it read more like an encyclopedic text. CB Droege 19:55, 21 September 2006 (UTC)
  • The purpose of the page is to serve both as an introduction to and a tutorial on structure tensors. I appreciate the feedback; nevertheless, this was not a published academic paper, and the subject matter is geared especially to those needing help with structure tensors for computer vision in a reference, i.e. encyclopedic, fashion. I am open to specific suggestions as to how to make it "...read more like an encyclopedic text" other than adding a history section. Thanks again for the feedback. S. Arseneau, 22 September 2006
    • This, then, is the problem with the article. It is a well-done article, but Wikipedia is a place for encyclopedic articles, not tutorials or instructions. The article needs some work before it is appropriate for this context. CB Droege 14:09, 25 September 2006 (UTC)
Not fully wikified but (arguably) looking better and good enough until edited? Rich257 20:19, 25 September 2006 (UTC)

Copied from net doc?

Fixed incomplete definition

The definition of the structure tensor in this version of the article was incomplete and misleading. The eigenvalues of the matrix S, as defined in that version, are simply |∇I|² (the square of the gradient modulus) and 0; the associated eigenvectors are the direction of the gradient and the same rotated 90 degrees. Thus that "structure tensor" is simply a complicated way to express the gradient (up to the sign of its direction), and the coherence index is simply "gradient != (0,0)".
The structure tensor makes sense only when that matrix is integrated over some neighborhood; and then it summarizes the distribution of gradient directions within that neighborhood.
I have fixed that definition; hopefully it is correct now. I also did some general cleanup of the article; I hope I did not lose anything important.
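To spell out the computation (standard linear algebra, not a quotation from any version of the article): the pointwise matrix is the rank-one outer product
<math display="block">
S_0 = \nabla I \, (\nabla I)^{\mathsf{T}} = \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix},
\qquad \lambda_1 = I_x^2 + I_y^2 = |\nabla I|^2, \quad \lambda_2 = 0,
</math>
with eigenvectors along ∇I and perpendicular to it. Only after averaging S_0 over a window can the smaller eigenvalue become nonzero and carry information about the spread of gradient directions.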
--Jorge Stolfi (talk) 06:26, 20 August 2010 (UTC)

Removed passage on coordinate invariance

I removed this sentence, since it does not seem understandable to readers who do not already know what it means: "A significant difference between a tensor and a matrix, which is also an array, is that a tensor represents a physical quantity the measurement of which is no more influenced by the coordinates with which one observes it than one can account for it." The matrix S obviously depends on the coordinate system.
--Jorge Stolfi (talk) 06:26, 20 August 2010 (UTC)

Removed passage on tensor addition

I removed this paragraph and picture, since they do not seem to be understandable to readers who do not already know what they mean: "[[Image:TensorAddition.png|thumb|Tensor addition of sphere and step-edge case]]Another desirable property of the structure tensor form is that the tensor addition equates itself to the adding of the elliptical forms. For example, if the structure tensors for the sphere case and step-edge case are added, the resulting structure tensor is an elongated ellipsoid along the direction of the step-edge case."
--Jorge Stolfi (talk) 06:26, 20 August 2010 (UTC)

Can the coherence index be defined on uniform regions?

The coherence index was defined in this version of the article as 0 when the two eigenvalues were zero, that is, when the gradient was uniformly zero within the window. However, the formula for the general case does not have a definite limit when λ1 and λ2 both tend to 0, so any definition there is equally arbitrary. Essentially, such a region can be regarded as totally isotropic or totally coherent, or anything in between, depending on what value one chooses to assign to 0/0.
That version also stated that "[the coherence index] is capable of distinguishing between the isotropic and uniform cases." However, when λ1 = λ2 > 0, the first case of the definition yields 0, the same as the second case.
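For reference, the formula in question is commonly written as
<math display="block">
c = \left( \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} \right)^2 ,
</math>
(the exact form varies between authors). Along the line λ2 = 0 the value is 1 no matter how small λ1 is, while along λ1 = λ2 it is identically 0, so the limit at the origin indeed depends on the path; and the value is 0 whenever λ1 = λ2 > 0, as noted above.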
Pending clarification, I have removed this claim and merely noted that "some authors" define the index as 0 in the uniform case.
--Jorge Stolfi (talk) 06:40, 20 August 2010 (UTC)

Name "Second moment matrix" ambigous/improper? edit

How standard is the name "second moment matrix"? I ask because the name is used in other areas, such as statistics and mechanics, but the meaning does not seem to be the same. Or is it? --Jorge Stolfi (talk) 00:19, 21 August 2010 (UTC)

  • The term "second-moment matrix" is a frequently used terminology in computer vision, because of an interpretation of the second-moment matrix in terms of second-order spectral moments of the Fourier spectrum. Formal statements about this can be found in the book by Lindeberg (1994) and the papers by Lindeberg and Garding (1996, 1997) cited among the references. Tpl (talk) 08:05, 21 August 2010 (UTC)Reply

The multi-scale structure tensor

Yesterday, I complemented this article with a description of the multi-scale structure tensor/second-moment matrix. I was, however, somewhat surprised by the way this text has been edited, with almost nothing left from the original text. In the revised article, there were also several statements that are incorrect and appear to be based on misunderstandings concerning the properties of this descriptor. Thus, it appears as if the revisions were not based on an understanding of the technical contents in the cited references. In the current version, I have reformulated this section with specific emphasis on explaining aspects of this theory that may not have been fully clear to the author of the revisions. Please let me know if the current text is more self-contained.

When editing articles on Wikipedia, it is good manners to keep important material from other authors and not to delete others' material without a very good understanding of the contents. Tpl (talk) 08:15, 21 August 2010 (UTC)

  • Sorry for that, but the original text was rather hard to understand.
    One problem with the original description is that its notation differed from that used in the rest of the article. It also seemed unnecessarily complicated, and failed to give the intuition behind the math.
    From any operator one can define a "multi-scale" version in an infinite number of ways. As I understand it, the "multiscale structure tensor" has three steps: (1) filter the image with some kernel h_s, (2) compute the pointwise tensor matrix (the outer product of the gradient with itself), and (3) filter this tensor field with some other kernel w_r. (A rough code sketch of these three steps is given at the end of this reply.) The original text left the two radii r, s independent. However, if the parameter s is merely the radius of h_s, then shrink+filter+expand with a fixed-radius kernel h is equivalent to filtering with an s-scaled h_s. Moreover, a Gaussian is theoretically a good choice, but in practice one must use approximate discrete kernels, and compute the multiscale decomposition recursively by filtering with a fixed kernel h and then downsampling by a fixed ratio at each stage. That is, the first scale parameter s is better understood as simply the resolution of the digital image, or the level in an image pyramid, rather than a parameter of the filter h. This formulation has the advantage of forcing s to be truly a scale parameter, i.e. it excludes filters h_s that depend on s in a more complicated, non-scale-like way.
    It also seems more natural to specify the filtering scale s and the ratio r/s, rather than r and s separately. (Note that if r << s the result is rather uninteresting.) But then, in the shrink+filter+expand formulation, the ratio r/s need not be mentioned explicitly, as it is already implicit in the choice of the mother (scale-independent) kernels h and w.
    In practice, in fact, one should omit the final 'expand' step unless strictly necessary, since it merely wastes a lot of space without performing any useful computation. That is another argument for handling the "multiscale" aspect by image scale reduction, rather than by parametrizing the structure operator. (And this observation holds for most other "multiscale operators".)
    Note also that h could be a band-pass filter rather than a low-pass one; that is, at each scale one analyzes detail at that scale only, and not at any larger or smaller scales. (This is another common interpretation of the term "multiscale", e.g. in wavelet analysis.) Yet in that case one would still probably want to use a Gaussian window w for integration.
    So, I believe that my formulation in terms of shrink+filter+tensor+integrate+expand with scale-independent (but completely arbitrary) mother kernels h and w is mathematically equivalent to your formulation with two kernels depending on two parameters --- but is more parsimonious, and easier to understand.
    But I am not going to fight with you on this matter.
    All the best, --Jorge Stolfi (talk) 22:58, 22 August 2010 (UTC)
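    The following is only a rough sketch of the three steps mentioned above, assuming Gaussian kernels for both h_s and w_r; the function and parameter names are mine, not the article's notation:
<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_structure_tensor(image, local_sigma, integration_sigma):
    """Two-kernel formulation: smooth, form the gradient outer product,
    then integrate each component over a window (all kernels Gaussian here)."""
    # Step 1: filter the image with the local-scale kernel h_s.
    smoothed = gaussian_filter(image, local_sigma)

    # Step 2: pointwise outer product of the gradient with itself.
    gy, gx = np.gradient(smoothed)
    jxx, jxy, jyy = gx * gx, gx * gy, gy * gy

    # Step 3: integrate the tensor field with the window kernel w_r.
    sxx = gaussian_filter(jxx, integration_sigma)
    sxy = gaussian_filter(jxy, integration_sigma)
    syy = gaussian_filter(jyy, integration_sigma)

    # Components of the 2x2 symmetric structure tensor at every pixel.
    return sxx, sxy, syy
</syntaxhighlight>
    In the pyramid view advocated above, one would instead downsample the image between levels and keep the local smoothing fixed, so that the only free parameter per level is the ratio of the integration scale to the image resolution.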

References to specific pages in references

When referencing material from a rather extensive book, I included specific page numbers to make it possible for others to find the specific statements that are relevant for this article. This explanatory text was, however, removed by a previous editor. Does anyone know about a better way of inserting explicit page and section references, e.g. of the form (Author 2010; section 9.5), when referencing a particular section or page in a book? Tpl (talk) 08:15, 21 August 2010 (UTC)

  • Sorry about that, too. Page and section references can often be better obtained from the book index and table of contents, or (for online reading) with search tools; so the value for readers who may want to check them should be weighed against the cost of cluttering the reference list with extra entries.
    An alternative to creating a separate <ref>...</ref> is the "rp" template: the call {{rp|ch.23}} after the </ref> generates a superscript annotation, as in [1]:ch.23. Hope it helps, --Jorge Stolfi (talk) 23:17, 22 August 2010 (UTC)

Anisotropy is too abstract

The direction of the gradient varies in the neighborhood of a pixel at a curved edge. Is it better to talk about curvature instead of anisotropy? The formula for curvature can be easily found from the distribution of gradients. See, for example, the Documentation tab of the Outliner project. --Wladik Derevianko (talk) 21:32, 2 May 2011 (UTC)

Curvature is only one aspect of anisotropy. If there are variations in the direction/orientation of the gradient, it may also be related to, e.g., the presence of noise or of two or more lines/edges in the neighborhood. --KYN (talk) 07:32, 3 May 2011 (UTC)

Typo

"If we keep the local scale parameter s fixed [...]" should be "If we keep the local scale parameter t fixed [...]" — Preceding unsigned comment added by 92.230.48.68 (talk) 23:07, 8 March 2012 (UTC)Reply

Is it a tensor?

This matrix seems not to be a proper tensor in the sense of obeying rotational transformation rules. Anyone care to explain otherwise? — Preceding unsigned comment added by 132.3.33.81 (talk) 16:01, 1 October 2013 (UTC) Following that thought, the article begins "in mathematics", yet is entirely focused on image processing applications -- is there any reference to a formal treatment of this topic outside of computer graphics? — Preceding unsigned comment added by 132.3.33.80 (talk) 16:19, 1 October 2013 (UTC)

It does not strictly satisfy the expected transformation properties of a proper tensor. But note that it is, in principle, constructed as the outer product of the image gradient with itself and, hence, forms a second-order covariant tensor. This is then modified by computing a local average, typically weighted by a Gaussian kernel. As a result, the structure tensor no longer transforms as a proper tensor with respect to scaling of the coordinate system. However, it transforms like a tensor with respect to rotation transformations(!), and this is what counts for the applications where it is used. To be useful also for various image scales, the structure tensor can be applied to a scale space, and this is done in some applications. Haven't seen it used in computer graphics though. --KYN (talk) 18:09, 1 October 2013 (UTC)
Thanks -- I had trouble proving out the rotational transformation but I've got it now. Rotation invariance should be noted in the article, though I obviously lack the expertise to work on it. — Preceding unsigned comment added by 132.3.33.79 (talk) 20:09, 1 October 2013 (UTC)
The rotation transformation relies on the Gaussian kernel (called w in the article) being circularly symmetric, something that is not mentioned in the initial definition of the structure tensor in the article. --KYN (talk) 20:23, 1 October 2013 (UTC)
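To make the rotation behaviour explicit (a standard calculation, not taken from the article): if the coordinates are rotated, x' = Rx, the gradient transforms as ∇I' = R ∇I, so the windowed tensor satisfies
<math display="block">
S'_w(x') = \int w(x' - y')\, \nabla I'(y')\, \nabla I'(y')^{\mathsf{T}} \, dy' = R\, S_w(x)\, R^{\mathsf{T}},
</math>
provided the window w is rotationally symmetric, so that w(x' − y') = w(x − y). Under a rescaling of the coordinates, by contrast, both the gradient magnitude and the effective window size change, which is why the simple tensor transformation law fails there.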

Possible error in equation?

Is there an incorrect equation in the section Complex Version?

The expression of   (in terms of   and  ) seems incorrect to me:

In my opinion it should be:

 

instead of

 

as is stated there. — Preceding unsigned comment added by Cocus (talkcontribs) 11:52, 26 February 2018 (UTC)