Talk:Feature (computer vision)

Latest comment: 3 years ago by Wikiedit738 in topic Introductory definition of the term 'feature'

Introductory definition of the term 'feature'

I find the current introductory definition, "a feature is a piece of information which is relevant for solving the computational task related to a certain application", to be far too broad and vague. This definition covers more or less any piece of information on any topic in any context. For example, the big O complexity of the computational task of sorting a database table is "a piece of information which is relevant for solving the computational task related to a certain application", but it is obviously not a feature for the purposes of this article.

In my view the introductory definition should contain the following aspects: 1) the term 'feature' (as intended in this article) relates to images and analogous data structures, 2) the term relates to the content of an image, as opposed to meta-data, 3) a feature is typically (though certainly not always) either present or absent, and 4) a feature is typically (though not always) local.

Accordingly I propose to change the introductory definition to: "a feature is a piece of information about the content of an image, typically about whether a certain region of the image has certain properties". This seems to cover the content of the article quite well. However, I don't consider myself to be expert enough in this field to make the change myself. Could anyone else weigh in? Wikiedit738 (talk) 09:54, 12 August 2020 (UTC)

expert tag

  • This article is tough to read for beginners in the subject, so perhaps it should not be rewritten but divided into sections. Lincher 04:24, 5 December 2005 (UTC)

Other tags needed as well

From my viewpoint this article is too abstract. Although I count myself as an expert in the field, I have trouble finding the article informative. From my viewpoint, the article appears to give a very specific and narrow view of the notion of features. Moreover, the reference list is too unbalanced. Tpl 12:26, 18 September 2006 (UTC)

Much better now

I think that this article has become much better after the rewrite. To make it easier to search for this article in the Wikipedia search tool, and to have a better analogy with the companion articles on edge detection, corner detection, blob detection and ridge detection, one suggestion would be to rename this article to feature detection. Then, however, a redirect would have to be changed to the article on feature extraction. (I do not know how to do that.) From my viewpoint, it would be OK to remove the expert tag from this article. Tpl 15:13, 19 September 2006 (UTC)

I also believe that the article is improved. About renaming, since the article is (or at least was intended to be) about features in general and not about detection in particular, this would make the name too narrow. This is and has been the argument why I also believe that the other articles mentioned above should be renamed by dropping "detection". Also, I don't understand why it becomes easier to find the article if we rename it. --KYN 07:03, 20 September 2006 (UTC)
Concerning naming, if you type "feature" in Wikipedia's search tool, you arrive at a disambiguation page. This page does not even mention "Feature (Computer vision)". Expecting an unfamiliar reader to type the latter is, in my opinion, too much to ask. Of course, one could modify the disambiguation page somewhat.
Precisely. --KYN 19:01, 20 September 2006 (UTC)

Still, however, since the article in its current form describes precisely the notion of "feature detection", I think that a corresponding naming would be appropriate.

Well, until the recent changes it was intended to cover the general idea of features as they are used in computer vision, not detection in particular. This is still my intention. To this end, I have made some preliminary headings which will discuss other aspects of features than the detection part. Also, please note the discussion on detection versus extraction/estimation. --KYN 19:01, 20 September 2006 (UTC)

I do not, however, agree on dropping "detection" for the articles on "edge detection", "corner detection", "blob detection" and "ridge detection". The use of "detection" is well established in the field. Tpl 15:48, 20 September 2006 (UTC)

As I have already said, the issue is not about being well established or not, it is about the topic of the articles. I am simply requesting that these articles be made more general than only the detection part which, however interesting it may be, is only one part of the story. --KYN 19:01, 20 September 2006 (UTC)
Since Wikipedia is an encyclopedia, my opinion is that it should focus on notions that are reasonably well established in the field.
Here we agree.
A main advantage of writing the articles in terms of detection, as they are now, is that there is a direct correspondence between well established and working operational definitions and methods for feature detection.
And again. However, see below.
If one aims at a more general description, I think that there is a major risk that the article would be too abstract, as the article Feature (Computer vision) was before the major rewrite (and still is).
Yes there is such a risk, but we are bold, are we not? On the serious side, I agree that the general idea of a feature as it is used in computer vision is somewhat slippery. I have looked in the set of textbooks which I have available and they invariably avoid giving a satisfactory definition of what a feature is. Despite this, most of them use the concept extensively. What can be reported in this article is how this concept is being used.
Also, I would like to counter and say that the other articles which we have discussed are at risk of being too technical, since they almost exclusively provide the reader, whom we should assume to have only minimal knowledge in the field, with a list of detection methods without a proper introduction on why they should be used in the first place. In short, lack of context and overview can also be a problem.
Since computer vision is a non-trivial subject, there is a danger in simplifying the topic too much. To answer precisely WHY a method or a concept should be used is a very hard question. I would describe the different notions that have been developed over the years as tools. These tools have been demonstrated to work on different problems of varying complexity. The overall question of how to design a general purpose vision system is still open and will remain open for the foreseeable future. Possibly, for specific well-defined problems, one can answer why a specific type of feature can be expected to be useful and/or have comparative advantages over other types of feature. If you would try to answer these questions for an uninitiated reader without any context knowledge, however, I'm afraid that you may run a high risk of running into problems. Tpl 17:27, 21 September 2006 (UTC)
Concerning feature maps on the other hand, there is also a rather well established theory based on differential invariants that one could describe as well. From my point of view, I find it better to start from concrete and existing (working) theories in the field than from more open-ended outlines that can be criticized as close to speculation.
Can you be more specific about which speculations you have encountered?

Concerning feature hierarchies, I agree that descriptors of different complexity can be constructed by using multiple concepts together. However, I have not been able to find any extensive support for the claim that feature hierarchies are well established in the field. Tpl 17:27, 21 September 2006 (UTC)

For the area of "feature maps", I could make an outline. For the more general topic of "Feature (Computer vision)" on the other hand, I'm more sceptical. Tpl 07:37, 21 September 2006 (UTC)

Detection, estimation and extraction

Another aspect that needs to be discussed is the distinction between feature detection and feature extraction or feature estimation. In my view, detection implies a classification process: either the feature is there or not. A feature detector gives you the answer yes or no. Some features, however, are more complex and call for a more sophisticated representation than a boolean variable. For example, line/edge features can be described in terms of their orientation, phase and certainty of presence. This information can conveniently be encoded into a matrix or tensor such as the structure tensor. Personally, I would not refer to the process of computing such a description as "detection", but rather as "estimation" or possibly as "extraction". On the other hand, if we use a Canny approach, we get a boolean statement in each pixel about edge presence, hence "detection" is appropriate.
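To illustrate the distinction being drawn here, a minimal sketch of structure tensor estimation (not an implementation from any particular textbook; the function name and smoothing scales are my own choices, using numpy/scipy):

```python
import numpy as np
from scipy import ndimage

def structure_tensor(image, sigma=1.0):
    """Estimate the 2x2 structure tensor at each pixel.

    Returns the smoothed gradient products (Jxx, Jxy, Jyy). The
    eigenstructure of this tensor encodes local orientation and a
    certainty of presence, i.e. richer information than the boolean
    edge/no-edge answer a detector would give.
    """
    # Image gradients (x = axis 1, y = axis 0)
    Ix = ndimage.sobel(image, axis=1, mode="reflect")
    Iy = ndimage.sobel(image, axis=0, mode="reflect")
    # Smooth the outer products of the gradient over a neighborhood
    Jxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Jxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Jyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    return Jxx, Jxy, Jyy
```

Computing such per-pixel tensors is naturally described as "estimation"; a Canny-style detector would instead reduce this information to a yes/no label per pixel.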

I agree that the detectors described under edge detection, corner detection, blob detection and ridge detection imply that local decisions are made whether a feature of a certain type exists at a certain image point. This is the meaning of "detection" and with this additional motivation I insist that "detection" should be kept in the naming of these articles. I do also agree that there are other approaches to computing features, for example in terms of non-binary (scalar or vector-valued) feature maps. The second-moment matrix / structure tensor is an example of such a descriptor. At your home page at Wikipedia, I saw that you are working on an article on the structure tensor. If you release this article, others could complement it. Nevertheless, I would find it better to describe feature map approaches in another article (or several other articles) than the feature detection articles. As the feature detection articles are now, they are quite well developed and describe state-of-the-art approaches. I think that it is better to have focused articles that give good and updated descriptions of well-defined topics than general articles that aim at too much and will remain incomplete relative to the claims. If you would like an overview article on these notions, I think that feature map would be a reasonable name. This article could then refer to the article on the structure tensor. From such a viewpoint it would also be better to rename the current article Feature (computer vision) into feature detection, provided that the current forward reference from feature detection is removed. Tpl 15:59, 20 September 2006 (UTC)
To avoid the problem with different viewpoints on what should be meant by a feature, I have moved the specific material on precisely feature detection to a specific article on this topic. In the article on Feature (Computer vision), I have replaced the corresponding material with a reference to that article. In this way, I think that we solve the problem of "detection" and you can continue to develop the notion of "feature maps". Tpl 16:32, 20 September 2006 (UTC)


Delete this page?

I would suggest removing this page. feature_detection and feature_extraction already exist, and I think that spreading features over three pages is excessive and introduces artificial distinctions. I'm also not convinced that feature detectors are binary. Some of the most common (e.g. Harris, DoG) are real-valued and are binarized only by thresholding at some point. Others, e.g. FAST, have a threshold built in and are definitely binary. Also, with Harris, the computation of the second moment matrix is required, so the extraction of this comes for free. As such, I think that the extraction of information about the local feature cannot be completely separated from the detection process, since in some cases they are so closely linked. Another example is edge orientation. This comes "for free" as part of the Canny edge detector, since it has to be computed anyway in order to perform nonmaximal suppression. Therefore this information is already freely available as part of the detection process. While "estimation" may be a better term, "detection" is the more commonly understood term. I think it should be kept.

I think that it would be better to describe feature map in terms of feature detection. I think detection is more natural and obvious as a first step (as well as being an early topic in many CV books). Feature maps are a quite natural generalization and are computed using exactly the same techniques (modulo a threshold).

Serviscope Minor 17:01, 20 September 2006 (UTC)
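The point above, that detectors like Harris are real-valued and become binary only through thresholding, can be sketched as follows (a hedged illustration of the standard Harris corner measure, not the exact formulation of any particular article; the function name and parameter defaults are my own):

```python
import numpy as np
from scipy import ndimage

def harris_response(image, sigma=1.5, k=0.04):
    """Real-valued Harris corner response R = det(M) - k * trace(M)^2,
    where M is the Gaussian-smoothed second-moment matrix. Note that
    computing M (the "extraction" part) is an unavoidable intermediate
    step of the detector."""
    Ix = ndimage.sobel(image, axis=1)
    Iy = ndimage.sobel(image, axis=0)
    Mxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Mxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Myy = ndimage.gaussian_filter(Iy * Iy, sigma)
    det = Mxx * Myy - Mxy ** 2
    trace = Mxx + Myy
    return det - k * trace ** 2

# Binarization only happens at the very end, as a threshold on the
# real-valued response map:
# corners = harris_response(img) > threshold
```

So "detection" here is one final thresholding step on top of an estimation pipeline, which supports the claim that the two cannot be cleanly separated.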

Suggestion to a compromise: How about developing the notion of feature maps in a separate article to begin with? Then, when it has matured and reached the standards of the feature detection articles, we could try to merge the contents if we find that appropriate. With such a solution we avoid starting the process by messing up articles that today are in quite a reasonable shape. Tpl 17:59, 20 September 2006 (UTC)

Exactly what do you mean by a "feature map"? This is a common notion in methods for neural networks like Kohonen's self-organizing feature map. --KYN 07:35, 21 September 2006 (UTC)
What I had in my mind was maps of differential invariants. The most frequently used descriptors are the gradient magnitude, the gradient direction, the Laplacian, the determinant of the Hessian, the rescaled level curve curvature, the main principal curvature of the Hessian matrix and the components of the second-moment matrix mu mapped to invariant form P = mu_xx + mu_yy, Q = sqrt((mu_xx - mu_yy)^2 + 4 mu_xy^2), arg(C, S) = arg(mu_xx - mu_yy, 2 mu_xy).
May I ask you what technical material you have planned for the headers you have outlined for this article? Tpl 10:36, 21 September 2006 (UTC)
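The differential-invariant maps listed above can be computed per pixel from Gaussian derivatives; a minimal sketch (function name, derivative scale and integration scale are my own illustrative choices):

```python
import numpy as np
from scipy import ndimage

def invariant_feature_maps(image, sigma=1.5):
    """Compute some of the differential invariants mentioned above:
    gradient magnitude, Laplacian, determinant of the Hessian, and the
    rotationally invariant forms P and Q of the second-moment matrix."""
    # Gaussian derivatives L_x, L_y, L_xx, L_xy, L_yy
    # (order is given per axis; axis 1 is x, axis 0 is y)
    Lx = ndimage.gaussian_filter(image, sigma, order=(0, 1))
    Ly = ndimage.gaussian_filter(image, sigma, order=(1, 0))
    Lxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Lxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    Lyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    grad_mag = np.hypot(Lx, Ly)
    laplacian = Lxx + Lyy
    det_hessian = Lxx * Lyy - Lxy ** 2
    # Second-moment matrix mu, smoothed at a larger integration scale
    mu_xx = ndimage.gaussian_filter(Lx * Lx, 2 * sigma)
    mu_xy = ndimage.gaussian_filter(Lx * Ly, 2 * sigma)
    mu_yy = ndimage.gaussian_filter(Ly * Ly, 2 * sigma)
    # Invariant forms: P = mu_xx + mu_yy,
    # Q = sqrt((mu_xx - mu_yy)^2 + 4 mu_xy^2)
    P = mu_xx + mu_yy
    Q = np.sqrt((mu_xx - mu_yy) ** 2 + 4 * mu_xy ** 2)
    return grad_mag, laplacian, det_hessian, P, Q
```

Each returned array is itself an image, one value per pixel, which is exactly the "feature map" / "feature image" idea discussed in this thread.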
First of all, I am not sure about putting this stuff into separate headings, they can also be presented in one integrated text. Anyway:
  • Feature image: Explain the idea that we can use and produce images, functions of spatial (and temporal) variables, which in each pixel hold information about features related to a specific image neighborhood. This idea probably lies very close to what you refer to as "feature maps"? The name "feature image" is well established; for examples, see the books by Bernd Jähne.
  • Feature descriptor: Explain the fact that a specific image feature (which can be described in terms of gray value variations) can be represented in different ways, in terms of different types of descriptors. Some descriptors are booleans, others are scalars, vectors or tensors. Exactly which representation or descriptor you choose depends both on the image feature and on the application. Example: an edge can be represented by a yes/no descriptor which only tells you if the edge is there or not, or by means of a structure tensor which also gives you certainty and orientation.
I have always heard feature descriptor used in a completely different context. That is, it is used to describe the extracted and processed local image patch around the detected feature, e.g. SIFT, image patches (maybe with rotation/scale/affine correction), N-jets, etc. The value of the descriptor is also often referred to as a feature vector. Serviscope Minor 17:21, 21 September 2006 (UTC)
Do you have a reference at hand? I would like to understand if and in what way there really is a difference. Either way, it may be that the heading should be Feature representation instead, and cover what is stated above.
Sure. In the PCA SIFT paper, they describe the descriptors (various manglings of gradient (histograms) of image patches around the DoG maximum) as feature vectors. 128-dimensional vector for SIFT and 35-D for PCA-SIFT. I could dig out a few more if you like. Serviscope Minor 19:20, 22 September 2006 (UTC)
  • Feature space: one well established application of features is to compute several different features (e.g. from a region around each image point) and organize the result as one single vector, a feature vector. This vector is then used for further processing, typically in a process which classifies each pixel.
  • Feature hierarchy: In order to obtain information of high complexity or abstraction, it is a well established practice that features are extracted in several steps. First, low-order features such as edges are extracted and these are further processed to obtain information about, e.g., corners or curvature. The idea that certain features are (or can be) defined in terms of other features should be presented, since it has a practical consequence for how the estimation processing is implemented. Maybe there is a better name than "feature hierarchy", which is described in Granlund & Knutsson's book? There is also a strong relation between this idea and scale, since features extracted at a higher level, i.e., with more estimation steps, typically refer to a larger neighborhood.
Something along these lines anyway. --KYN 12:07, 21 September 2006 (UTC)
I can accept the notion of "feature images" instead of "feature maps". This is not a problem. Concerning topics 2 and 3 on your list, please let me wait until you have developed these notions in more detail. Concerning feature hierarchies, however, I'm sorry to say that I'm more sceptical (see comments above). Tpl 17:27, 21 September 2006 (UTC)
I only said that the practice of processing features in several steps is well established, not the name. What about "multi-level processing" or "feature extraction over several steps"? --KYN 18:21, 21 September 2006 (UTC)

As I see it, the outline of this article, as it now stands, is very much focused towards representations like the double angle. Correct? Tpl 18:26, 25 September 2006 (UTC)

It mentions features like color, motion, texture, etc. I don't understand your question. Can you please be more specific? --KYN 20:34, 25 September 2006 (UTC)