Talk:Decision boundary


Untitled

I wrote this article to replace what was basically a substub, but I don't know much about this topic, so I've tagged it for expert review. (I'd never even heard of support vector machines until now; I just mentioned them because they were mentioned in the original version of the article.) I also don't know what a decision space is, or how it relates to the concept of a decision boundary, but perhaps it ought to be merged somewhere (here, for example)? —User:Caesura(t) 19:57, 28 November 2005 (UTC)

As far as I can tell, a decision space is just a 3-dimensional hyperplane belonging to a 4-dimensional hyperspace, which then separates 4-dimensional objects into two groups. I am only starting to study this field, so this may be completely off.

This article is wrong

I cite the universal approximation theorem: http://en.wikipedia.org/wiki/Universal_approximation_theorem

The article reads "If it has one hidden layer, then it can learn problems with convex decision boundaries (and some concave decision boundaries). The network can learn more complex problems if it has two or more hidden layers." This is not true. A feedforward neural network with a single hidden layer and one output layer can approximate any continuous function on a compact domain to arbitrary accuracy, given enough hidden units, so it is not limited to convex decision boundaries. More layers may be more efficient, more elegant, or otherwise desirable, but they are not strictly needed. — Preceding unsigned comment added by 70.162.89.24 (talk) 23:02, 26 February 2014 (UTC)
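
To make this concrete, here is a minimal sketch (assuming Python with scikit-learn; this is only an illustration, not something from the article) showing that a feedforward network with a single hidden layer can fit a clearly non-convex decision boundary, using the two-moons toy problem:

from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

# Two interleaving half-circles: each class occupies a non-convex region,
# so no convex decision boundary can separate them.
X, y = make_moons(n_samples=500, noise=0.1, random_state=0)

# A single hidden layer of 32 tanh units, trained with the default solver.
clf = MLPClassifier(hidden_layer_sizes=(32,), activation="tanh",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))  # typically very close to 1.0 on the training data

This is of course only one example rather than the full universal approximation theorem, but it is enough to see that a single hidden layer is not restricted to convex boundaries.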