Talk:Training, validation, and test data sets

Merge

There is absolutely no value added in having two separate articles, Training set and Test set, when neither can be discussed alone. The concept is training and test sets, with references to information science, statistics, data mining, biostatistics, etc. Currently the two articles are near duplicates (or could be, based on the available information). Can we imagine any information for either that is not relevant to the other? Sda030 (talk) 22:53, 27 February 2014 (UTC)

I agree they should be merged. Both articles say as much in their introductions. Prax54 (talk) 04:03, 10 January 2015 (UTC)
Merger done, some rewrites needed. Prax54 (talk) 15:55, 20 June 2015 (UTC)

Totally agree with the suggestion - training set, testing set and validation set are all parts of one whole and should be presented in one topic. (MM-Professor of QM & MIS, WWU-USA)

synonym "discovery set"

A training set is also called a discovery set, right? (See for example <DOI: 10.1056/NEJMoa1406498>.) Perhaps a link should be created so that looking up "discovery set" redirects to here. Now, "discovery set" just gets a bunch of mostly-irrelevant search results. 73.53.61.168 (talk) 11:17, 13 December 2015 (UTC)

"Gold standard"

I have seen the term "gold standard" used in a few places in connection with articles about machine learning. The page Gold standard (disambiguation) says that in statistics and machine learning, a gold standard is "a manually annotated training set or test set". What does it mean for a test set to be manually annotated? And is "gold standard" perhaps a term important enough to be mentioned in this article? —Kri (talk) 16:00, 19 January 2016 (UTC)

Remove GNG template

Lots of mentions in the ML literature. Wqwt (talk) 20:52, 22 March 2018 (UTC)

Claim that the meaning of test and validation is flipped in practice

It's not clear in _whose_ practice these terms are flipped. In lots of posts by recognized practitioners (e.g. [1]) they're not flipped. — Preceding unsigned comment added by FabianMontescu (talk • contribs) 19:35, 21 September 2018 (UTC)

The traditional meaning of validation is described at Software_verification_and_validation. 130.188.17.16 (talk) 19:21, 9 November 2022 (UTC)
See https://stats.stackexchange.com/questions/525697/why-is-it-that-my-colleagues-and-i-learned-opposite-definitions-for-test-and-val Ain92 (talk) 16:46, 30 March 2023 (UTC)

References

In several areas of science, e.g. in bioinformatics, the test set is used during the development of software or the training of a model. The validation is done on a completely different data set, similar to the validation of a hypothesis or a theory elsewhere in science. For instance, in genomics, while the training and test sets would come from one cohort of patients, the "validation", such as discovery of the same variants, would be done with an entirely different cohort coming from a different study. For 20 years I used training, test, and validation data sets that way. I was utterly baffled when I discovered that the modern deep learning community had decided otherwise. And the confusion is still here. See the illustrations of Cross-validation (statistics). NicGambarde (talk) 14:48, 5 January 2024 (UTC)
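A minimal sketch of the distinction described above, assuming scikit-learn and two purely hypothetical patient cohorts (the array names and data are illustrative only, not from the article or any cited study):

    # Cohort A: development study; cohort B: an entirely separate study (both hypothetical).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X_cohort_a, y_cohort_a = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
    X_cohort_b, y_cohort_b = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)

    # Usage described above: "training" and "test" sets both come from cohort A.
    X_train, X_test, y_train, y_test = train_test_split(
        X_cohort_a, y_cohort_a, test_size=0.25, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)
    print("test accuracy (within cohort A):", accuracy_score(y_test, model.predict(X_test)))

    # "Validation" then means evaluating on the independent cohort B. In the deep
    # learning convention, the held-out split above would instead be called the
    # validation set and cohort B would be called the test set.
    print("external validation accuracy (cohort B):", accuracy_score(y_cohort_b, model.predict(X_cohort_b)))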

Sampling Methods

Should this page make some reference to the way in which data is sampled, i.e. split into training/validation/test sets? This article is written in the context of machine learning, and when training/validation/test sets are sampled from the main data source, this is often done either randomly or in a stratified way. I think that this is worthy of mention in this article, even if not in detail.

aricooperdavis (talk) 14:18, 13 November 2018 (UTC)
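A minimal illustration of the two sampling approaches mentioned above, assuming scikit-learn and synthetic data (the array names and split proportions are illustrative only):

    # Random vs. stratified three-way split with scikit-learn.
    import numpy as np
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = rng.integers(0, 2, 1000)  # class labels; stratification preserves their proportions

    # Purely random split: 60% training, 20% validation, 20% test.
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

    # Stratified split: pass the labels via `stratify` so each subset keeps
    # roughly the same class proportions as the full data set.
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=0, stratify=y)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=0, stratify=y_rest)

Stratification mainly matters when the classes are imbalanced or the data set is small.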

Earliest source for this method

I can't seem to find, here or elsewhere, the earliest source for this method. It seems the holdout method was proposed by Highleyman in 1962, and cross-validation separately by Stone in 1974, but the combination of those two methods resulting in the train/validation/test split has yet to be credited to one person. Is that correct? The earliest source cited here is the Bishop book from 1995, but I don't think he is the one responsible for proposing this in the literature.

Out-of-sample redirects here, but...

... no explanation is given, let alone in bold. Can someone please rectify? Thanks. 92.27.180.78 (talk) 20:39, 3 May 2020 (UTC)