Wikipedia talk:Wikipedia Signpost/2018-06-29/Recent research


Discuss this story

  • I don't have access to the AfD sentiment analysis paper, but I'd be curious how robust those findings are. If they're strong enough, we could theoretically use such analysis to attempt to detect potentially improper closes. I don't think that's a good idea (at least at the individual level), so perhaps it's just something to be aware of. I wonder what other discussions have a large enough sample set that similar analyses could be attempted? RfA springs to mind, but I'm not sure the numbers are there; it would be interesting to see whether sentiments have changed over the years. ~ Amory (utc) 14:39, 30 June 2018 (UTC)
The sample size of RfA is so small in recent years (since 2012) that it would not produce any usable results. The only major change in that time is that RfAs have slowly warped into yet another platform for a lot of discussion about the process and adminship in general. RfA remains the Wild West of Wikipedia. Kudpung กุดผึ้ง (talk) 00:35, 3 July 2018 (UTC)
What other result would be possible except that positive expressions correlate with desires to keep? What classes of arguments for keep are there except that the subject is notable/the article is good/the article does meet policy? Or, for delete, that the subject is not notable/the article is not good/the article does not meet policy? I don't see how any of this could affect judging the quality of closes, especially considering closes aren't supposed to be a mere numerical count of votes. It would identify those closes where the close did not match the sentiments most expressed, but that's not an indication that the close is bad; in fact, it's the usual situation for AfDs contaminated by single-purpose accounts (and similarly for RfAs). DGG ( talk ) 00:32, 6 July 2018 (UTC)
One of the many reasons why I think it'd be a Bad Idea™ to do so. Regardless, even though it's unsurprising, it's noteworthy that they can actually detect a difference. ~ Amory (utc) 01:01, 6 July 2018 (UTC)
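For readers curious what a sentiment-versus-outcome check of the kind discussed above might look like, here is a minimal sketch. It is purely illustrative and not the method used in the paper: the word lists, threshold, and function names are assumptions invented for the example, and, per DGG's point, a flagged mismatch would at most be a pointer for human review, never evidence of a bad close.

    # Illustrative only; not the paper's method. The toy lexicon and the
    # simple sign-based threshold below are made-up assumptions.
    from typing import List

    POSITIVE_WORDS = {"notable", "keep", "reliable", "sourced", "improve"}
    NEGATIVE_WORDS = {"delete", "fails", "unsourced", "spam", "promotional"}

    def sentiment_score(comment: str) -> int:
        """Toy lexicon score: +1 per positive token, -1 per negative token."""
        words = [w.strip(".,!?") for w in comment.lower().split()]
        return (sum(w in POSITIVE_WORDS for w in words)
                - sum(w in NEGATIVE_WORDS for w in words))

    def close_matches_sentiment(comments: List[str], close: str) -> bool:
        """True if a 'keep' close coincides with net-positive sentiment
        (or a 'delete' close with net-negative sentiment)."""
        total = sum(sentiment_score(c) for c in comments)
        return (total > 0) == (close == "keep")

    # Example: a discussion leaning negative that was nonetheless closed as
    # keep returns False, i.e. a mismatch worth a second (human) look.
    afd = ["Fails notability, delete", "Unsourced promotional spam, delete",
           "Subject is notable and sourced, keep"]
    print(close_matches_sentiment(afd, "keep"))  # False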
  • The review of "On the Self-similarity of Wikipedia Talks: a Combined Discourse-analytical and Quantitative Approach" is unintelligible. (In partial mitigation, I hasten to add that the paper it reviews [1] rates extremely high on the gobbledygook index.) I would have expected the purpose of these reviews to be to give nonspecialist readers at least an inkling of the import of the work reviewed despite (as I will hazard is the common case) prior ignorance of such terms as web genre and dialogue theory; in this it fails spectacularly. And by the way, what does it mean for a paper to be "thoroughly structured"? How do figures that "support and underpin the findings" differ from figures that simply support the findings, or underpin the findings?
Also, I thought a Wikicussion was what you get from beating your head against the wall arguing with someone who just doesn't get it. EEng 14:47, 5 July 2018 (UTC)