Wikipedia talk:WikiProject United States Public Policy/Assessment/Archive 2


Assessment Team Introduction

First of all, Welcome! As you probably already know, this is the first time the Wikimedia Foundation has attempted to indirectly affect content in Wikipedia. This is happening through collaboration with several university classes and recruitment of Wikipedia Ambassadors both online and on campus. Your mission, should you choose to accept it, is to determine the extent to which this project improved article quality within the scope of U.S. public policy.

Early in the project, some Wikipedians helped develop a version of the 1.0 assessment metric that I am hoping we can use for this project. There are pros and cons to it, but the big advantages for this project are 1) it is quantitative, which makes comparison and many statistical tests possible, 2) it is a rubric format which is easier for new Wikipedia contributors to understand and use, and 3) it uses the existing Wikipedia article classes. Hopefully, you are willing to give it a try and provide some input, because as you may have seen on the Tech blog and The Signpost, the foundation is trying to work with the community and find partners to create a process for improving assessment within Wikipedia.

Since we have to show that we measured article quality improvement, doing so with statistics gives our conclusions more power. The first “experiment” is on the metric itself, so before deciding if we like it or hate it, let's see if it works. I designed a test in which 10 reviewers, identified as policy experts, Wikipedia experts, or both, assess the same 25 articles. This doesn't mean that everyone reviews all 25. The sampling plan is on the outreach wiki; I just need to figure out how individual reviewers can post their results in a somewhat secure way. Also, it would give the results a lot more power if we could double the number of reviewers to 20. Ideally, I would like the metric analysis finished by early October. I will, of course, report results to you and we can discuss the implications.

What the project requires is a team of article reviewers committed through the end of the project in September 2011. Using the quantitative 1.0 assessment metric, each team member will review 10-20 specific articles approximately every two months. Many of the articles are Start-Class or lower, so it shouldn't take too long to review them. These articles will be selected randomly and designated for different experiments in quality assessment. For experienced reviewers this commitment will probably take about 2 hours every two months. Data collection will occur either on a wiki page for each assessor or via a shared document in a Google Group.

These are some questions I still have:

  1. There will be reports published about these results. Do reviewers wish to be acknowledged?
  2. Is it sufficient to notify team members about assessment requests on the project assessment page?

Acknowledgement

To collect answers to the two questions above, I've made these subsections, and I'd encourage other reviewers to express their thoughts here. --RexxS (talk) 00:58, 1 October 2010 (UTC)

  • I have no need for acknowledgement, so I hereby place all of my contributions to this project, broadly construed, into the Public Domain, with no reservations. --RexxS (talk) 00:58, 1 October 2010 (UTC)

Notification

There are bots that will deliver on demand a customised message to the talk pages of a list of subscribed editors (just as WP:Signpost is delivered). I think I'd prefer the orange bar to remind me, in case I miss the notice on the project assessment page. --RexxS (talk) 00:58, 1 October 2010 (UTC)

Some questions

I'm one of the assessors, and just started on the articles I have to review; I have a couple of questions. I haven't read a whole lot of the material associated with this wikiproject, so please excuse me if these are answered on a page I should have been reading.

First, I was surprised and interested to see the article feedback tool on Equal Access to COBRA Act, the first article on my list to assess. I hadn't heard about this tool, but went ahead and evaluated it. However, I see that the criteria are different from those specified for the assessors to use. I take it then that the article feedback tool is something that the assessors are free to ignore or use as they see fit -- is that correct?

Next I noticed that the article in question has already been rated by another user, also an assessor, User:CanonLawJunkie, just yesterday. I checked CanonLawJunkie's assessment page, and that user has also been given this article to assess. When I run into this situation, I assume I should leave the first set of ratings in place, and record my own ratings on my own assessment page. Is that right? In fact, should I be only recording the ratings on my own assessment page? If nobody else has rated the article it seems harmless for me to rate it; I assume that my assessment page is the important place for me to put the ratings, though, because multiple users are going to rate the same article.

Thanks for any help. I will go ahead and add the ratings to my assessment page while I'm waiting for answers, so there's no hurry. Mike Christie (talk) 21:29, 25 September 2010 (UTC)

My understanding, Mike, is that "If you feel good about your assessment, go ahead and update the rating on the article's talk page" applies if there's not already a numerical assessment, as I'd not feel comfortable imposing my judgements over those of another reviewer. Did you check out the Wikipedia:WikiProject United States Public Policy/Assessment#Assessment Team section? I thought that gave me the clearest advice and answered many of my questions. Perhaps we reviewers could use a corner of this page to swap ideas? (as long as it doesn't nullify any of the assumptions made in setting up the process). --RexxS (talk) 22:50, 25 September 2010 (UTC)
I think this page makes sense for discussion about assessment. I am sure it is becoming clear that some things are not yet nailed down (e.g. whose assessment should go on the article when multiple people assess), so this is a good place to work out kinks in the assessment process. How do I create an archive for all the old discussion on this page? ARoth (Public Policy Initiative) (talk) 17:22, 27 September 2010 (UTC)
I've added autoarchiving with some typical parameters. Should work, but feel free to tweak it to taste. --RexxS (talk) 22:17, 27 September 2010 (UTC)
Thanks RexxS! I think there are some hopes that more assessment discussion will happen on this page, and some of the other WMF researchers are excited about the discussion here too, so later we will probably want your help organizing it into topics. As you know, I am not a very experienced Wikipedian, so I appreciate your improvements. ARoth (Public Policy Initiative) (talk) 02:37, 1 October 2010 (UTC)
You're welcome Amy, and I'm always happy to help where I can. But don't worry about trying things out yourself – "Be Bold" is one of the key principles on Wikipedia and you can't do anything that cannot be put right quite quickly. That's the beauty of a wiki! Regards --RexxS (talk) 04:07, 1 October 2010 (UTC)
I did look at it but didn't read it closely enough; thanks for pointing me back to it. Re-reading it answered my questions. Thanks. Mike Christie (talk) 12:23, 26 September 2010 (UTC)
hi Mike Christie, the Article Feedback Tool is something the Public Policy Initiative has partnered with the WMF Tech team to pilot on articles included within WP:USPP. This tool is still under development and its goal is to provide an easy way to give input for Wikipedia users who typically just read the articles. Yes, it is different from the more in-depth quality assessment we are doing. But please use it and take the survey (the link that says give feedback about this tool). Another experiment we will be doing is a comparative analysis of the results of that tool with the assessment results from this metric. If you are really interested in assessment or have some thoughts on the Article Feedback Tool, stay tuned, because WMF definitely wants community involvement from Wikipedians who are interested in improving assessment in Wikipedia. The Article Feedback Tool is just in its infancy, so if you have some strong opinions and really good reasons why it should be a certain way, you can have an effect on the next version. ARoth (Public Policy Initiative) (talk) 17:22, 27 September 2010 (UTC)

WTF?

I am very much confused now. I am based in Austria, and I am seeing the following notice in my watchlist:

A team is forming to test Wikipedia's article assessments, as part of the Public Policy Initiative. Interested article reviewers can sign up now.

A link at the end of the second sentence sent me here, but what I found here is very different from the expectations raised by that notice:

  • This is a subpage of a project related exclusively to the United States. (Not even a hint of a connection to the US in the edit notice, and apparently it's being shown to editors from everywhere.)
  • Although this is not made clear even on the page itself, it appears that this is about assessment only of articles related to public policy.
  • The words Public Policy Initiative, which mean nothing to me, are used on the page in a way that does not contribute to enlightening me, and there appears to be no link to any article or project page that would give further information.

Please, people: Think before broadcasting something to editors worldwide, and make sure that those who are not actually part of the target audience have a fighting chance of finding this out in a reasonable amount of time. Thank you. Hans Adler 08:10, 30 September 2010 (UTC)

To clarify the combined effect of the watchlist notice and this page: First I conjectured that apparently there was some entity external to Wikipedia, called the Public Policy Initiative, which has approached the Wikimedia Foundation and is now trying to find out how consistent Wikipedia's article assessment is. (Since someone concerned with public policy might have an interest in the quality of a huge open-content encyclopedia, this does not seem entirely implausible, and it's the only explanation that seemed to fit the watchlist notice.)
Now it looks as if the Public Policy Initiative is a taskforce of WikiProject United States Public Policy and is merely going to reassess the articles in its direct area of interest??? Hans Adler 08:17, 30 September 2010 (UTC)
Hi, Hans. I'm sorry for the confusion. The watchlist link points to the section of the page that has this, which should provide some context for those who haven't already read about the Wikimedia Foundation's Public Policy Initiative:

"For more context on these experiments, see "Experiments with article assessment" from The Signpost, 2010-09-13. For an overview of the Public Policy Initiative in general, see "Introducing the Public Policy Initiative" from the 2010-06-28 issue."

In short, this is a pilot that, while specifically focused on US public policy, is intended to lay the groundwork for outreach programs that will be useful and relevant throughout the Wikimedia community. There are a number of editors involved who aren't from the U.S. and don't have a particular interest in US public policy; the subject area just happens to be the testing ground, but we wouldn't have used a watchlist notice if we weren't interested in drawing the attention of editors from around the world. Perhaps you'd like to suggest better wording for the notice and/or the landing page that would make this more clear? Cheers --Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 13:38, 30 September 2010 (UTC)
Ah, nevermind, I see you've already done so and it has been updated. I agree, the new notice is an improvement. Again, sorry for the confusion.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 13:42, 30 September 2010 (UTC)

Question about Typical Assessment in Wikipedia

Does Wikipedia kind of self-regulate on article assessment by typically requesting that Wikipedians with expertise in the subject area are the ones to assess article quality? Or is assessment typically done by Wikipedians who have experience in assessment? ARoth (Public Policy Initiative) (talk) 19:28, 6 October 2010 (UTC)

Apart from the GAN and FAC processes, which exist in a formal framework, quality assessment of Wikipedia articles is not intrinsic. That is, members of WikiProjects assess articles according to their own consensus and experience, and record that assessment within a WikiProject banner on the talk page. The intention, of course, is to classify articles within the project scope, and help editors decide what articles they may wish to improve. There is a (more or less) generally agreed set of B-Class criteria, and guidelines are often available to help decide between Stub/Start/C-Class. There's an example I assembled at WP:WikiProject Scuba diving/Assessment, if you want to get an idea of how a small project works. There are some very large and structured WikiProjects: WP:WikiProject Military history/Assessment has considerable guidance, a FAQ, Requests for assessment, and even a Backlogs section. The answer to your questions, Amy, is that any editor is qualified to assess articles, but it is usually regarded as good collaborative editing to make assessments on behalf of WikiProjects according to the guidelines for that project. HTH --RexxS (talk) 20:59, 6 October 2010 (UTC)
To add to what RexxS said, the initial assessment for all active projects was done primarily by a handful of interested members of the relevant Project. Beyond that, the only criteria for assessment are that you have to know that the assessment even exists on the talk page and how to change it, which in practice limits assessment to people who know what they're doing, both subject-wise and assessment-wise. Nifboy (talk) 21:36, 6 October 2010 (UTC)
If the article is C-class or lower and belongs to a WP, then the procedure is done by the members of that WP, and they may or may not be experts on the matter. GA (Good Article) and FA (Featured Article) candidates must be nominated at Wikipedia:Good article nominations for GA, and Wikipedia:Featured article candidates for FA. They must meet the Wikipedia:Good article criteria for GA, and Wikipedia:Featured article criteria for FA. When a GA or a FA nomination is accepted, consensus must be reached that it meets the GA or the GA+FA criteria respectively, but all editors (registered users only) are eligible to review it. The GA process has a guideline on how to apply the GA criteria. (See: Wikipedia:Reviewing good articles.) Some WPs, such as Wikipedia:WikiProject Aviation, Wikipedia:WikiProject Chemicals, Wikipedia:WikiProject Chemistry, Wikipedia:WikiProject Cricket, Wikipedia:WikiProject History, Wikipedia:WikiProject Mathematics, Wikipedia:WikiProject Military history, Wikipedia:WikiProject Ohio, Wikipedia:WikiProject U.S. Roads, among many others, also have a process for discussing and determining whether an article that belongs to those projects should be given A-class status, a class that is above GA (the GA nomination must be passed first) but below FA.
I think that if we could add some experts to the decision procedure for FA candidates, especially for the comprehensiveness, sourcing, neutrality, and readability areas, and if some experts could edit articles in their subject area in a planned rather than random way, then we could achieve excellent quality across the encyclopedia. One way would be to involve them, and this is possible because, in case you do not know, there is a kind of disease that runs through Wikipedia called wikipediholism, or obsession, or addiction to Wikipedia ~ hahaha! In this way, many "common" editors would be helped by [the presence of] experts, and in turn the experts would be helped by experienced editors with the syntax and techniques of the wiki software.
There is a really interesting article on Wikipedia, Reliability of Wikipedia. I think you'll find the Assessments section very interesting, if you have not read it before of course.
That's all for now. "Drop in" whenever you need to. Take care of you both and all the very best! –pjoef (talkcontribs) 20:38, 6 October 2010 (UTC)
I appreciate the responses to my question about assessment; they help my understanding. I think the level of experience inherent in assessing, especially at the GA and FA levels, is part of the reason Wikipedia has improved so dramatically in reliability. Thanks Pjoef for the reference to Reliability of Wikipedia. It has some great info related to this work, and you can bet some of those sources will appear in my literature reviews when we present the research from this project in various places; it is a great time saver to be given references. ARoth (Public Policy Initiative) (talk) 18:09, 8 October 2010 (UTC)

Preliminary Data Analysis from first Assessment

I am excited to report that I did some preliminary data analysis based on the assessment results of Mike_Christie, Pjoef, Nifboy, CasualObserver'48, RexxS, Cordless_Larry, Ronk01, and Bejinhan. (Thank you, thank you, thank you – all of you are amazing.) I ran Levene's test; unfortunately Wikipedia's description is not that great, but there is a better one here. This test evaluates the null hypothesis that variances are equal between samples. In this case the samples are the articles, and we are testing whether the variances in scores for all articles are approximately equal. That would mean that even with different assessors, the scores using the quantitative metric are fairly consistent. The result from Levene's test indicates that we can treat the variances as the same. This is great news for the research: it means that Wikipedians at least have a similar idea of what constitutes article quality, and that idea is captured consistently by the metric we are using. Because I am a data nerd, I have to follow up this promising news with a disclaimer: this result reflects only the small part of the data gathered so far, things could change, and it will be very interesting to compare with the public policy experts' results. ARoth (Public Policy Initiative) (talk) 19:28, 6 October 2010 (UTC)
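For readers unfamiliar with the test, here is a minimal sketch of the kind of homogeneity-of-variance check described above, using SciPy. The score lists are invented for illustration; they are not the project's actual assessment data.

```python
# Hypothetical per-article score lists: each list holds the rubric totals
# that different reviewers gave one article. These numbers are made up.
from scipy.stats import levene

article_a = [12, 13, 11, 12]
article_b = [25, 27, 24, 26]
article_c = [18, 17, 19, 18]

# Levene's test: null hypothesis is that the variances of the groups
# are equal. A large p-value means we cannot reject that hypothesis,
# i.e. reviewers apply the metric with a similar spread per article.
stat, p_value = levene(article_a, article_b, article_c)
print(f"W = {stat:.3f}, p = {p_value:.3f}")
```

With real data, each group would be one article's scores across all of its reviewers, and a non-significant result would support the consistency claim made above.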

Requesting feedback about the Article Feedback Tool

I just started a page for discussing the reader-facing feedback tool that was turned on a few weeks ago for articles in WikiProject USPP: Wikipedia:Article Feedback Tool. Please play around with the tool, if you haven't already, and share your thoughts.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 19:00, 7 October 2010 (UTC)

Actually, this page is the one for discussing the tool. The point still stands: we're looking for feedback and views on it.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 21:25, 7 October 2010 (UTC)

Second pass done; some comments

I've just completed the second pass of assessments, and I thought I'd comment here about a change I've made in response to the threads above. First, I added an "assessed version" link -- it's evident that this is going to be valuable to Amy, and I think that we should all try to do something like that from now on. Second, I have changed my assessment to be a little more lenient on neutrality (in particular) and on readability and formatting (to a lesser degree). I had been giving low scores in neutrality for a stub because the article doesn't cover multiple POVs; I've changed that approach to rating only the neutrality of what is actually there, and take marks off only if it's evident that there is a missing viewpoint or other NPOV violation. For formatting I was, for example, regarding it as a problem if there were no separate lead, since there should be one for all articles; however, the stubs don't have them. I compared my old and new scores for the three articles I was reassessing, and I found that the higher-scoring article received exactly the same score, but the lower two articles went up by a couple of points. I think this is reassuring: the scoring rubric appears to be pretty stable and to lead to repeatable results. Mike Christie (talk) 10:33, 8 October 2010 (UTC)

Part 2 completed and a problem

I've concluded the second part ... so you can copy my assessment reviews from ... I'm kidding ~ lol.

The problem is this: it may happen that the assessors will review different versions of the same article. Yesterday, for example, I came across Fly America Act. But if you compare this version (8 October 2010; 2,435 bytes) with this other version (17 September 2010; 11,924 bytes) you will find that they are very different. This is not something that happens very often, but it can happen. –pjoef (talkcontribs) 13:07, 8 October 2010 (UTC)

In other words, if we want more homogeneous data, then I think we all should review the same revision of an article. –pjoef (talkcontribs) 13:28, 8 October 2010 (UTC)

What I've been doing is to look at the article history and find the diff representing the article as it was on the given date (1 October for this round). I've noted that diff on my assessment page for each article (as 'assessed version'). In many cases, there's little change in the articles, but in a few - as you say - it can be significant. To give Amy's statistical techniques the most sensitivity, we should endeavour to assess the same version of the articles, and I'd encourage all reviewers to adopt a similar strategy. --RexxS (talk) 13:24, 8 October 2010 (UTC)
Yes, that was exactly what I did when I first saw my assessment page. The first article to assess was Executive Order 11478 (1 July 2010). So I looked at the article history and I found that it was last updated on 28 December 2009!!! That does not work! Some articles have multiple edits on the same day! Give the link to the revision instead of the title; I think it is simpler. Isn't it? –pjoef (talkcontribs) 13:36, 8 October 2010 (UTC)
Just to note that I made some comments regarding the version of the article used at User talk:ARoth (Public_Policy_Initiative). Cordless Larry (talk) 14:21, 8 October 2010 (UTC)
I've just read it. Thank you, Larry! –pjoef (talkcontribs) 22:18, 8 October 2010 (UTC)

Also, note that while I'm reviewing an article, I also work on it (whenever it is possible and appropriate) by adding citations, moving or adding sections, wikifying tables and templates, repairing broken links, etcetera. If we work on fixed revisions, that will no longer be possible, of course. –pjoef (talkcontribs) 13:53, 8 October 2010 (UTC)

Cordless Larry brought up this same issue. The date will make a difference in the analysis; I am just so excited to have help, I don't want to nag. I think the version date will become even more important in future assessments. In the future, I will set up a link to the desired version, which would probably help. Also, coming out of the conversation with Cordless Larry, I am going to put an update on what is happening with the research on the Assessment tab weekly or bi-weekly. That being said, remember I will be gone for a few weeks very shortly, but LiAnna is the communications associate for the project (she has done an incredible job) and she will be filling in for me. ARoth (Public Policy Initiative) (talk) 17:54, 8 October 2010 (UTC)
Pjoef, one of the goals of the project was to bring more attention and improvement to articles within the scope of public policy so I think it's great you are improving them as you go. If I make links to the version date, then any editing you do should not be an issue for the other assessors. ARoth (Public Policy Initiative) (talk) 17:54, 8 October 2010 (UTC)
It's my pleasure! I saw other assessors did about the same thing. –pjoef (talkcontribs) 22:18, 8 October 2010 (UTC)

First conflict with WP 1.0

Like Mike, I've altered my stance on interpreting some of the rubric. I accept, reluctantly, that a one-sentence stub could get a high score on sourcing (although the caveat of 'most appropriate sources' keeps me short of full marks). I now think the description in the rubric "Score based on adequacy of inline citations and quality of sources relative to what is available" is at odds with the level descriptors, which do not consider the last qualification. Similarly, the one-line stub can be completely readable, but does that fit with the description "Score based on how readable and well-written the article is"? (my emphasis in each case).

Of more concern is this sentence from WP:NPOV:

This means representing fairly, proportionately, and as far as possible without bias, all significant views that have been published by reliable sources

I still fail to see how an article that does not present any published significant views can comply with our NPOV policy, particularly when it is wholly, or largely, unsourced.

I've now found two articles that translate into grades different from WP 1.0. In one case, a wholly unsourced article has received a Start-Class grade from USPPI, whereas WP 1.0 Start-Class grade requires "enough sources to establish verifiability". In another case, an article has been graded as B-Class on the translation from USPPI rubric, but is grossly lacking in inline citations and has an uncited quotation, failing B-1: "The article is suitably referenced, with inline citations where necessary". In addition, it fails B-5 and B-6, as it contains only one (largely irrelevant) image which has no alt text.

I accept that I have a very strict view on sourcing and referencing, and I don't want to seem too negative, but sources must be seen to underpin the text of all Wikipedia articles. --RexxS (talk) 20:27, 9 October 2010 (UTC)

I think, based on the descriptions of the levels, that "Verifiability" would be a better name for the "sourcing" category; after all, adding sourcing leads to completeness + neutrality, while verifiability improves the quality and reliability of the text but doesn't necessarily add anything. It depends on whether we want categories to be discrete or to reflect a kind of "distance to FA", wherein completeness, sourcing, and neutrality necessarily feed into each other. (edit) All that having been said, I do think, especially in the case of stubs, that there is a certain floor to how few sources we really need for sourcing and neutrality, as described in the notability guideline. With respect to NPOV, I think the Public Policy Project is actually going to be an outlier in that only an article with super-high completeness scores can be said to really be neutral, since there are so many POVs to cover for any given issue. Nifboy (talk) 21:43, 9 October 2010 (UTC)
The translation scheme can be easily changed, and that wouldn't even really disrupt anything. I think you're right that Start should be defined as at least sourcing = 1. I'll change that soon, if there are no objections. The B-class mis-rating you point to is definitely trickier. It might make sense to add illustrations = 1 to the B-class translation logic. But the sourcing requirements for B-class aren't really geared toward the overall sourcing level or the distance to FA-class sourcing, especially with the specification that only "important or controversial material which is likely to be challenged" requires inline citations. The solution there, I think, is to add the class directly. B-class criteria are rarely checked as closely as they should be, and there are many listed B-class articles that don't meet the strict criteria, but in cases where people recognize specific deficiencies that don't get picked up by the USPP scheme, a manually-added class will override the translated one.--Sage Ross - Online Facilitator, Wikimedia Foundation (talk) 20:28, 11 October 2010 (UTC)
Thanks, Sage. It's probably best to collect together a number of comments from these rounds before you start making changes to the translation. I'm just flagging things up as soon as I notice them (because I'm a forgetful animal). Have a good look at WP:BCLASS, WP:GACR, and WP:FAC when you're ready to make changes. You'll see they have several common threads. Even FA criterion 1c says "Claims are verifiable against high-quality reliable sources and are supported by inline citations where appropriate", and the last phrase is linked to WP:When to cite, effectively repeating the requirement for inline citation only where text is likely to be challenged or for direct quotations. I think you'll find it's not too difficult to match the USPPI rubric to the descriptors used for B-Class, GA, or FA. The classes (other than GA/FA) are defined by individual WikiProjects anyway, and there is no compulsion for one project's classification to match that of another, so I wouldn't worry too much about it. --RexxS (talk) 20:49, 11 October 2010 (UTC)
RexxS, I don't think comments like this are negative. We are exploring an assessment method with the purpose of generating consistent quantifiable results that align with the FA/GA principles and capture the 1.0 Assessment policies. So there will be issues that come up and we, as an assessment team, need to correct them and figure out the best way to measure article quality. If the metric does not jibe with established Wikipedia article quality standards we need to change the metric not Wikipedia. As far as the metric specifics go, I defer to you and other experienced Wikipedians on the language and policies. I think you have a good point that we should sort of collect the issues and address them all at once, what is the best way to keep track of issues as they arise, and then address them later? ARoth (Public Policy Initiative) (talk) 23:35, 12 October 2010 (UTC)
The easiest way, Amy, is to just make another subpage of the project – server space is cheap – and just copy or summarise stuff connected with translating the rubric scores into Wikipedia grades. WikiProject United States Public Policy/Assessment/Translating grades would work if you want to create it (or use your preferred title). It will have a talk page where folks can argue, and you'll eventually find that it gets turned into something usable that you can later copy back into the main rubric page. Don't worry, you have a good group of editors interested in the project and they'll thrash out whatever detail is needed. And of course, nothing is set in stone, and it will improve incrementally – that's the wiki-way :) Cheers --RexxS (talk) 00:10, 13 October 2010 (UTC)
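To make the translation discussion above concrete, here is a small sketch of what a score-to-class translation function could look like. The thresholds, category names, and the specific rules (Start requiring sourcing of at least 1, B requiring an illustration, a manually-added class overriding the translated one) are illustrative assumptions drawn from the suggestions in this thread, not the project's actual scheme.

```python
# Hypothetical rubric-to-class translation. All thresholds are invented
# for illustration; only the override and sourcing/illustration rules
# echo the suggestions discussed above.
def translate(scores, manual_class=None):
    """scores: dict of rubric categories (e.g. 'completeness',
    'sourcing', 'neutrality', 'illustrations') to integer values."""
    if manual_class is not None:
        # A manually-added class overrides the translated one.
        return manual_class
    total = sum(scores.values())
    # Start requires at least minimal sourcing, per the suggestion above.
    if total < 6 or scores.get("sourcing", 0) < 1:
        return "Stub"
    if total < 12:
        return "Start"
    if total < 18:
        return "C"
    # B additionally requires at least one illustration.
    if scores.get("illustrations", 0) >= 1:
        return "B"
    return "C"

# An unsourced article stays at Stub even with a moderate total score.
print(translate({"completeness": 3, "sourcing": 0, "neutrality": 2,
                 "readability": 2, "illustrations": 0}))  # Stub
```

The point of the sketch is the shape of the logic, not the numbers: any rule the team agrees on (such as "B needs illustrations = 1") becomes one extra condition, and the manual override handles articles the automatic scheme misjudges.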
Also, on the second pass I've noticed most people shy away from evaluating the War on Drugs article. Is there an edit war on it? What are the implications of assessment when an article is currently undergoing heavy editing? ARoth (Public Policy Initiative) (talk) 20:35, 13 October 2010 (UTC)
There's not an edit war -- if you look at the history page you'll see that it's not all that active; most recent edits are by pjoef, one of the assessors. I assessed it myself, so I am not the right person to answer your question, but a possible explanation is that big articles just take longer to assess, and people may be leaving it until last for that reason. Mike Christie (talk) 23:49, 13 October 2010 (UTC)
Basically that, plus the fact that I am somewhat opinionated about the subject and thus it took me a while to set aside my own POV to review it. Normally when I check for sourcing I check every single reference used, and War on Drugs has over a hundred; I ended up not checking all the sources, primarily because I concluded the article had much bigger problems in terms of neutrality and tone. The fact that I have a POV sympathetic to the article's actually made it easier, once I got down to the nuts and bolts of it, to conclude that as written it wasn't going to change any minds. Nifboy (talk) 00:10, 14 October 2010 (UTC)

Staying in the loop

Round 1.2 was finished on my part too. It resulted in a recent aside (at the top), which is relevant to discussions here, if you were not otherwise aware of it. Regards, CasualObserver'48 (talk) 09:58, 14 October 2010 (UTC)