Wikipedia talk:WikiProject Administrator/survey

Numerical versus open-ended

Sorry about any edit conflict I may have caused. I'm off the survey now. Changing some of the questions into numerical ratings seems useful, thanks. I would like to see opportunities for input, comments, and suggestions also included, although I know those can't be tabulated as such. ChildofMidnight (talk) 19:35, 5 October 2009 (UTC)

Traditionally, open-ended questions are stuck at the end, but I agree that they are needed. I'm going to fiddle with it a bit more--Tznkai (talk) 19:37, 5 October 2009 (UTC)
If we want to do both closed-ended (CE) and open-ended (OE) questions in one go, we need a way to ensure that the same number of people answer all questions. Can Surveymonkey or similar sites handle skip patterns? There's not a lot of need to ask for specifics when people are answering 'neither agree nor disagree'--it's only the ends of the scale that are important (e.g. on a 1-10 agree/disagree scale, you want to drill down for answers from anyone who answers 1-3 or 7-10; people answering 4/5/6 don't care so much about the issue). → ROUX  19:48, 5 October 2009 (UTC)
It would seem that Surveymonkey can, but there is a cost associated with it. However, I believe WMF has a Surveymonkey account. I'll ask Bastique about it. → ROUX  19:51, 5 October 2009 (UTC)
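As a rough illustration of the skip rule described above, here is a hypothetical Python sketch (this is not Surveymonkey's actual mechanism; the 1-3 and 7-10 cutoffs are simply the ranges mentioned in the comment):

  # Hypothetical sketch of the skip pattern described above: on a 1-10
  # agree/disagree scale, only answers at the extremes trigger the
  # open-ended follow-up; middling answers (4-6) are not probed further.
  def needs_followup(answer: int) -> bool:
      """Return True if this scale answer should trigger an open-ended question."""
      return answer <= 3 or answer >= 7

  for answer in (2, 5, 9):
      print(answer, "->", "ask follow-up" if needs_followup(answer) else "skip")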

One of the major problems in both Wikipedia and market research is what colour to paint the bike shed. The more open-ended the questions are, the easier it is for there to be varying interpretation of the results and concomitant argument about which thing said by whom in response to question 7 is more important. This is a chronic problem on Wikipedia. If we first assess numerically how many people feel how strongly about a given statement, we will then have a clear path towards gathering more nuanced data in a much more focused way. Examples:

Q: What do you think about the current page protection policy?
A: I hate it because someone protected a page I was trying to fix with real facts
A: It's good except when admins protect the pages I want to work on
A: Great policy, works fine
A: It's inconsistently applied
A: All articles should be semiprotected

...and so on. How do you untangle those answers to find a clear path? It's very, very difficult, and people are paid significant sums of money to analyse them and pick out the trends. Trust me--I was one of them. Now, contrast:

Statement: The current page protection policy is:

                   Disagree                Agree
  Too restrictive  1  2  3  4  5  6  7  8  9  10
  Inconsistent     1  2  3  4  5  6  7  8  9  10
  Effective        1  2  3  4  5  6  7  8  9  10

So let's say a plurality of people answer '8' for 'Inconsistent'. We now know that many people find the protection policy inconsistently applied. At that point we can start drilling down with open-ended questions to get more granularity in the data, because we are then asking specific questions about how the application of policy is inconsistent. It suddenly becomes much easier to identify trends in the data--which we could theoretically then convert to another set of agree/disagree scales in order to come up with specific actions and proposals for community approval. Quantitative data is essential for identifying the actual problems, followed by qualitative questions which expand on the numerical answers given. In the context of an in-person or telephone interview, these things are handled programmatically by flowcharts or computer software--e.g., someone answers '8' to question 7, which triggers a skip pattern to then drill down for a qualitative/open-ended answer or clarification. That would be somewhat unreasonable to do here, so two rounds of polling will be necessary--one to clearly identify areas of concern, and a second to get more nuanced input and, as Tznkai says, suggestions for solutions. But the actual problems need to be identified first. → ROUX  19:44, 5 October 2009 (UTC)
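To make the two-round approach concrete, here is a small, purely illustrative Python sketch (the statements come from the example above, but the response numbers are invented, not real survey data): it tabulates first-round 1-10 answers per statement and flags the statements where answers cluster at the 'agree' end for open-ended follow-up in round two.

  # Illustrative only: tabulate first-round 1-10 answers per statement and
  # flag the ones where most respondents cluster at the "agree" end, marking
  # them for the second, open-ended round of polling.
  from collections import Counter

  responses = {
      "Too restrictive": [3, 4, 5, 6, 2, 5, 4],
      "Inconsistent":    [8, 9, 7, 8, 8, 6, 9],
      "Effective":       [5, 6, 5, 7, 4, 6, 5],
  }

  for statement, answers in responses.items():
      mode, _ = Counter(answers).most_common(1)[0]
      high_share = sum(1 for a in answers if a >= 7) / len(answers)
      flag = "drill down in round two" if high_share >= 0.5 else "no clear concern"
      print(f"{statement}: mode={mode}, {high_share:.0%} answered 7+ -> {flag}")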

Is a 1-5 or 1-10 scale more appropriate in your opinion?--Tznkai (talk) 19:54, 5 October 2009 (UTC)
1-10 allows for more nuanced responses; 1-5 is easier to tabulate and provides more clarity in terms of having fewer choices. Overall I'd go with 1-10, as the clusters become more self-evident--e.g., people who would select '3' on a 1-5 scale might well select 4 or 6 on a 1-10, giving a better view into the trends. → ROUX  19:56, 5 October 2009 (UTC)
It doesn't allow a "no opinion" response, though. I assume 1-9 scales just confuse people?--Tznkai (talk) 20:03, 5 October 2009 (UTC)
Hmm, that would be a good idea. Make it 0-10, with 0 labeled as no opinion. Reduces noise in the data. → ROUX  20:07, 5 October 2009 (UTC)
Good in theory, but upon reflection, I don't think the use of 0 for anything other than "least agreement" will pan out. I was referring to the 3 in a 1-5 scale being the balance point of "neither agree nor disagree". Perhaps an additional "No answer" option?--Tznkai (talk) 20:11, 5 October 2009 (UTC)
That does make sense; most of the research I worked on was in the context of face-to-face or telephone interviews, so people could just say "I don't know" and it could be coded as such. Yes, a 'no opinion' option would be better than 0 in that case. → ROUX  20:12, 5 October 2009 (UTC)
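A minimal sketch of the coding point in this exchange, assuming the answers arrive as free-text strings (the data here is made up): 'no opinion' is coded as missing rather than as 0, so it is excluded from the mean instead of dragging it toward 'least agreement'.

  # Hypothetical data: code "no opinion" as None (missing) rather than 0 so
  # it does not pull the average toward the "disagree" end of the scale.
  raw = ["7", "no opinion", "4", "9", "no opinion", "6"]

  coded = [None if r == "no opinion" else int(r) for r in raw]
  numeric = [c for c in coded if c is not None]

  print("valid responses:", len(numeric), "of", len(coded))
  print("mean agreement:", sum(numeric) / len(numeric))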

Construction principles

First off, kudos to ChildofMidnight for kick-starting this, and I hope to have continued help from him/her/it (don't know which). I think we should do the following if possible:

  • Develop more questions first, then remove redundant questions later
  • Focus on general concerns (are administrators competent?) rather than specific incidents (is Tznkai competent?)
  • Prefer quantifiable data to qualitative...
  • ...but make room for open-ended suggestions
  • Avoid words that have gained specialized meaning when possible (vote, !vote, civility, personal attacks, alphabet soups)

Feel free to add more as we go.--Tznkai (talk) 19:59, 5 October 2009 (UTC)

In order: yes, yes (Tznkai is competent), yes, and yes--at the very end. I've asked at User talk:Bastique re: Surveymonkey, which would make this process a lot easier. → ROUX  20:03, 5 October 2009 (UTC)
Added one.--Tznkai (talk) 20:14, 5 October 2009 (UTC)

Distribution

I did look at Surveymonkey.com, and was able to sign up for the free account. I see it has limitations, though. I wouldn't be able to use one of their templates; it would have to be built from scratch. I don't know if there are other limitations in regard to size or number of participants. Is Surveymonkey going to be the distribution method? — Ched :  ?  00:04, 7 October 2009 (UTC)

I'm not sure yet. Will work on it, or you can.--Tznkai (talk) 14:28, 7 October 2009 (UTC)