Wikipedia:Version 1.0 Editorial Team/Selection trials

This page is an archive which chronicles the tests run by the SelectionBot. Early bot work was done by User:MartinBotII; when that bot became unavailable for 1.0 work, the task was taken over by User:VeblenBot.

First trial

A pilot trial was conducted using articles from chemistry, physics, medicine and mathematics, and generated these results.

This pilot trial used the simple algorithm described below:

The rating of an article is its quality rating multiplied by its importance rating. Articles with a rating of 20 or higher are automatically included in the release version. (At this threshold, that means Top-importance articles of at least Start class, High-importance articles of at least Start class, and Mid-importance articles of at least GA class; no Low-importance articles qualify.)

Score                  Importance
                       Top (7.5)   High (6)   Mid (4)   Low (2.5)
Quality
  FA (7.5)             56.25       45         30        18.75
  A (6.5)              48.75       39         26        16.25
  GA (6)               45          36         24        15
  B (4.5)              33.75       27         18        11.25
  Start (4)            30          24         16        10
  Stub (2)             15          12         8         5

NOTE: the minimum rating can be increased to get better articles (at the expense of quantity) or decreased to get more articles (at the expense of quality).
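
To make the rule concrete, here is a minimal sketch of this multiplicative selection rule in Python (the numeric values are those from the table above; the function and variable names are illustrative only, not the bot's actual code):

  # Sketch of the multiplicative selection rule (illustrative, not SelectionBot code).
  QUALITY = {"FA": 7.5, "A": 6.5, "GA": 6, "B": 4.5, "Start": 4, "Stub": 2}
  IMPORTANCE = {"Top": 7.5, "High": 6, "Mid": 4, "Low": 2.5}

  def rating(quality, importance):
      """Article rating = quality score times importance score."""
      return QUALITY[quality] * IMPORTANCE[importance]

  def selected(quality, importance, threshold=20):
      """An article is included when its rating meets the minimum rating."""
      return rating(quality, importance) >= threshold

  # Example: a B-class, High-importance article rates 4.5 * 6 = 27 and is included.
  print(rating("B", "High"), selected("B", "High"))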

The test was considered a great success - it generated a viable selection of articles.

January 2007 trials

Results:

Trial running now (slowly), banding the links-in "score" (the average importance of the links in) into four bands, roughly matching the other system's values (anything with an average below 20 gets a score of its links-in rating divided by 10). I'm using a multiplier of 0.9 for all projects again, and the scores themselves are being printed to the bot tables. Due to a slight hiccup on my part, I'm not running a trial on the chemistry project right now, but will rectify the situation when I can. For WP:maths we've got a lot more articles, so the threshold of 20 may need to be raised. I'm just waiting for the others to finish now... (we're up to Physics - the last and longest one). Martinp23 21:23, 8 January 2007 (UTC)


February 2007 trials

Initial test

A pilot trial was conducted using articles from chemistry, physics, medicine and mathematics, and this generated the results accessible here. This pilot trial used the simple algorithm described below:

The rating of an article is its quality rating multiplied by its importance rating. Articles with a rating of 20 or higher are automatically included in the release version. (At this threshold, that means Top-importance articles of at least Start class, High-importance articles of at least Start class, and Mid-importance articles of at least GA class; no Low-importance articles qualify.)

Score                  Importance
                       Top (7.5)   High (6)   Mid (4)   Low (2.5)
Quality
  FA (7.5)             56.25       45         30        18.75
  A (6.5)              48.75       39         26        16.25
  GA (6)               45          36         24        15
  B (4.5)              33.75       27         18        11.25
  Start (4)            30          24         16        10
  Stub (2)             15          12         8         5

NOTE: the minimum rating can be increased to get better articles (at the expense of quantity) or decreased to get more articles (at the expense of quality).

The test was considered a great success - it generated a viable selection of articles.

Proposed full-scale test

One major problem remains before the bot can be used to generate large lists: we need to be able to judge importance reliably across a wide spectrum of articles. We will need to compensate for (a) the importance level of the project itself (e.g., USA vs. Texas vs. Dallas) and (b) the assessment practices of the project concerned (e.g., whether it rates 1 or 100 articles as "top"). The latter isn't a problem now, but a few projects may try to cram lots of their articles into our releases if there isn't a check built in from the start.

Two (more refined) algorithms have been proposed:

Option A (Multiplicative scale)

See the full discussion here. This approach takes the multiplication scale outlined above and refines it to take account of the importance of the individual WikiProject.

  • 2.2, world in general (if there is one)
  • 2, continents
  • 1.9, regional blocs like the European Union
  • 1.8, major countries (top fifth percentile GDP)
  • 1.8, science, history, arts, entertainment, etc. (general projects)
  • 1.6, moderately important countries (40th-80th percentile GDP or bigger than and including Iraq)
  • 1.3, minor countries (everything else larger than and including Andorra)
  • 1, global cities
  • 0.9, each area of science, history, sports, etc. (major, e.g. Chemistry or Football) (definition of major? >=90,000,000 Google hits?)
  • 0.8, each area of everyday life (major, e.g. Train or Trees, singular, >150 million Google hits)
  • 0.7, each area of science, history, sports, etc. (minor, e.g. Developmental psychology, everything not major)
  • 0.7, tiny countries like Monaco and major cities (1,000,000+ population or a global city, debatable)
  • 0.5, TV shows (major, >=12,500,000 Google hits, after searching for MythBusters, Oprah, and CNN)
  • 0.4, minor cities (not major)
  • 0.2, TV shows (minor, i.e. not major)
An article's rating would be multiplied by its WikiProject's multiplier to get the final rating. These numbers could be tweaked a bit, though. Feel free to edit them without posting a new message. Importance in this case is defined by Google hits (i.e. roughly how many people know about the topic). There is now a page for rating the importance of projects at Wikipedia:Version 1.0 Editorial Team/Work via Wikiprojects/Importance, because we will need to rate them sometime.
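
As a rough sketch (using multiplier values from the list above; the project names and function names here are illustrative only), Option A would simply scale the existing quality-times-importance rating by the project multiplier:

  # Sketch of Option A (illustrative): final rating = quality x importance x project multiplier.
  PROJECT_MULTIPLIER = {
      "Continents": 2.0,
      "Major country": 1.8,
      "Chemistry": 0.9,      # a "major" area of science
      "Minor TV show": 0.2,
  }

  def option_a_rating(quality_score, importance_score, project):
      return quality_score * importance_score * PROJECT_MULTIPLIER[project]

  # Example: a B-class (4.5), Top-importance (7.5) chemistry article
  # rates 4.5 * 7.5 * 0.9 = 30.375, comfortably above a threshold of 20.
  print(option_a_rating(4.5, 7.5, "Chemistry"))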

Option B (Additive scale)

Each article is given a score, based on an additive scale. This is designed to allow for fine tuning, while keeping the numbers as integers rather than fractions. The aim is that for Version 0.7, any article with a score equal to or over 1000 would be included. The algorithm should be applicable to any article in Wikipedia once fully established. The system weights importance more than quality.

The proposal would be to run the bot first without the "correction factors" to see how well it works, then phase in correction factors if needed. We can also tweak the numbers as needed, based on the initial results.

The formula would be

Score = Q' + I' + P'

where Q' = corrected article quality, I' = corrected article importance, and P' = corrected project ranking.

The more detailed version of this is

Score = (Q + QC) + (I + IC) + (P + PC)

where Q, I and P represent the uncorrected quality, importance and project ranking, and QC, IC and PC represent the corrections to those ranks. The corrections are specific to each WikiProject.

To keep things simple for now, we will work with uncorrected values.

Score = Q + I + P
Quality of article

Each article is given a basic "raw quality score" (Q):

  • FA 500
  • GA/A 400
  • B 300
  • Start 150
  • Stub 50
Importance

This might be based upon:

  1. The importance of the article topic (I) is based on the ranking (R) as judged by a relevant WikiProject, in conjunction with the importance of that WikiProject's general subject area, called the project ranking (P').
  2. The importance of the article topic as judged by other parameters, such as the number of mainspace links in to that article (perhaps weighted for the importance of those links).

To keep things relatively simple at this point, only the first of these parameters will be considered; for a more detailed description of the second proposal see this outline.

Importance of article (I')

I = R + P + H + L + IW

where

  • R = ranking by a WikiProject of importance P. Top = 300, High = 200, Mid = 100. A "top" importance ranking by a high-level project should add around 400-450.
  • H = (no. of hits over a 28-day period, divided by 250) (max 200). There are around 1000 articles on en that would have > 25,000 hits, and around 200 with > 50,000.
  • L = no. of mainspace links into the article (max 400)
  • IW = (no. of interwiki links to the article multiplied by 4)

If no R has been given by a project, R and P cannot be included, and the formula used will be

I = 2 x (H + L + IW) rounded up to the nearest whole number.

We can test various weightings of these parameters to find a combination that works well.

WikiProject ranking (P')

We will initially test with a basic "raw project rank" (P), as follows (detailed descriptions to be determined):

  • Top-level in hierarchy (e.g., History) 300-350
  • High-level in hierarchy (e.g., History of Poland) 200-300
  • Mid-level in hierarchy (e.g., Kings and Queens of Poland) 100-200
  • Low-level in hierarchy (e.g., WikiProject:Mieszko I of Poland) 50-100

Later this could be corrected by using the importance of the "key project article" - e.g., chemistry for WikiProject Chemistry, coin for WikiProject Numismatics. It could also include a correction for the assessment policies of the project. It might (if needed) be judged manually. Details of project ranking might be refined here.
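
Putting the uncorrected pieces together, a minimal sketch of the Option B calculation might look like the following (the caps and multipliers are those defined above, the project rank P is folded into the importance total as in the article-importance formula, and all names are illustrative rather than the bot's actual code):

  # Sketch of the uncorrected Option B score (illustrative only).
  import math

  QUALITY_SCORE = {"FA": 500, "GA": 400, "A": 400, "B": 300, "Start": 150, "Stub": 50}
  RANKING_SCORE = {"Top": 300, "High": 200, "Mid": 100}

  def importance(ranking, project_rank, hits_28_days, links_in, interwikis):
      """I = R + P + H + L + IW, or 2 x (H + L + IW) if the article is unassessed."""
      h = min(hits_28_days / 250, 200)   # hits component, capped at 200
      l = min(links_in, 400)             # mainspace links in, capped at 400
      iw = interwikis * 4                # interwiki links, multiplied by 4
      if ranking is None or project_rank is None:
          return math.ceil(2 * (h + l + iw))   # no project assessment available
      return RANKING_SCORE[ranking] + project_rank + h + l + iw

  def option_b_score(quality, ranking, project_rank, hits_28_days, links_in, interwikis):
      """Score = Q + I, with the project rank P already included in I (per the formula above)."""
      return QUALITY_SCORE[quality] + importance(ranking, project_rank, hits_28_days, links_in, interwikis)

  # Example (the Harry Potter row in the table below): B class, Top importance,
  # project rank 300, 210,000 hits, 1225 links in, 74 interwikis:
  # 300 + (300 + 300 + 200 + 400 + 296) = 1796, well over the 1000 cut-off.
  print(option_b_score("B", "Top", 300, 210000, 1225, 74))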

Some example articles

I picked a few representative articles, mainly some "places" articles of varying importance, but including a few people too. Feel free to add in your own suggestions.

A selection of articles

Article                                   Quality (Q)  Ranking (R)  Project rank (P)  No. hits / 250       Mainspace links in  Interwiki (x 4)  Score (option A)  Score (option B)
Harry Potter                              B (300)      Top (300)    Novels (300)      210k hits (200)      1225 (400)          74 (296)         -                 1796
Uranus                                    FA (500)     NA           NA                Unknown (assume 50)  401 (400)           86 (344)         -                 1916
Sweden                                    B (300)      Top (300)    Sweden (300)      50k hits (200)       29010 (400)         139 (556)        -                 2056
Troy, New York (popn. 49k)                B (300)      NA           NA                Unknown (assume 10)  782 (400)           5 (20)           -                 1150
Tonbridge (popn. 32k)                     Start (150)  High (200)   Kent (200)        Unknown (assume 8)   338 (338)           4 (16)           -                 910
Walvis Bay (popn. 65k, strategic port)    Start (150)  NA           NA                Unknown (assume 10)  127 (127)           15 (60)          -                 514
Dnipropetrovsk (popn. 1.1m)               B (300)      High (200)   Ukraine (300)     Unknown (assume 8)   300 (300)           28 (112)         -                 1220
Earsdon (village)                         Stub (50)    NA           NA                Unknown (assume 1)   32 (32)             0 (0)            -                 83
Elias James Corey (Nobelist)              B (300)      High (200)   Chemistry (330)   Unknown (assume 8)   132 (132)           14 (56)          -                 1056
W. Somerset Maugham                       B (300)      NA           NA                Unknown (assume 15)  196 (196)           30 (120)         -                 962
Angela Merkel                             B (300)      NA           NA                Unknown (assume 80)  445 (400)           70 (280)         -                 1820

Everything looks reasonable to me, except that I'd have expected Maugham to do better - this is a combination of the NA for project ranking and the fact that people tend to score much less well on links-in than places do (compare Merkel with Troy above - which is more important?).

Possible problems

Importance

Importance has proved to be a thorny issue, with many projects electing to avoid the issue altogether by only tagging for quality. In the 1.0 project we have seen this first hand, as editors take offence when told that their favourite FA is not important enough to be included. This problem is likely to be even more serious when the relative importance of projects is being debated. This issue cannot be ducked, however; if we want to produce a broad selection of Wikipedia yet be selective, importance is probably the main criterion used for making that selection.

The ideal way to assess importance is (as with quality) to let the experts (the WikiProjects) do this themselves. This simplifies the problem, but each WikiProject then needs to be ranked for its importance. This is likely to be about as popular as trying to put down hard figures for the Value of life. As such, we must try to find fairly objective ways to do this - the correction factor helps with this, but better would be to find some external ranking schemes. Another problem is that most articles on Wikipedia at present don't have any assessment for importance - though this might begin to change if people see that it helps get their articles included in Version 1.0.

Another way to approach the problem is to try to rank individual articles ourselves using objective criteria, possibly using one of the two methods described here, summarised as:

  • Just a simple number - e.g., "346 articles link to this article".
  • A more complex algorithm that would factor in the importance of those 346 articles, perhaps via iteration (a sketch of the idea follows below). I imagine a first run that ignores this factor, with later runs phasing it in gradually. You would have the bot read a table called (say) ImportanceOld that contains the full listing of importance numbers generated on the previous run, and use that in generating ImportanceNew; at the beginning of the next run the bot would copy ImportanceNew over ImportanceOld.
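
As a very rough illustration of that iterative idea (a sketch only, using a toy link graph; none of this is the bot's actual code), each pass recomputes every article's importance from the stored importance of the articles that link to it:

  # Sketch of the iterative links-in weighting (illustrative only).
  def recompute_importance(links_in, importance_old):
      """One pass: ImportanceNew is built from ImportanceOld."""
      importance_new = {}
      for article, sources in links_in.items():
          if importance_old:   # later runs: weight each incoming link by its source's importance
              importance_new[article] = sum(importance_old.get(src, 1) for src in sources)
          else:                # first run: ignore the weighting and just count the links in
              importance_new[article] = len(sources)
      return importance_new

  # Toy link graph: article A is linked from B and C, and so on.
  links_in = {"A": ["B", "C"], "B": ["A"], "C": ["A", "B"]}
  importance = {}
  for _ in range(3):           # a few passes, phasing the weighting in gradually
      importance = recompute_importance(links_in, importance)
  print(importance)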

Some other machine-based ranking methods include:

  • Number of hits on each article, as is used for the Wikicharts.
  • A count of interwiki links, i.e., how many different language Wikipedias have that article, as was used in this discussion.

It may be possible to have the bot use all of these methods, in which case the importance rating should be even more reliable.