Wikipedia:Bot requests

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots: requests that are declined either because they are too complicated to program or because they lack consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference). If your task involves only a handful of articles, is straightforward, and/or only needs to be done once, consider making an AWB request at WP:AWBREQ. If it might be solved with a SQL query, try a request at WP:SQLREQ. URL changes may seem deceptively simple; however, a search-and-replace of text within a URL is not advised because of archive URLs and other issues. A number of bots specializing in URL work can be notified of requests at WP:URLREQ. To request a new template, see WP:RT. To request a new user script, see WP:SCRIPTREQ.

Note to bot operators: The {{BOTREQ}} template can be used to give common responses and to make it easier to keep track of each task's current status. If you complete a request, note it with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).

Please add your bot requests to the bottom of this page.

Bot for merging Russian locality permastubs

After a discussion here a couple of weeks ago, there was a rough local consensus that it might be beneficial to merge the majority of Russian rural locality articles (95% of which are two-line permastubs) into list articles (currently these lists are by first-level division, such as List of rural localities in Vologda Oblast). As you can see on that article, Fram is in the process of merging the pertinent information from the individual stubs into tables, but it's tedious work and there are on the order of 10,000 such articles.

I was wondering if it's possible/plausible to create a bot that could automate any part of that process? ♠PMC(talk) 04:04, 29 November 2019 (UTC)

If the stubs in question all have an identical structure, it could work. Otherwise we might get WP:CONTEXTBOT issues. That said, did that merger proposal get more widely advertised than just the user talk page you link there? Jo-Jo Eumerus (talk) 07:02, 29 November 2019 (UTC)
Not hugely, but to be honest, the total interested audience for these stubs is basically Nikolai Kurbatov, their creator, Ymblanter, as probably our most prolific Russia-focused editor, Fram, who came across them and proposed the merge, and myself, because I maintain the "List of rural localities in X" articles. ♠PMC(talk) 07:15, 29 November 2019 (UTC)
I would say if one can add the stubs with identical structure to the lists it would be already very useful. Everything else can be done manually (or not done at all, we have quite a few fully developed articles on the topic).--Ymblanter (talk) 07:45, 29 November 2019 (UTC)
Yes, it would be a great help if a bot could do this. The only harder parts are getting the population information from the article, and "deciding" whether to redirect the article or whether to keep it as a standalone article. Perhaps the bot can use some measure of the length of the article and do a cut-off based on this? It's a redirect, so any errors in this regard can be easily reverted by anyone. Fram (talk) 07:54, 29 November 2019 (UTC)
I wonder if the bot could scrape the info onto a sub-page or a draft page of some kind to be checked by humans before being mainspaced. That way we can make sure the info is getting properly, er, populated. ♠PMC(talk) 07:56, 29 November 2019 (UTC)

@Premeditated Chaos, Fram, Ymblanter, and Jo-Jo Eumerus: I took a pass at parsing out population data into User:AntiCompositeNumber/rustubs, trying to get data from the infobox, {{ru-census}}, and string matching. The character count is also included. (It's lower than the MW byte count because of UTF-8 character encoding.) --AntiCompositeNumber (talk) 01:38, 1 January 2020 (UTC)
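For anyone curious what that parsing involves, below is a minimal sketch of pulling a population figure out of stub wikitext. The {{ru-census}} parameter name and the prose patterns here are assumptions for illustration, not AntiCompositeNumber's actual logic:

```python
import re

# Ordered patterns: infobox parameter first, then the census template,
# then a prose fallback. Assumed shapes -- real articles vary more.
PATTERNS = [
    re.compile(r"\|\s*population\s*=\s*([\d][\d,\s]*)"),      # infobox |population=
    re.compile(r"\{\{ru-census\|p=([\d,]+)", re.IGNORECASE),  # assumed template param
    re.compile(r"population\s+(?:of|was|is)\s+([\d,]+)"),     # plain prose
]

def extract_population(wikitext):
    """Return the first population figure found, or None."""
    for pattern in PATTERNS:
        m = pattern.search(wikitext)
        if m:
            digits = re.sub(r"[^\d]", "", m.group(1))
            if digits:
                return int(digits)
    return None
```

A scraped value like this would land in the review table rather than being written to mainspace directly, per the suggestion above.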

@Premeditated Chaos, Fram, Ymblanter, and Jo-Jo Eumerus: AntiCompositeNumber, a ping needs to be on a new line to work. Jo-Jo Eumerus (talk) 10:07, 1 January 2020 (UTC)

Lists of new articles by subject

My kingdom for a bot that compiles new articles in a given subject area (e.g., added to a WikiProject's scope). @PresN currently runs a script that does this manually (see one of the "New Articles" threads at WT:VG) but would love to be able to do this for other projects, so that new editors get visibility/help and the project can see the fruits of its efforts. (Also discussed at PresN's talk page.) Special:Contributions/InceptionBot currently finds articles that might be within scope, but this proposal is instead a log of recent additions to a topic area (similar to how the 1.0 project compiles). It could be useful if delivered directly to a WikiProject/noticeboard page or, alternatively, updated on a single page and transcluded à la WP:Article alerts. czar 20:07, 15 December 2019 (UTC)

@Czar: This seems like a fairly simple task, but I want to make sure I have all the details right: For every wikiproject that opts-in, each week, generate a list of articles that had that wikiproject's tag added to their talk page within that week. Information about the article should be included, including importance and quality rating and author. Should non-articles (cats, templates, files, etc) be considered as well, or just articles? What about drafts? Are newly-created redirects important? Do you want articles that were removed from the WikiProject, deleted or redirected too (this would make it more complex)? --AntiCompositeNumber (talk) 20:04, 3 February 2020 (UTC)
@AntiCompositeNumber, yes, that's right! Can detect on the addition of the template or the addition to the category associated with the WikiProject (i.e., ArticleAlerts uses a combination of the banner and the talk category). I'd recommend including the quality rating but excluding the importance, à la {{article status}}. I'd recommend cutting scope to only include articles to keep the v1 reasonable. (Let someone request the extras if they have a valid case, but ArticleAlerts currently lists relevant deletions and AfC drafts.) In WT:VG#New Articles (January 27 to February 2), as an example, I personally don't find the category reports useful. The goal of this bot, to my eyes, is to make WikiProject talk pages closer to topical noticeboards, so editors interested in a topic receive a digest of new article creations to pitch in either to contribute to the article or simply to welcome new/isolated editors. So while I recommend against listing files, cats, templates, drafts, importance, deletions, or redirects, if there is any stretch goal, I'd particularly recommend incorporating InceptionBot's possibly related articles, in case there are new articles from the last week that may be eligible for the project but just haven't been tagged. But, yes, core function is just new, on-topic articles. czar 03:07, 8 February 2020 (UTC)
Czar, how does this (for WP:VG) look? I'll add some explanatory text to it before calling it ready, but wanted to get your thoughts on what the output looks like now. I haven't written the bot part of the bot yet, just the category analysis. The report's only based on the category and doesn't particularly care about the template since that data is much more accessible. Also, do you know of wikiprojects that would be interested in the reports? --AntiCompositeNumber (talk) 22:34, 8 February 2020 (UTC)
@AntiCompositeNumber, love it. Looks great! I'd start with WT:VG just for testing and can advertise/expand it to other projects. (FYI @PresN, curious if this test is missing any articles you'd normally catch) czar 00:31, 9 February 2020 (UTC)
@AntiCompositeNumber and Czar: Ah, the machines are coming for my (machine's) job... Yeah, there are some differences between this bot's and my script's output for the week:
  • Source 2 on Feb 5 is missing - my script lists this because it was previously a redirect (for 2+ years) and was converted to a 'real' article on that day
  • Jablinski Games on Feb 7 is missing - was created as a redirect on Jan 30 with no talk page tag (so, not in the project), then converted to a 'real' article on Feb 7 and a talk page tag added.
  • You have Animal Crossing Plaza on the 2nd when it was created/tagged on the 1st, but I think that might just be a time-zone issue
  • You list Candy Crush Saga as new on Feb 1, when it's years old; this appears to be because of a crazy revert war with a vandal on the talk page on Jan 31.
So, from this limited sample set, it appears the main miss is considering redirect->!redirect as a 'creation', and not discounting the 'creation' of an existing page. That said, I fully expect there to be weirdness around page moves and double-page moves as well, but those are smaller corner cases. The other major difference is that my script would have listed 17 new categories as well (in addition to listing new article deletions and redirections/moves to draft space (aka soft deletions) this week, and (none this week) new templates/template deletions). --PresN 05:36, 9 February 2020 (UTC)
@PresN and Czar: Jablinski Games and Animal Crossing Plaza are both just time zone issues. The bot considers the last 7 full days in UTC, so Animal Crossing Plaza was tagged at 01:05 and Jablinski shows up today.
The other two are because of the data source. My tool only queries the categorylinks database table for recent additions of the category, so it doesn't pick up redirect -> article conversions. WP 1.0 Bot gets around this by logging article metadata into its own database, but that data isn't super accessible outside of parsing the on-wiki logs (afaict). The categorylinks table only cares about the page id, not the page title, so moves don't affect it. So while there is data for new categorization of drafts, I won't see articles that were previously tagged and were moved to mainspace. There is, of course, data for the tagging of drafts, files, categories, etc.; I'm just ignoring it. Listing articles currently tagged for AfD or PROD wouldn't be too difficult; it's just a category intersection. Code if you're curious --AntiCompositeNumber (talk) 16:13, 9 February 2020 (UTC)
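For reference, the "last 7 full days in UTC" window and the categorylinks lookup described above could be sketched like this. The cl_from/cl_to/cl_timestamp columns are from the real MediaWiki schema; the connection handling and parameter style are assumed (Toolforge wiki replica, DB-API placeholders):

```python
from datetime import datetime, timedelta, timezone

def utc_report_window(now, days=7):
    """Return (start, end) covering the last `days` full UTC days,
    ending at today's UTC midnight (so partial days are excluded)."""
    end = now.astimezone(timezone.utc).replace(hour=0, minute=0,
                                               second=0, microsecond=0)
    return end - timedelta(days=days), end

# Query sketch, to be run against a wiki replica with the window above:
QUERY = """
SELECT page_title, cl_timestamp
FROM categorylinks
JOIN page ON page_id = cl_from
WHERE cl_to = %(category)s
  AND cl_timestamp >= %(start)s
  AND cl_timestamp < %(end)s
ORDER BY cl_timestamp
"""
```

Anchoring the window at UTC midnight is what produces the time-zone effects noted above (Animal Crossing Plaza tagged at 01:05 lands in the next day's report).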
@AntiCompositeNumber: OK, so assuming I'm reading this right, there's not really a good way to get un-redirects; they'd show up when the redirect is first created (which isn't ideal, as most redirects never get undone, and most un-redirects are years later) but that's it. Same for draft->mainspace, but no issues with page moves. So your version would cover the majority of cases, but would miss those edge cases. That's probably fine for most wikiprojects, though - my non-data-based feeling is that it's the media projects that have the most "article created, redirected, and later re-created" occurrences, whereas projects that get less attention from eager fans don't get as many articles created prematurely.
Your code is definitely more readable than my spaghetti nonsense, though - for an example of what happens if you try to base this off of the WP1.0 bot output and then compound it by actually just parsing the html of Wikipedia:Version 1.0 Editorial Team/Video game articles by quality log directly without any sort of api access and then make it worse by parsing top to bottom aka reverse temporal order, here's the python function that does the logic of building the list of article objects that appear to have been created in the date range given:
Extended content
  def parse_lists(lists, headers, assessments, new_cats, dates, dates_needed):
    NULL_ASSESSMENT = '----'
    max_lists = dates_needed * 4
    extra_headers = get_extra_headers(headers) # Note "Renamed" headers

    # Initial assessment
    for index, lst in enumerate(lists):
      if index <= max_lists:
        for item in lst.find_all('li'):
          contents = ' '.join(str(c) for c in item.contents)
          offset = count_less_than(extra_headers, index) - 1
          date = dates[int(max((index-(1 + offset)), 0)/3)] # TODO: handle 3+ sections
          assess_type = assessment_type(contents)
          # Assessment
          if assess_type == ASSESSMENT:
            namespaced_title = get_title(item, ASSESSMENT)
            title = clean_title(namespaced_title)
            old_klass = NULL_ASSESSMENT
            new_klass = get_newly_assessed_class(item, namespaced_title)
            if (not is_file(namespaced_title)
            and not is_redirect_class(new_klass)
            and not (title in assessments and was_later_deleted(assessments[title]))): # ignore files, redirects, and mayflies
              if is_category(namespaced_title):
                init_cat_if_not_present(new_cats, namespaced_title)
                init_if_not_present(assessments, title)
                assessments[title]['creation_class'] = new_klass
                assessments[title]['creation_date'] = date

          if assess_type == REASSESSMENT:
            namespaced_title = get_title(item, REASSESSMENT)
            title = clean_title(namespaced_title)
            old_klass = get_reassessment_class(item, 'OLD')
            new_klass = get_reassessment_class(item, 'NEW')
            if not is_file(namespaced_title):
              init_if_not_present(assessments, title)
              if is_redirect_class(new_klass): # tag redirect updates as removals, unless later recreated
                if not (is_draft_class(old_klass) and 'creation_class' in assessments[title]): # Ignore if this is a draft -> mainspace move in 2 lines
                  assessments[title]['was_removed'] = 'yes'
              elif is_redirect_class(old_klass): # treat redirect -> non-redirect as a creation
                assessments[title]['creation_class'] = old_klass
                assessments[title]['updated_class'] = new_klass
                assessments[title]['creation_date'] = date
              else: # only add the latest change, and only if there's no newer deletion
                if 'updated_class' not in assessments[title] and not was_later_deleted(assessments[title]):
                  assessments[title]['updated_class'] = new_klass

          # Rename
          if assess_type == RENAME:
            namespaced_old_title = get_rename_title(item, 'OLD')
            namespaced_new_title = get_rename_title(item, 'NEW')
            if not is_file(namespaced_new_title) and not is_category(namespaced_new_title):
              new_title = clean_title(namespaced_new_title)
              if is_draft(namespaced_old_title) and not is_draft(namespaced_new_title):
                init_if_not_present(assessments, new_title)
                if not was_later_updated(assessments[new_title]) and not was_later_deleted(assessments[new_title]):
                  assessments[new_title]['creation_class'] = DRAFT_CLASS
                  assessments[new_title]['updated_class'] = "Unassessed"
                  assessments[new_title]['creation_date'] = date
              if is_draft(namespaced_new_title) and not is_draft(namespaced_old_title):
                init_if_not_present(assessments, new_title)
                if not was_later_updated(assessments[new_title]) and not was_later_deleted(assessments[new_title]):
                  assessments[new_title]['creation_class'] = "Unassessed"
                  assessments[new_title]['updated_class'] = DRAFT_CLASS
                  assessments[new_title]['creation_date'] = date

          # Removal
          if assess_type == REMOVAL:
            namespaced_title = get_title(item, REMOVAL)
            # Articles
            if not is_file(namespaced_title):
              title = clean_title(namespaced_title)
              if title not in assessments: # don't tag if there's a newer re-creation
                assessments[title] = { 'was_removed': 'yes' }
                if is_category(namespaced_title):
                  assessments[title]['creation_class'] = CATEGORY_CLASS
                if is_draft(namespaced_title):
                  assessments[title]['creation_class'] = DRAFT_CLASS
            # Categories
            if is_category(namespaced_title) and namespaced_title not in new_cats:
              new_cats[namespaced_title] = 'was_removed'

    return {'assessments': assessments, 'new_cats': new_cats}
--PresN 04:25, 10 February 2020 (UTC)
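The redirect -> article detection discussed above (the case both tools miss or handle awkwardly) can be sketched as a scan over consecutive revisions. As noted earlier in the thread, fetching every revision is expensive, so a real bot would limit this to recent history via the API; this is an illustration of the comparison only:

```python
import re

# MediaWiki redirects start with "#REDIRECT [[Target]]" (case-insensitive).
REDIRECT_RE = re.compile(r"^\s*#\s*redirect\s*\[\[", re.IGNORECASE)

def is_redirect(wikitext):
    return bool(REDIRECT_RE.match(wikitext))

def unredirect_events(revisions):
    """Given (timestamp, wikitext) pairs oldest-first, return the timestamps
    at which a redirect was converted into a 'real' article."""
    events = []
    prev = None
    for ts, text in revisions:
        if prev is not None and is_redirect(prev) and not is_redirect(text):
            events.append(ts)
        prev = text
    return events
```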

Remove sister project templates with no target

Among moth articles, and I suspect many others, there are sometimes template links to Wikispecies and Wikimedia Commons, but there's nothing at the target location in the sister project. I'd love to see a bot that could go through, check these, and remove the deceptive templates.

Even better if it could remove links to Commons if the only file in Commons is already in use in the article.

Another refinement would be to change from a general Commons link to a Commons category link when that exists.


Thank you. SchreiberBike | ⌨  03:59, 16 December 2019 (UTC)
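A sketch of the decision step for the request above, assuming the sets of titles that actually exist on each sister project have already been fetched from that project's API (the template-to-project mapping here is illustrative):

```python
def templates_to_remove(article_templates, existing_targets):
    """article_templates: {template_name: target_title} found on the page.
    existing_targets: {project: set of titles that exist on that project}.
    Returns template names whose target is missing, i.e. removal candidates.
    In practice the existence data would come from each project's API."""
    project_of = {
        "Wikispecies": "wikispecies",
        "Commons": "commons",
        "Commons category": "commons",
    }
    remove = []
    for name, target in article_templates.items():
        project = project_of.get(name)
        if project and target not in existing_targets.get(project, set()):
            remove.append(name)
    return remove
```

The further refinements requested (dropping a Commons link whose only file is already used in the article, or switching {{Commons}} to {{Commons category}}) would need extra API lookups but could hang off the same structure.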

SchreiberBike, forgot to say here but   BRFA filed a few days ago. ‑‑Trialpears (talk) 10:17, 9 January 2020 (UTC)
@Trialpears: I'd seen that and had no objection to it. Does it relate to the request above though? Thanks, SchreiberBike | ⌨  21:53, 9 January 2020 (UTC)
Oh, sorry - it's at Wikipedia:Bots/Requests for approval/PearBOT 7. ‑‑Trialpears (talk) 22:02, 9 January 2020 (UTC)

Please remove residence from Infobox person

Hi there, re: this permalinked discussion, could you stellar bot handlers please remove the |residence= parameter and its contents from articles using {{Infobox person}}? Per some of the discussions, Category:Infobox person using residence might list most of the pages using this template. And RexxS said:

"Using an insource search (hastemplate:"infobox person" insource:/residence *= *[A-Za-z\[]/) shows 36,844 results, but it might have missed a few (like {{plainlist}}); there are at least 766 uses of the parameter with a blank value."

I don't know if this helps. This is not my exact area of expertise. Thanks! Cyphoidbomb (talk) 05:31, 27 December 2019 (UTC)
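For illustration, a naive single-line version of the removal is sketched below. As RexxS's note about {{plainlist}} implies, a production bot would use a real wikitext parser (e.g. mwparserfromhell) rather than a regex, to handle multi-line parameter values safely:

```python
import re

# Naive sketch: strips "|residence = ..." up to the next pipe, brace, or
# newline. Breaks on multi-line values like {{plainlist}} -- hence the
# recommendation to use a proper parser in a real bot run.
RESIDENCE_RE = re.compile(r"\n?\s*\|\s*residence\s*=[^|}\n]*")

def strip_residence(wikitext):
    return RESIDENCE_RE.sub("", wikitext)
```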

I explained my objection to this proposal in the Removal section immediately below the closed discussion in the permalink above. It is not a good idea to edit 38,000 articles if the only objective is a cosmetic update. Further, there is no rush and the holiday season is not a good time to make a fait accompli of an edit to the template performed on Christmas Day. Johnuniq (talk) 06:40, 27 December 2019 (UTC)
Cyphoidbomb, I already have a bot task that can handle this, but it sounds like there is some contention about the actual removal, so ping me somewhere if and when the decision about how to deprecate the param is finished. Primefac (talk) 16:24, 27 December 2019 (UTC)
@Primefac and Johnuniq: OK, I'm certainly in no hurry. Cyphoidbomb (talk) 16:34, 27 December 2019 (UTC)
I just want to add support for this. I'm quite tired of seeing the residence error when I do quick previews before saving edited bios. МандичкаYO 😜 11:02, 8 February 2020 (UTC)

A heads up for AfD closers re: PROD eligibility when approaching NOQUORUM

Revisiting this March discussion for a new owner

When an AfD ends with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as an expired PROD ("soft delete"). As a courtesy/aid for the closer, it would be really helpful for a bot to report on the article's PROD eligibility ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). Cribbing from the last discussion, it could look like this:

  • When an AfD listing begins its seventh/final day (almost full term) with no discussion, a bot posts a comment on whether the article is eligible for soft deletion by checking the PROD criteria that the page:
    • isn't already redirected (use API)
    • hasn't been PROD'd before (check edit summaries and/or diffs; or edit filter if ever created)
    • has never been undeleted (check logs)
    • hasn't been in a deletion discussion before (check page title and talk page banners)
    • nice-to-have: list prior titles for reference, if the article has been moved or nominated under another name before
  • To check whether anyone has participated in the AfD, @Izno suggested borrowing the AfD counter script's detection

This would greatly speed up the processing of these nominations. Eventually would be great to have this done automatically, but even a user script would be helpful for now. czar 19:26, 29 December 2019 (UTC)
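The eligibility logic itself is just a conjunction of the checks listed above; the hard part is gathering the facts from the API and logs. A sketch, with illustrative field names (not an actual API):

```python
def soft_delete_eligibility(page):
    """page: dict of facts a bot would gather from the API and logs.
    Returns (eligible, reason). Field names are illustrative only."""
    checks = [
        ("is_redirect", "the page is a redirect"),
        ("was_prodded", "the page was previously proposed for deletion"),
        ("was_undeleted", "the page was previously undeleted"),
        ("had_deletion_discussion", "the page was previously discussed at XfD"),
    ]
    for key, reason in checks:
        if page.get(key):
            return False, reason
    return True, "no disqualifying history found"
```

Per the request, on a positive hit the bot would summarize only the strongest disqualifying reason in its AfD comment.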

@Czar: Is it good enough if a bot just reports these attributes for AfDs that expire with no discussion?
  1. Whether the page is redirected or not
  2. All previous WP:AfD and WP:AFU discussions, with their results.
--Kanashimi (talk) 05:52, 24 January 2020 (UTC)
I was thinking that a more generally scoped bot, which tells the AfD whether there were previous redirects, (un)deletions and deletion discussions, might be useful to inform the discussion of past changes. Jo-Jo Eumerus (talk) 09:23, 24 January 2020 (UTC)
@Kanashimi, that would cover 75% of the criteria a closer needs to know (and would at least be a start!) so would need to remind the closer to check the page history for prior PRODs as well. Otherwise, yes, that's exactly what I think would work here. Essentially, if it detects positive for any of those criteria, would be nice to summarize that it's ineligible for soft deletion because of x criterion. czar 13:03, 25 January 2020 (UTC)
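Checking the page history for prior PRODs, the one criterion left to the closer above, could be approximated by scanning edit summaries. This is a heuristic only (the patterns are assumptions about how PROD tagging is usually summarized, not a guaranteed signal):

```python
import re

# PROD tagging/removal usually leaves summaries like "Proposing article
# for deletion per WP:PROD" or mentions of {{proposed deletion}}.
PROD_SUMMARY_RE = re.compile(
    r"\bprod\b|proposed deletion|\{\{proposed deletion", re.IGNORECASE
)

def summaries_suggest_prior_prod(edit_summaries):
    """True if any edit summary looks like a PROD tag or removal."""
    return any(PROD_SUMMARY_RE.search(s or "") for s in edit_summaries)
```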

@Czar: For Wikipedia:Articles for deletion/Log/2020 February 3, I extract information like this: report. Is the information enough? --Kanashimi (talk) 10:06, 4 February 2020 (UTC)

@Kanashimi, it's a start! I was thinking of formatting along the lines of:
Extended content

posting something like this to the AfD discussion when no one else has !voted

In this case, wouldn't need to list the entire history but just say at a glance (or the strongest reason) why the article isn't eligible for soft deletion. Eh? czar 02:46, 8 February 2020 (UTC)
@Czar: How about this report? --Kanashimi (talk) 11:13, 8 February 2020 (UTC)
@Kanashimi, this is great! If it ran at the beginning of the 7th day of listing for Articles for deletion/Vikram Shankar and Articles for deletion/Anokhi, which for lack of participation would appear eligible for soft deletion, the closer would know that it's not actually the case. I'm not sure that the list of deletions/undeletions is needed for this case but open to other opinions. At the very least, pictorial image use is historically discouraged in AfD discussions. A few fixes:
  • The Wikipedia:Articles for deletion/List of Greta Thunberg speeches would need a tweak. What would make it ineligible is if the existing article (under discussion) was redirected elsewhere, leaving its history in the same location, meaning that someone redirected it in lieu of deletion. In this case, the article (and its page history) was moved to a new location, so this case should check both whether the title redirects AND whether the page history remains. Page moves would still be eligible for soft deletion/expired PROD by my read.
  • Tok Nimol is presented as undeleted but its log doesn't show a restoration?
  • Reem Al Marzouqi: The rationale for this should not be "previously deleted" but specifically "previously discussed at AfD", which supersedes whether or not it was deleted. (Deletion itself doesn't make the article ineligible—e.g., Hasan Piker and Heed were each only deleted through CSD—but specific signs that someone has previously considered the article ineligible for PROD.) Same applies to the remaining "2nd+ nomination"s listed.
  • And of course there's the caveat that the script wouldn't have actually run on most of these (all but four?) since the rest had at least some participation.
  • Will this script catch whether the article was previously PROD'd? If not, would want to add something to the text to remind the closer to check. The rationale for Heed (cat)'s ineligibility, for example, is that the article was previously PROD'd and contested (03:32, 1 June 2009), not that it was previously deleted via CSD. Same for Paatti, which actually shows the PROD in the log (most do not, to my understanding).
  • Ayalaan actually appears eligible for soft deletion. Its deletion was through CSD and it appears to have not been previously PROD'd. As long as the script confirmed that the article was not tagged for PROD before, this would be a great case of where the script could say that the article appears eligible.
Thanks for your work on this! It's going to be really helpful. czar 14:18, 8 February 2020 (UTC)
  • @Czar: I fixed some bugs and generated 1, 2, 3, 4, 5.
Ayalaan: Do you mean that CSDs should not be taken into account at all? If so, it is easy to fix.
Heed (cat): It seems it is not easy to parse comments, and it is expensive to fetch all revisions, so I have not decided yet.
Please check the results and tell me if there are still some things to fix. --Kanashimi (talk) 08:01, 9 February 2020 (UTC)
Yep, CSD/BLPPROD doesn't affect PROD/soft deletion eligibility (WP:PROD#cite_ref-1), so don't need to track that.
If the v1 won't parse edit summaries or diffs, I've modified the collapsed section above with some suggested boilerplate. Of course, would be great if it could, but this would do for now.
It looks like all of those results would not run because the bot detects participation for each? The case of Madidai Ka Mandir should let the bot run since the only participation is from a delsort script. Henri Ben Ezra should let the bot run too (to post that it's ineligible based on having a prior AfD). So would need to tighten participation detection. If the bot/script is detecting one or fewer delete/redirect participations, the bot should run (e.g., Ayalaan and Anokhi). Probably also want the bot to only run when the nom hasn't been relisted, or else it could potentially run twice on the same nomination. 
As for the logs, related discussions, and previous discussions, I think it might be overkill to post these. It could be potentially interesting as its own bot task, if there is consensus for it, but I think simply showing "soft deletion" eligibility is sufficient for this task. I'll ask Wikipedia talk:AfD for input. czar 16:42, 9 February 2020 (UTC)
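Participation detection along the lines discussed could key on bolded recommendations, the same signal the AfD vote-counter script uses; the nominator's rationale is unbolded, so it isn't counted, and delsort notices contain no bolded keyword. A sketch (the keyword list is an assumption):

```python
import re

# Bolded !votes like "'''Delete'''" or "'''Weak keep'''".
VOTE_RE = re.compile(
    r"'''\s*(?:strong\s+|weak\s+)?"
    r"(keep|delete|merge|redirect|draftify|userfy|transwiki)\b",
    re.IGNORECASE,
)

def count_votes(afd_wikitext):
    return len(VOTE_RE.findall(afd_wikitext))

def should_post_notice(afd_wikitext, was_relisted):
    # Per the discussion: run on one or fewer !votes, and skip relisted
    # nominations so the bot never posts twice on the same AfD.
    return not was_relisted and count_votes(afd_wikitext) <= 1
```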
The latest version: 1, 2, 3, 4, 5.
Please check the results and tell me if there are still some things to fix. --Kanashimi (talk) 00:57, 10 February 2020 (UTC)
I didn't check all logs, but from the ones I did, the log analysis looks good! It doesn't look like the tests were doing "no quorum" detection, so as long as the script knows when it should run on a discussion (one or zero !votes in the last 24 hours of the AfD's seven-day listing) then sounds good to proceed to the next step/trial. Thanks! czar 01:49, 17 February 2020 (UTC)
@Czar: If you think it is good enough, I will file a bot request. I will generate some reports in the sandbox over the next few days. --Kanashimi (talk) 23:14, 17 February 2020 (UTC)

Fixing Vital Articles bot

The bot (approved here) for updating vital articles counts, icons, and corresponding talk pages has been inoperable for a while, per this discussion. Could one of you please look into fixing it? Thanks! Sdkb (talk) 19:03, 3 January 2020 (UTC)

  BRFA filed --Kanashimi (talk) 07:06, 21 January 2020 (UTC)
@Sdkb and Spaced about: I will not count articles listed at a level other than the current page, to prevent double counting. Please tell me if it would be better to count those articles as well.   Thank you --Kanashimi (talk) 11:40, 23 January 2020 (UTC)
This page Wikipedia:Vital articles/Level/2 is off by 10 now. It should be at 100. --Spaced about (talk) 11:49, 23 January 2020 (UTC)
@Spaced about: There are 10 level 1 articles in the list, so the bot does not count them. Would it be better to count them as well? --Kanashimi (talk) 11:56, 23 January 2020 (UTC)
@Kanashimi: Yes, they should be included at any level, so, including level 1 at level 2, and level 1+2 at level 3, and so on, would be helpful. --Spaced about (talk) 12:02, 23 January 2020 (UTC)
@Spaced about: OK. I will fix the code and re-execute the bot. --Kanashimi (talk) 12:11, 23 January 2020 (UTC)
  Done --Kanashimi (talk) 13:29, 23 January 2020 (UTC)
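The cumulative counting agreed on above (level 1 articles count toward level 2, levels 1+2 toward level 3, and so on) amounts to a running union over levels; a minimal sketch:

```python
def cumulative_counts(articles_by_level):
    """articles_by_level: {level: set of titles listed at exactly that level}.
    Returns {level: count including all lower levels}, per the convention
    agreed in the discussion above."""
    counts, seen = {}, set()
    for level in sorted(articles_by_level):
        seen |= articles_by_level[level]
        counts[level] = len(seen)
    return counts
```

Using a set union (rather than summing list lengths) also guards against an article accidentally listed at two levels being counted twice.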
VA5: Sports, games and recreation is completely wrong right now. Also, I think it'd be better to always list an article's current quality status icon first (instead of icons for peer reviews etc.), because that way they're more easily compared when skimming, and they're what the vital article project is most concerned about. The icons that haven't been traditionally listed (peer review, in the news) might even be unnecessary.--LaukkuTheGreit (TalkContribs) 15:25, 23 January 2020 (UTC)
  Fixed If there is a quality status, it is now shown first. But it seems there are some articles without a quality status, e.g., Talk:Virtual camera system in Wikipedia:Vital articles/Level/5/Everyday life/Sports, games and recreation. --Kanashimi (talk) 16:00, 23 January 2020 (UTC)
@Kanashimi: I am rather confused; is there an active bot that regularly updates the count of the level 5 vital articles? I saw that someone counted all of them and I'm very happy, but it's not clear how that was done. The idea was to have a bot that counts the number of articles, so we'll know when we are done with the 50,000 goal. Is there such a bot? Fr.dror (talk) 15:16, 2 February 2020 (UTC)
@Fr.dror: The task was just approved. I will update the counts daily. --Kanashimi (talk) 22:52, 2 February 2020 (UTC)
@Kanashimi: many thanks to you and everyone else working on this. Is there functionality here to add the VA tag to the talk pages of the articles listed? It looks like that used to be done by Feminist's SSTbot, but that bot now says it's been deactivated. I'm a bit confused overall why so many of the bots related to VA have stopped functioning, given that there don't seem to have been any major technical changes that might have broken them. The VA project is ongoing, and the bots that help with it are thus needed on an ongoing basis as well. Sdkb (talk) 06:56, 3 February 2020 (UTC)
@Sdkb: I can also maintain the template {{Vital article}} on talk pages, but it requires writing some code. Is it OK if we add {{Vital article}} with class=Start to the talk pages of those articles that lack {{Vital article}}? --Kanashimi (talk) 07:27, 3 February 2020 (UTC)
Since it'll be reviving SSTBot task 4, hopefully it won't require writing too much new code. For articles that haven't been assessed yet and are without the VA tag, it'd probably be best to leave them unassessed rather than labeling them all start-class; it's possible to add the tag without marking the class, right? Sdkb (talk) 07:46, 3 February 2020 (UTC)
SSTbot 4 is quite stupid as it involves compiling article lists manually. I don't really know how to code beyond an elementary level, and I kind of just got tired of "operating" it using AWB. feminist (talk) 08:28, 3 February 2020 (UTC)
@Feminist: It is OK. The bot may read the list now. But it still needs some code to be written... --Kanashimi (talk) 08:34, 3 February 2020 (UTC)
  BRFA filed --Kanashimi (talk) 12:13, 5 February 2020 (UTC)

Consolidating multiple WikiProject templates into taskforces of template:WikiProject Molecular Biology

Related post: Wikipedia:Bot_requests/Archive_79

I'm in need of help replacing all instances of a set of WikiProject templates as taskforces of the one unified template: {{WikiProject Molecular Biology}}. Unfortunately a simple transclusion of the new template wrapped in the old templates isn't enough, since some pages have multiple WikiProject templates, so will need to be marked with multiple taskforces. It's therefore similar to when Neurology was merged into WP:MED.

Example manual edit:


{{WikiProject Molecular and Cellular Biology|class=GA|importance=high|peer-review=yes}}
{{WikiProject Computational Biology|importance=mid|class=GA}}


{{WikiProject Molecular Biology|class=GA|importance=high|peer-review=yes
  |MCB=yes     |MCB-imp=high
  |COMPBIO=yes |COMPBIO-imp=mid
}}

Broadly, I think the necessary bot steps would be:

  1. If {{WikiProject Molecular and Cell Biology}} OR {{WikiProject Genetics}} OR {{WikiProject Computational Biology}} OR {{WikiProject Biophysics}} OR {{WikiProject Gene Wiki}} OR {{WikiProject Cell Signaling}}
    Then add {{WikiProject Molecular Biology}}
  2. For {{WikiProject Molecular and Cell Biology}} AND {{WikiProject Genetics}} AND {{WikiProject Computational Biology}} AND {{WikiProject Biophysics}} AND {{WikiProject Gene Wiki}}
    Remove {{WikiProject MCB/COMPBIO/Genetics/Biophysics/Gene Wiki|importance=X|quality=y}}
    Add |MCB/COMPBIO/genetics/biophysics/Gene Wiki=yes + |MCB-imp/COMPBIO-imp/genetics-imp/biophysics-imp/GW-imp=X (note: GW → Gene Wiki)
  3. For whichever WikiProject template has the highest |importance= and |quality=, add that as the overall |importance= and |quality= to {{WikiProject Molecular Biology}}
  4. Additionally add to articles in the following categories:

Thank you in advance! T.Shafee(Evo&Evo)talk 07:09, 12 January 2020 (UTC) (refactored/edited by Seppi333 (Insert ) 05:48, 18 January 2020 (UTC))
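A rough sketch of what steps 1–2 above might look like in Python. The banner names, flag parameters (|MCB=, |MCB-imp=, etc.) are taken from the example edit above; the function names and the simple single-banner regex are illustrative assumptions, not a description of any approved bot.

```python
import re

# Hypothetical map from old banner name to the task-force flag and
# importance parameter of the unified banner; verify against the live
# {{WPMOLBIO}} template before any real run.
BANNER_MAP = {
    "WikiProject Molecular and Cell Biology": ("MCB", "MCB-imp"),
    "WikiProject Genetics": ("genetics", "genetics-imp"),
    "WikiProject Computational Biology": ("COMPBIO", "COMPBIO-imp"),
    "WikiProject Biophysics": ("biophysics", "biophysics-imp"),
    "WikiProject Gene Wiki": ("GW", "GW-imp"),
}

BANNER_RE = re.compile(
    r"\{\{\s*(" + "|".join(map(re.escape, BANNER_MAP)) + r")\s*"
    r"((?:\|[^{}]*)?)\}\}\n?"
)

def merge_banners(wikitext):
    """Strip each old banner, collecting task-force flags, then prepend
    one unified banner (simple, brace-free banners only)."""
    flags = []

    def strip_banner(match):
        name, params = match.group(1), match.group(2) or ""
        imp = re.search(r"\|\s*importance\s*=\s*([^|}]+)", params)
        flag, flag_imp = BANNER_MAP[name]
        flags.append(f"|{flag}=yes")
        if imp:
            flags.append(f"|{flag_imp}={imp.group(1).strip()}")
        return ""  # remove the old banner entirely

    wikitext = BANNER_RE.sub(strip_banner, wikitext)
    if flags:
        wikitext = ("{{WikiProject Molecular Biology"
                    + "".join(flags) + "}}\n" + wikitext)
    return wikitext
```

A real bot would also need step 3 (choosing the overall class/importance) and would have to cope with nested templates inside banner parameters, which this sketch deliberately ignores.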

We posted threads about this on different pages at the same time, so I figured I'd follow-up here as well. I can implement this myself using template wrappers and/or a new bot (re: Wikipedia talk:WikiProject Molecular Biology#Template:WikiProject Molecular Biology, as described in the sub-section); I just need a little more feedback from WT:MOLBIO. That said, you've more or less answered my question on how to do it here. Seppi333 (Insert ) 01:56, 16 January 2020 (UTC)
@Primefac: You mentioned in the earlier thread on this topic that one can use Anomiebot to merge templates using {{Subst only|auto=yes}} template to merge one banner into another, but is there any support for merging multiple banners on a single page into 1? If not, are there any bots that have been approved to merge multiple project banners on talk pages (particularly where 2+ banners occur on a single page) into a single parent banner? Asking because I could likely modify the source code of a bot designed to merge the banners of another project's task forces for this purpose, especially if there's one written in python. Seppi333 (Insert ) 03:45, 16 January 2020 (UTC)
@Seppi333: You make a good point about what to put as overall WP:MOLBIO class and importance based on WP:MCB, WP:GEN etc. at WT:MOLBIO. I think the best option is to simply use the current taskforce importance (if something's high importance to the WP:GEN taskforce, chances are it's high importance to the WP:MOLBIO wikiproject). The edge case is when two taskforces currently indicate different importance levels (e.g. Talk:DNA_gyrase). In such cases it might be safest to use the median rounded up for the overall importance (high+low→mid, high+mid→high), but maybe that's over complicating things. T.Shafee(Evo&Evo)talk 04:45, 16 January 2020 (UTC)
Sure, I ran a bot like this last weekend. I could probably put in a BRFA today or tomorrow if I get time. Primefac (talk) 10:55, 16 January 2020 (UTC) I did just notice, though, that there are also sub-projects for each of the (now) sub-projects; are those tasks forces (such as genetic engineering or education) being handled by the replacement template as well? Primefac (talk) 10:59, 16 January 2020 (UTC)
@Primefac: No, don't think so. The primary reason the Gene Wiki sub-task force was added is that it has its own banner (w/ corresponding article categories: {{WikiProject Gene Wiki}} & Category:Gene Wiki articles) which is currently present on ~1800 pages. I think we're probably just going to go with the current task force listing in the {{WPMOLBIO}} template.
@Evolution and evolvability: I added the signaling parameter for categorizing cell signaling articles; Category:Metabolism is an article category and the metabolic pathways task force doesn't have its own category, so I couldn't add the metabolism one.
Addendum, re: The edge case is when two taskforces currently indicate different importance levels (e.g. Talk:DNA_gyrase). In such cases it might be safest to use the median rounded up for the overall importance (high+low→mid, high+mid→high), but maybe that's over complicating things. It wouldn't be that technical to encode that. Programmatically, one just needs to ordinally encode low→1, mid→2, high→3, top→4 (NB: this method implicitly assumes that there's an equal "importance distance" in a mathematical/statistical sense between importance ratings, which might not necessarily be true - it depends on how people go about rating importance on average), then use round(median(list of ratings)) or round(average(list of ratings)), then remap whatever number it returns back to an importance rating. E.g., the average rating of task forces that rate an article as low, high, and top is (1+3+4)/3, which would be rounded to 3 → high importance. Seppi333 (Insert ) 02:56, 18 January 2020 (UTC)
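A minimal sketch of that encode-aggregate-decode idea. One wrinkle: Python's built-in round() rounds halves to even (round(2.5) == 2), so the "median rounded up" rule from the earlier comment is better implemented with math.ceil on the half-value medians:

```python
import math

RANKS = {"low": 1, "mid": 2, "high": 3, "top": 4}
NAMES = {v: k for k, v in RANKS.items()}

def overall_importance(ratings):
    """Rounded-up median of the task-force ratings, so that
    high+low -> mid and high+mid -> high, as proposed above."""
    values = sorted(RANKS[r] for r in ratings)
    n = len(values)
    if n % 2:
        median = values[n // 2]
    else:
        # even count: the x.5 medians are rounded up, since round()
        # would round half to even instead
        median = (values[n // 2 - 1] + values[n // 2]) / 2
    return NAMES[math.ceil(median)]
```

For the low/high/top example above, the median (3) agrees with the rounded average, giving high.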

@Evolution and evolvability: I refactored the request in Special:Diff/936327774/936342626 to reflect the changes to the template. You might want to look it over just to make sure nothing seems off. Seppi333 (Insert ) 05:48, 18 January 2020 (UTC)

@Seppi333: That looks correct to me! Great to see it coming together. I'll also go through the taskforce pages and relevant template documentation over the next few days to make sure the instructions for tagging new articles are up to date (example). T.Shafee(Evo&Evo)talk 23:11, 22 January 2020 (UTC)
@Primefac: Are you still interested in doing this? Either way, can you point me to the bot script you had in mind in the event I have a need for reprogramming it to run a similar bot in the future? Seppi333 (Insert ) 04:42, 21 February 2020 (UTC)
This seems to meet the criteria for Task 30, so I should be able to get to it this weekend. Primefac (talk) 11:59, 21 February 2020 (UTC)

Update parameter on template long since changed

The parameters were changed 9 July 2018 per this discussion: Template talk:WikiProject Christianity#Parameter Correction. church-of-the-nazarene was changed to holiness-movement as was the -importance parameter. However, per the discussion above, and as I've seen, it wasn't updated everywhere. Jerod Lycett (talk) 04:24, 21 January 2020 (UTC)

Jerodlycett, I only found 6 pages with the parameter set, and I have fixed those manually. Were you expecting there to be more? --AntiCompositeNumber (talk) 15:29, 21 February 2020 (UTC)
@AntiCompositeNumber: I went through a number on my own with AWB, but I couldn't pull more than 25k, and there were >50k transclusions, so I had no idea how many would remain. Jerod Lycett (talk) 16:17, 21 February 2020 (UTC)

Adding a new template

Hi, I want to add this template (Template:Ash'ari) to all the pages/articles that are listed/linked. Thanks in advance!--TheEagle107 (talk) 01:45, 22 January 2020 (UTC)

Page list
  1. 2016 international conference on Sunni Islam in Grozny
  2. A Guide to Conclusive Proofs for the Principles of Belief
  3. Abd al-Mu'min
  4. Abd al-Qahir al-Jurjani
  5. Abd al-Rahman al-Tha'alibi
  6. Abd el-Krim
  7. Abdallah ibn Alawi al-Haddad
  8. Abdel-Halim Mahmoud
  9. Abdullah al-Harari
  10. Abu Bakr al-Turtushi
  11. Abu Bakr ibn al-Arabi
  12. Abu Hayyan al-Andalusi
  13. Abu Imran al-Fasi
  14. Abu Ishaq al-Isfarayini
  15. Abu Nu'aym al-Isfahani
  16. Abu al-Walid al-Baji
  17. Ahmad Baba al-Timbukti
  18. Ahmad Zarruq
  19. Ahmad Zayni Dahlan
  20. Ahmad al-Dardir
  21. Ahmad al-Ghumari
  22. Ahmad al-Rifa'i
  23. Ahmad al-Tayyeb
  24. Ahmad al-Tijani
  25. Ahmad al-Wansharisi
  26. Ahmad ibn 'Ajiba
  27. Ahmed Mohammed al-Maqqari
  28. Al-Adil I
  29. Al-Ahbash
  30. Al-Akhdari
  31. Al-Ash'ari
  32. Al-Ashraf Musa, Emir of Damascus
  33. Al-Baghawi
  34. Al-Bahuti
  35. Al-Baqillani
  36. Al-Baydawi
  37. Al-Bayhaqi
  38. Al-Baz al-Ashhab
  39. Al-Farq bayn al-Firaq
  40. Al-Ghazali
  41. Al-Hakim al-Nishapuri
  42. Al-Hasan al-Yusi
  43. Al-Hattab
  44. Al-Juwayni
  45. Al-Kamil
  46. Al-Khatib al-Baghdadi
  47. Al-Khatib al-Shirbini
  48. Al-Laqani
  49. Al-Maqrizi
  50. Al-Maziri
  51. Al-Milal wa al-Nihal
  52. Al-Munawi
  53. Al-Nasir Muhammad
  54. Al-Nawawi
  55. Al-Qastallani
  56. Al-Qushayri
  57. Al-Raghib al-Isfahani
  58. Al-Safadi
  59. Al-Sakhawi
  60. Al-Sha'rani
  61. Al-Shahrastani
  62. Al-Shatibi
  63. Al-Suhayli
  64. Al-Suyuti
  65. Al-Tha'labi
  66. Al-Zarkashi
  67. Ali Gomaa
  68. Ali al-Jifri
  69. Almohad Caliphate
  70. Alp Arslan
  71. Ash'ari
  72. Ayyubid dynasty
  73. Emir Abdelkader al-Jazairi
  74. Fakhr al-Din al-Razi
  75. Hamza Yusuf
  76. Hasan al-Attar
  77. Ibn 'Aqil
  78. Ibn Abi Zayd al-Qayrawani
  79. Ibn Adjurrum
  80. Ibn Arafa
  81. Ibn Asakir
  82. Ibn Ashir
  83. Ibn Ata Allah
  84. Ibn Barrajan
  85. Ibn Daqiq al-'Id
  86. Ibn Furak
  87. Ibn Hajar al-Asqalani
  88. Ibn Hajar al-Haytami
  89. Ibn Hibban
  90. Ibn Juzayy
  91. Ibn Kathir
  92. Ibn Khafif
  93. Ibn Khaldun
  94. Ibn Mada'
  95. Ibn Malik
  96. Ibn Sidah
  97. Ibn Tumart
  98. Ibn al-Hajj al-Abdari
  99. Ibn al-Jawzi
  100. Ibn al-Jazari
  101. Ibn al-Qattan
  102. Ibn al-Salah
  103. Izz ad-Din al-Qassam
  104. Izz al-Din ibn 'Abd al-Salam
  105. Jalal al-Din al-Dawani
  106. Jamal al-Din al-Mizzi
  107. Khalil ibn Ishaq al-Jundi
  108. List of Ash'aris and Maturidis
  109. Mamluk
  110. Muhammad 'Ilish
  111. Muhammad Alawi al-Maliki
  112. Muhammad Arafa al-Desouki
  113. Muhammad Mayyara
  114. Muhammad Metwalli al-Sha'rawi
  115. Muhammad Said Ramadan al-Bouti
  116. Muhammad al-Tahir ibn Ashur
  117. Muhammad al-Zurqani
  118. Muhammad ibn Ali al-Sanusi
  119. Nizam al-Din al-Nisapuri
  120. Nizam al-Mulk
  121. Noah al-Qudah
  122. Nur al-Din al-Haythami
  123. Nur al-Din al-Samhudi
  124. Omar al-Mukhtar
  125. Qadi Ayyad
  126. Qutuz
  127. Said Nursî
  128. Saladin
  129. Shams al-Din al-Kirmani
  130. Shihab al-Din al-Qarafi
  131. Sultanate of Rum
  132. Taj al-Din al-Subki
  133. Taqi al-Din al-Subki
  134. The Moderation in Belief
  135. Yusuf ibn Tashfin
  136. Zain al-Din al-'Iraqi
  137. Zakariyya al-Ansari

— Preceding unsigned comment added by TheEagle107 (talkcontribs) 03:47, 22 January 2020 (UTC)

This is a very large nav template and placement in an article will matter, a lot. Placement probably shouldn't be automated, and it probably should be collapsed by default. -- GreenC 15:48, 22 January 2020 (UTC)

Bot to update number of Duolingo users doing a course

Hi there! I’ve been editing the Duolingo Wikipedia article to keep it up to date with the number of learners on each course. I was wondering if there’s a boy that could update the lists daily, rather than having to do it myself, or how I could create such a bot? Thanks! :-) — Preceding unsigned comment added by CcfUk2018 (talkcontribs) 03:28, 22 January 2020 (UTC)

I'm not sure that's the kind of minor statistical data that Wikipedia needs, let alone needs updated on a daily basis. ♠PMC(talk) 15:02, 22 January 2020 (UTC)
Turing test: bot or boy? -- GreenC 17:05, 22 January 2020 (UTC)

Mass RfD nomination

This is a multi-part request.

The first part should be relatively uncontroversial: it is to generate a page (for instance, User:Tigraan/Exxx redirects) containing a list of all pages which are redirects and whose title matches the regexp E1?[0-9]{3}[a-j]?. (If there is an easy way that I could do it myself, please enlighten me.) Bonus points if the page contains the current redirect targets as well. I estimate this would be around 1000 pages.
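For reference, that title pattern can be tested locally before any tagging happens. Note that it deliberately over-matches: as the thread notes further down, titles such as E112 turn out to be false positives that need filtering.

```python
import re

# The pattern from the request, applied to whole titles: "E", an
# optional leading "1", three digits, and an optional suffix a-j.
E_NUMBER = re.compile(r"E1?[0-9]{3}[a-j]?")

titles = ["E100", "E1105b", "E112", "Einstein", "E42"]
matches = [t for t in titles if E_NUMBER.fullmatch(t)]
```

Using fullmatch (rather than search) is what keeps titles like "Einstein" out of the candidate list.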

The second part would be, after manual inspection of the redirects to clear up false positives, to mass-tag those redirects for a WP:RFD bundled nomination. That certainly requires consensus but I got mostly ignored when asking at the places I would think to ask: I posted at Wikipedia_talk:WikiProject_Food_and_drink#Food_additives_codes_redirect_to_chemical_compounds_instead_of_E_number_article (where you can read a sketch of the RfD nomination rationale) and Wikipedia_talk:Redirects_for_discussion#Nominating_lots_of_related_redirects, both of which combined attracted a whole one other comment (supporting the proposed RfD) after a week. (If you want to see more solid consensus, please tell me where to ask for it.)

The third part would be to clean up after the RfD, either by untagging and leaving things in place if rejected, or by retargeting the redirects according to a relatively simple scheme. TigraanClick here to contact me 17:49, 23 January 2020 (UTC)

@Tigraan:Part one's easy: is across all mainspace pages, and is only what's linked from E number. As far as actual tagging goes, I dunno. Someone with AWB could probably do it easily enough, but a BRFA would be required to use a fully automatic bot (which is what I would do). I don't know if anyone's got an RfD tagging bot already that could help. --AntiCompositeNumber (talk) 23:39, 2 February 2020 (UTC)
DannyS712, you have the tag bot, right? ‑‑Trialpears (talk) 23:48, 2 February 2020 (UTC)
@Trialpears: yes, but only for CfD. Should I file a brfa for this? DannyS712 (talk) 03:44, 3 February 2020 (UTC)
DannyS712, if you want. This is probably not the last time approval for all XfDs would be helpful. ‑‑Trialpears (talk) 06:43, 3 February 2020 (UTC)
  • @AntiCompositeNumber: thanks for the quarry request, especially the modified version which eliminates quite a few false positives (such as E112). I downloaded the CSV. I checked all titles and ~20 articles and it all looks in order.
@DannyS712: I will give a look at the AWB solution this weekend, and keep in touch if I need you/your bot. TigraanClick here to contact me 12:02, 3 February 2020 (UTC)

Replace Template:Distinguish in Category namespace for Template:Category distinguish

There are about 2,000 transclusions: Special:WhatLinksHere/Template:Distinguish&namespace=14&limit=500

For an example of the change, see change history of Category:Literature:

  • {{distinguish|Category:Publications}}
  • {{category distinguish|Publications}}

Thanks. fgnievinski (talk) 21:18, 26 January 2020 (UTC)

I feel like it might be a better idea to merge the templates and implement the changed wording automatically. You should consider opening a discussion at WP:TFD. --AntiCompositeNumber (talk) 03:42, 27 January 2020 (UTC)
Aye, in this case I'd think that two templates are one too many. Jo-Jo Eumerus (talk) 09:22, 27 January 2020 (UTC)

Bot for lint errors

I know this is going to be quite a bit of work; however, I feel it will have significant value once the process has caught up.

I refer to the error cat "Tidy bug affecting font tags wrapping links (4,275,998 errors)" as of today, some of which date back to 2006.

As an example, the following signature:
[[User:AndonicO|<font face="Papyrus" color="Black">'''A'''</font><font face="Papyrus" color="DarkSlateGray">ndonic</font><font face="Papyrus" color="Black" size="2">'''O'''</font>]] <small><sup><font face="Times New Roman" color="Tan">[[User talk:AndonicO|''Talk'']]</font> | <font face="Times New Roman" color="Tan">[[User:AndonicO/My Autograph Book|''Sign Here'']]</font></sup></small>
has various errors that may cross several error categories and will never be fixed under the current methodology. A bot that does a simple find and replace with something like [[User:AndonicO|talk]] signature adjusted by lint bot for lint errors
would fix every instance of each signature as identified, and could cover many instances in order. This is especially important for aged and non-active accounts; the same approach could also identify current user signatures with errors, so we could offer a reformatted signature solution. Thoughts? 121.99.108.78 (talk) 00:03, 28 January 2020 (UTC)

I don't think there is consensus to drastically reformat editors' signatures in the way that is proposed here. I have been performing edits like this one that fix Linter errors without changing the rendering of the signatures, and I have had no negative feedback, as far as I can remember. At least one bot, Wikipedia:Bots/Requests for approval/Ahechtbot 2, has been approved to perform a limited set of Linter-related fixes to talk pages. It is possible that Ahecht would be willing to run a bot with a broader set of fixes. – Jonesey95 (talk) 01:21, 28 January 2020 (UTC)
I don't see that there's an absence of consensus either. Past objections to 'fixing' signatures were mostly because the proposed 'fixes' were based on ill-defined personal-preference criteria, like changing something like <b>...</b> to '''...'''. Lint errors are a clear criterion. This would probably have consensus, although that's still not a guarantee. Basically, take it to WP:VPR and see how the dice lands. Headbomb {t · c · p · b} 01:42, 28 January 2020 (UTC)
Thanks both, Headbomb and Jonesey95.
Ahechtbot is a blunt tool. I've only been doing find-and-replace on signatures that (a) are present and identical on very large numbers of pages and (b) affect the function or appearance of the rest of the talk page, not just the signature itself. For the example above, since you're not getting bleedover to other parts of the text, it's not really worth the overhead and extra edits. Frankly, this should be labeled as a Tidy bug, and it should be fixed so that <font>[[link]]</font> works just as well as [[link|<font>link</font>]]. Yes, I know that font tags are deprecated, but there are literally millions of pages that use the former format. --Ahecht (TALK
) 14:57, 28 January 2020 (UTC)

tyi's and if you want to add anything (talk) 10:07, 28 January 2020 (UTC)

Ahecht and others: formerly, <font>[[link]]</font> did work like [[link|<font>link</font>]]. That is, <font color="x">[[link]]</font>, and also <font style="color:x">[[link]]</font> both were processed by Tidy into [[link|<font...>link</font>]] (piped appropriately, of course). The font tag had to immediately wrap the Wikilink or External link, otherwise it was ignored. The font color, but not the font style, is detected as the Tidy font link bug, but the font style version of the Tidy font link bug is quite rare. Tidy has been replaced, so now font coloring tags immediately wrapping a Wikilink or external link are overridden, as you would logically expect, by default link colors. The replacement parser is an HTML 5-compatible upgrade from Tidy and we are not going back. Wikipedia:Linter#How you can help was written November 23, 2017, and since it was first written it has always said that it is OK to fix lint errors, including on talk pages, but one should "[t]ry to preserve the appearance." So, for more than two years, it has officially been OK to de-lint user signatures, preserving the appearance, and this has never been officially challenged or disputed; it is the consensus. (However, I don't think there's a consensus on systematic lint fixing by bot.) The Tidy font bug is a high priority lint error and I would favor fixing these lint errors in a systematic way by bot, taking care, of course, to exclude talk page discussions where fixing an instance of this error would confuse a question about this exact behavior. —Anomalocaris (talk) 02:22, 3 February 2020 (UTC)
Wikipedia:Bots/Requests for approval/PkbwcgsBot 17 has been approved to do this using WPCleaner, but WPCleaner is broken for me right now, so I am unable to do this task until it is up and running again. Pkbwcgs (talk) 17:21, 7 February 2020 (UTC)
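For what it's worth, the appearance-preserving rewrite Anomalocaris describes (the font coloring moves inside the link, since the HTML5 parser no longer mimics Tidy), combined with the whole-signature find-and-replace idea from the original request, could be sketched like this. The signature strings here are purely illustrative:

```python
# Hypothetical lookup of exact broken-signature strings and their
# lint-free, appearance-preserving replacements: a font tag wrapping a
# link becomes a span inside the link (the Tidy font bug fix).
REPLACEMENTS = {
    '<font color="Tan">[[User talk:Example|Talk]]</font>':
        '[[User talk:Example|<span style="color:tan">Talk</span>]]',
}

def fix_signatures(wikitext):
    """Apply each exact find-and-replace pair; no regex needed, since
    every occurrence of a given signature is byte-identical."""
    for old, new in REPLACEMENTS.items():
        wikitext = wikitext.replace(old, new)
    return wikitext
```

The exact-string approach is what makes the idea safe to run at scale: each pair can be reviewed by a human once and then applied mechanically everywhere.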

Correcting links to Mexican Federal Telecommunications Institute documents

This is simple and can be handled by just about any bot.

The Federal Telecommunications Institute (IFT) of Mexico made a one-character change in document URLs that will need updating. Hundreds of Mexican radio articles cite its technical and other authorizations.

They added a "v" to the URL, so URLs that were formerly

changed to

Is this possible to have done as a bot task? The articles that need it are mostly in Category:Radio stations in Mexico or Category:Television stations in Mexico. Raymie (tc) 20:10, 4 February 2020 (UTC)

Huh. I was certain there is some kind of general bot for this kind of link replacement operation... Jo-Jo Eumerus (talk) 17:35, 7 February 2020 (UTC)
Jo-Jo Eumerus, I think GreenC (talk · contribs) ends up doing most of them at WP:URLREQ --AntiCompositeNumber (talk) 20:34, 7 February 2020 (UTC)

@Raymie: In addition they now serve https only, but left no http->https redirect, and almost all of the links on WP are http. This should be done by a URL-specific bot because of archive URLs and {{dead link}} tags (some may already be marked dead and/or archived and need to be unwound once corrected). Could you post/copy the request to URLREQ? There is a backlog but I will get to it. -- GreenC 20:02, 8 February 2020 (UTC)

Bot to fix up some blank peer reviews

Hi all, I saw that "peer reviews" are now included on WP:Article alerts (yay!). Unfortunately it turns out there are more than a few reviews that haven't been opened properly. These will clog up article alert lists, and I was wondering if I could have some help with a bot to process them (or even to generate a list for me).

In short:

  • The list of all transclusions in article space is here: Special:WhatLinksHere/Template:Peer_review&namespace=1&limit=500
  • An example of an unopened peer review is here: [[1]] - ie, a review without a corresponding "archive1" page
  • I suggest that all unopened peer reviews more than a week old simply be removed from the talk page with the summary "remove unopened peer review"
  • I can manually remove this, but as a repetitive action that may take some time I'd be very grateful if a bot could do it for me :)
  • If there is a way to preserve the code for this, I can keep a link in the WP:PR archives so it can be run every year or so.

Thanks for your help, --Tom (LT) (talk) 08:47, 10 February 2020 (UTC)

Tom (LT), According to, there are 42 pages transcluding {{peer review}} without a corresponding WP:Peer review/<title>/archive page right now. Unless misfiled peer review requests are more common than that query indicates, it seems like this is something human editors can handle. You can re-run the query in the future by logging in to Quarry, hitting Fork, then hitting Submit Query. --AntiCompositeNumber (talk) 17:48, 21 February 2020 (UTC)

Request for change of (soon to be) broken links to LPSN

Thread moved to Wikipedia:Link_rot/URL_change_requests#Request_for_change_of_(soon_to_be)_broken_links_to_LPSN and poster notified. -- GreenC 03:26, 14 February 2020 (UTC)

Bot needed to tell Wikiprojects about open queries on articles tagged for that WikiProject

I sometimes put queries on article talkpages, some get answered quickly, some stick around indefinitely and occasionally old ones get resolved. My suspicion is that my experience is not unusual, but I hope that this is a software issue and that a lot more article queries could be resolved if the relevant editors knew of them. Would it be possible to have a bot produce reports for each Wikiproject of open/new talk page threads that are on pages tagged to that project? ϢereSpielChequers 09:49, 10 February 2020 (UTC)

The main thing is how do those 'queries' get detected? What constitutes 'queries'? Headbomb {t · c · p · b} 12:44, 10 February 2020 (UTC)
One way that might work, but would throw a lot of false positives, would be to notify the project(s) if there is a post that has no reply after a week. Another option would be to have some form of template like {{SPER}} that could summon the bot if a user wanted more input. Primefac (talk) 12:50, 10 February 2020 (UTC)
@Headbomb I was assuming a new query would be any new section on the talkpage of an article tagged for that wikiproject, excluding any tagged as {{resolved}}. @Primefac I'm not sure of the false positives, other than on multi-tagged articles. If that did get to be an issue it might be necessary to give people going through such a report the option to mark a section as not relevant to their wikiproject. So an article about a mountain might be tagged under vulcanism, climbing, skiing and still get a query as to the gods that some religion believes live on it. But I suspect that the false positives will not be a huge issue. ϢereSpielChequers 16:16, 10 February 2020 (UTC)
"any new section on the talkpage of an article tagged for that wikiproject, excluding any tagged as {{resolved}}" given that most sections on talk pages don't need to be marked as {{resolved}} to begin with, I can't see this idea/criteria getting consensus. The signal-to-noise ratio would be ludicrously small. Taking Talk:Clara Schumann from a few sections above as an example, that would be 39 'queries' for that article alone. Headbomb {t · c · p · b} 00:50, 11 February 2020 (UTC)
Clearly that article is not typical. But the most recent thread is from January this year, the previous one from last October, so a report of any new section would include it now provided new was interpreted as broadly as thirty days. If we only went back 7 days it would already have dropped off the report. In the unlikely event of needing to make the report shorter, if someone has a tool for identifying signatures it could list single participant threads. ϢereSpielChequers 07:17, 11 February 2020 (UTC)
I would certainly veto such an idea. These would likely be spam levels of updates, and duplications for those that would watch the updater and also the article itself. I would assume most would unwatch the updated list of "queries" pretty quickly, which would be pointless. A better solution is to post on the wikiproject talk page if a post doesn't get enough attention.
There are also a LOT of inactive/semi active Wikiprojects that would get a lot of bot updates, for no one to read. Seems like a lot of work and edits when we could simply post something on the wikiproject talk page to gain additional input. Best Wishes, Lee Vilenski (talkcontribs) 08:45, 11 February 2020 (UTC)
Yes lots of wikiprojects are inactive, perhaps some will be revived by having this report, others will be unchanged. The report would be a success if an increased proportion of talkpage queries get a response, 100% response rate would be nice, but this report aims to reduce a problem not to totally resolve it. As for posting things on WikiProject talkpages, that is reasonable advice to the regulars, not something we expect newbies to do, and in case it wasn't obvious, it is unnoticed queries by newbies that I worry most about. ϢereSpielChequers 09:15, 11 February 2020 (UTC)
This can be done without a bot with RecentChangesLinked where you set it to "Show pages linking to". --Izno (talk) 16:25, 11 February 2020 (UTC)
That just gives you an indication that there has been a change to a page not that there is a query that needs to be responded to on that page. Keith D (talk) 00:03, 12 February 2020 (UTC)
  • If I'm reading the intent correctly, I think this can be resolved by, alternatively, (1) using WikiProject banners to encourage editors to ask the question there instead, (2) relying on WP:Article alerts to list all WP:RFCs of consequence in the project, or (3) adding some tag with lower stakes than an RFC (e.g., a variant of {{help me}}) and submitting a feature request for WP:Article alerts to track that template/tag. On the whole, could use more evidence that this is an actual problem. Agreed that it would be a lot of noise to create a listing for every new, unreplied talk page section on every project page, especially when such sections do not necessarily require responses (e.g., "FYI" messages). czar 01:57, 17 February 2020 (UTC)

Copy coordinates from lists to articles

Virtually every one of the 3000-ish places listed in the 132 sub-lists of National Register of Historic Places listings in Virginia has an article, and with very few exceptions, both lists and articles have coordinates for every place, but the source database has lots of errors, so I've gone through all the lists and manually corrected the coords. As a result, the lists are a lot more accurate, but because I haven't had time to fix the articles, tons of them (probably over 2000) now have coordinates that differ between article and list. For example, the article about the John Miley Maphis House says that its location is 38°50′20″N 78°35′55″W / 38.83889°N 78.59861°W / 38.83889; -78.59861, but the manually corrected coords on the list are 38°50′21″N 78°35′52″W / 38.83917°N 78.59778°W / 38.83917; -78.59778. Like most of the affected places, the Maphis House has coords that differ only a small bit, but (1) ideally there should be no difference at all, and (2) some places have big differences, and either we should fix everything, or we'll have to have a rather pointless discussion of which errors are too little to fix.

Therefore, I'm looking for someone to write a bot to copy coords from each place's NRHP list to the coordinates section of {{infobox NRHP}} in each place's article. A few points to consider:

  • Some places span county lines (e.g. bridges over border streams), and in many of these cases, each list has separate coordinates to ensure that the marked location is in that list's county. For an extreme example, Skyline Drive, a long scenic road, is in eight counties, and all eight lists have different coordinates. The bot should ignore anything on the duplicates list; this is included in citation #4 of National Register of Historic Places listings in Virginia, but I can supply a raw list to save you the effort of distilling a list of sites to ignore.
  • Some places have no coordinates in either the list or the article (mostly archaeological sites for which location information is restricted), and the bot should ignore those articles.
  • Some places have coordinates only in the list or only in the article's {{Infobox NRHP}} (for a variety of reasons), but not in both. Instead of replacing information with blanks or blanks with information, the bot should log these articles for human review.
  • Some places might not have {{infobox NRHP}}, or in some cases (e.g. Newport News Middle Ground Light) it's embedded in another infobox, and the other infobox has the coordinates. If {{infobox NRHP}} is missing, the bot should log these articles for human review, while embedded-and-coordinates-elsewhere is covered by the previous bullet.
  • I don't know if this is the case in Virginia, but in some states we have a few pages that cover more than one NRHP-listed place (e.g. Zaleski Mound Group in Ohio, which covers three articles); if the bot produced a list of all the pages it edits, a human could go through the list, find any entries with multiple appearances, and check them for fixes.
  • Finally, if a list entry has no article at all, don't bother logging it. We can use WP:NRHPPROGRESS to find what lists have redlinked entries.

I've copied this request from an archive three years ago; an off-topic discussion happened, but no bot operators offered any opinions. Neither then nor now has any discussion been conducted for this idea; it's just something I've thought of. I've come here basically just to see if someone's willing to try this route, and if someone says "I think I can help", I'll start the discussion at WT:NRHP and be able to say that someone's happy to help us. Of course, I wouldn't ask you actually to do any coding or other work until after consensus is reached at WT:NRHP. Nyttend (talk) 15:53, 12 February 2020 (UTC)
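As a rough illustration of the core extraction step, under the assumption that the list rows carry |lat= and |lon= parameters (parameter names are a guess and would need to be verified against the live {{NRHP row}} template before any real run):

```python
import re

def extract_coords(row_wikitext):
    """Return (lat, lon) in decimal degrees from a list row, or None so
    the entry can be logged for human review, per the bullets above."""
    lat = re.search(r"\|\s*lat\s*=\s*(-?[\d.]+)", row_wikitext)
    lon = re.search(r"\|\s*lon\s*=\s*(-?[\d.]+)", row_wikitext)
    if lat and lon:
        return float(lat.group(1)), float(lon.group(1))
    return None
```

The same log-instead-of-edit behaviour would apply to the missing-infobox and coords-only-on-one-side cases listed above.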

You could use {{Template parameter value}} to pull the coordinate values out of the {{NRHP row}} template. It would still likely take a bot to do the swap but it would mean less updating in the future. Of course, if the values are 100% accurate on the lists then I suppose it wouldn't be necessary. Primefac (talk) 16:55, 12 February 2020 (UTC)
Never heard of that template before. It sounds like an Excel =whatever function, e.g. in cell L4 you type =B4 so that L4 displays whatever's in B4; is that right? If so, I don't think it would be useful unless it were immediately followed by whatever's analogous to Excel's "Paste Values". Is that what you mean by having a bot doing the swap? Since there are 3000+ entries, I'm sure there are a few errors somewhere, but I trust they're over 99% accurate. Nyttend (talk) 02:57, 13 February 2020 (UTC)
That's a reasonable analogy, actually. Check out the source of Normani#Awards_and_nominations: it pulls the wins and nominations values from the infobox at the "list of awards", which means the main article doesn't need to be updated every time the list is changed.
As far as what the bot would do, it would take one value of {{coord}} and replace it with a call to {{Template parameter value}}, pointing in the direction of the "more accurate" data. If the data is changed in the future, it would mean not having to update both pages.
Now, if the data you've compiled is (more or less) accurate and of the not-likely-to-change variety (I guess I wouldn't expect a monument to move locations) then this is a silly suggestion – since there wouldn't be a need for automatic syncing – and we might as well just have a bot do some copy/pasting. Primefac (talk) 21:27, 14 February 2020 (UTC)
Y'know, this sort of situation is exactly what Wikidata is designed for... --AntiCompositeNumber (talk) 22:29, 14 February 2020 (UTC)
Primefac, thank you for the explanation. The idea sounds wonderful for situations like the list of awards, but yes these are rather accurate and unlikely to change (imagine someone picking up File:Berry Hill near Orange.jpg and moving it off site), so the bot copy/paste job is probably best. Nyttend (talk) 02:23, 15 February 2020 (UTC)
By the way, Primefac, are you a bot operator, or did you simply come here to offer useful input as a third party? Nyttend (talk) 03:12, 20 February 2020 (UTC)
I am both botop and BAG, but I would not be offering to take up this task as it currently stands. Primefac (talk) 11:24, 20 February 2020 (UTC)
Thank you for helping me understand. "as it currently stands" Is there something wrong with it, i.e. if changes were made you'd be offering, or do you simply mean that you have other interests (WP:VOLUNTEER) and don't feel like getting involved in this one? This question might sound like I'm being petty; I'm writing with a smile and not trying to complain at all. Nyttend (talk) 00:27, 21 February 2020 (UTC)

Harvard Bot

I've been using User:Ucucha/HarvErrors.js for a few days now, and it's a pretty nice little script. However, the issues it highlights should be flagged for everyone to see and become part of regular cleanup. For example, in Music of India, two {{harv}}-family templates are used to generate references to anchors that are meant to point to a full citation.

However, inspecting the page reveals those anchors aren't found anywhere on the page. Even a manual search won't find the corresponding citations on that page, because this isn't a case of someone having forgotten a |ref=harv in a citation template; the citations just aren't there to begin with.

A bot should flag those problems, probably with a new template {{broken footnote}}, or possibly on the talk page.

Headbomb {t · c · p · b} 15:13, 21 February 2020 (UTC)
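The check the script performs can be sketched as wikitext scanning: collect the CITEREF anchors that {{harv}}-family templates point at, collect the anchors the citation templates actually generate, and report the difference. A minimal sketch (function name hypothetical; it assumes the simple two-argument {{harv|Last|Year}} form and flat CS1/CS2 citations with |last= and |year= — a real bot would need a proper parser such as mwparserfromhell to handle nested templates, multiple authors, {{sfn}}, and custom |ref= values):

```python
import re

def broken_harvs(wikitext):
    """Return harv targets with no matching CITEREF anchor (sketch only)."""
    # Anchors that {{harv}}/{{harvnb}} templates link to.
    targets = {
        "CITEREF" + last.strip() + year
        for last, year in re.findall(r"\{\{harvn?b?\|([^|}]+)\|(\d{4})", wikitext)
    }
    # Anchors that flat CS1/CS2 citation templates would emit.
    anchors = set()
    for cite in re.findall(r"\{\{(?:cite [^{}]*|citation[^{}]*)\}\}", wikitext, re.I):
        last = re.search(r"\|\s*last1?\s*=\s*([^|}]+)", cite)
        year = re.search(r"\|\s*year\s*=\s*(\d{4})", cite)
        if last and year:
            anchors.add("CITEREF" + last.group(1).strip() + year.group(1))
    return sorted(targets - anchors)
```

Anything the function returns is a footnote that could be tagged with {{broken footnote}} (or listed on the talk page) for human repair.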

Looks like the inline ref problem template {{citation not found}} is designed for this already. --AntiCompositeNumber (talk) 15:18, 21 February 2020 (UTC)
@AntiCompositeNumber: – {{citation not found}} is too general; it's meant for citations that are completely missing (see harv problems, #1). This is more specific: the citation could be there, but simply not linked to correctly. The bot shouldn't add a tag to a footnote already tagged with {{citation not found}}, though, since that would almost surely be redundant. Headbomb {t · c · p · b} 15:35, 21 February 2020 (UTC)
Perhaps a new template such as {{Harv error}} would be needed. Or one might rig the CS1 templates to produce an automatic error message ... Trappist the monk? Jo-Jo Eumerus (talk) 19:36, 21 February 2020 (UTC)
@Jo-Jo Eumerus: well, that's what {{broken footnote}} is. @Trappist the monk: CS1 should really emit |ref=harv automatically though. That would kill a great deal of those errors (although certainly not all). Headbomb {t · c · p · b} 20:12, 21 February 2020 (UTC)
Um, the preceding post was by me, not Trappist. I just pinged them. Jo-Jo Eumerus (talk) 20:50, 21 February 2020 (UTC)
Brainfart, meant to ping Trappist for the second part only. Headbomb {t · c · p · b} 21:01, 21 February 2020 (UTC)
This is a very good idea. I recently came across this in Easter Island, ref #113 (Fischer 2008): there is no reference for Fischer 2008. In fact the reference is a faux-Harvard <ref>Fischer 2008: p. 149</ref>. There are a lot of permutations of Harvard reference problems that a specialized bot could become expert on. -- GreenC 20:09, 21 February 2020 (UTC)
Since there appears to be interest in this,   Coding... No real preference about what template should be applied. --AntiCompositeNumber (talk) 22:50, 21 February 2020 (UTC)

Deleting tracking parts of URLs from sources etc.

There are a lot of URLs in sources that have tracking extensions from Facebook attached; they should be deleted. I think that would be a fine job for a bot, and as it's probably happening unintentionally, by editors who copy and paste these links without much thought, it should probably be done once per day or week or so. The same probably goes for Google Analytics extensions with utm_ parameters. Grüße vom Sänger ♫ (talk) 15:02, 22 February 2020 (UTC)

@AManWithNoPlan and Sänger: Probably a good idea to at least offload some of that to User:Citation bot. Headbomb {t · c · p · b} 16:06, 22 February 2020 (UTC)
This can be prone to breaking archive URLs and creating link rot if one is not careful. See WP:WEBARCHIVES for a list of the archives used on Enwiki and the formats they use. The regex at Wikipedia:Bots/Requests_for_approval/DemonDays64_Bot_2 is an example; it uses a lookbehind to avoid URLs that are embedded in an archive URL, and User:DemonDays64 could probably help explain it. The other problem is that if you retain the tracking bits in the archive URL but remove them from the source |url=, they are now mismatched and look like different URLs; other bots might pick up on that and restore the archive URL's version of the source URL, since it is the authority once the link is dead. Personally, I would bypass any citation that involves an archive URL; too many complications. -- GreenC 16:23, 22 February 2020 (UTC)
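The stripping step itself is simple if, as suggested above, any URL belonging to an archive service is left untouched; the hard part a real bot must get right is the service list and the |url=/|archive-url= pairing. A minimal sketch (function name hypothetical, archive test deliberately naive, tracker list limited to the common utm_/fbclid/gclid parameters):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def strip_tracking(url):
    """Drop tracking query parameters, leaving archive URLs alone (sketch).

    The hostname check below covers only two archive services; a real
    bot would use the full WP:WEBARCHIVES list and also keep the
    source URL consistent with any paired |archive-url=.
    """
    parts = urlsplit(url)
    if parts.netloc.endswith(("web.archive.org", "archive.today")):
        return url  # never rewrite inside an archive snapshot
    kept = [
        (k, v)
        for k, v in parse_qsl(parts.query, keep_blank_values=True)
        if not (k.startswith("utm_") or k in ("fbclid", "gclid"))
    ]
    return urlunsplit(parts._replace(query=urlencode(kept)))
```

Note that skipping by hostname sidesteps the lookbehind problem mentioned above: a snapshot like https://web.archive.org/web/2019/https://example.com/?utm_source=x is returned unchanged, tracking bits and all.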
Tracking bits are evil; they must go away. If the web archive used this evil URL in the past, that's something we have to live with: better link rot than supplying Facebook with anything. Grüße vom Sänger ♫ (talk) 16:27, 22 February 2020 (UTC)
@Sänger: not challenging the idea (it'd be great to clean up links if there weren't side effects), but think about this: we'd be hurting Facebook by leaving them in, since it gives them bad data every time someone clicks one that isn't actually in the place it was supposed to be. It would still be a good idea if only the archive bots could reliably handle it. DemonDays64 (talk) 17:48, 22 February 2020 (UTC)
I think KolbertBot 4 operated by Jon Kolbert has approval for this. ‑‑Trialpears (talk) 22:48, 22 February 2020 (UTC)