
This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots: tasks that are either too complicated to program or do not have consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).

Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).

Please add your bot requests to the bottom of this page.
Make a new request

Bot to improve names of media sources in references

Many references on Wikipedia point to large media organizations such as the New York Times. However, the names are often abbreviated, not italicized, and/or missing wikilinks to the media organization. I'd like to propose a bot that could go to an article like this one and automatically replace "NY Times" with "New York Times". Other large media organizations (e.g. BBC, Washington Post, and so on) could fairly easily be added, I imagine. - Sdkb (talk) 04:43, 19 November 2018 (UTC)

  • What about the Times's page? The page says: 'The New York Times (sometimes abbreviated as the NYT and NY Times)…' The bot might replace those too and that might be a little confusing… The 2nd Red Guy (talk) 14:55, 23 April 2019 (UTC)
    • And this page too! Wait, what if it changes its own description on its user page? The 2nd Red Guy (talk) 15:43, 23 April 2019 (UTC)
  • I would be wary of WP:CONTEXTBOT. For instance, NYT can refer to a supplement of the Helsingin Sanomat#Format (in addition to the New York Times), and may be the main use on Finland-related pages. TigraanClick here to contact me 13:40, 20 November 2018 (UTC)
    • @Tigraan: That's a good point. I think it'd be fairly easy to work around that sort of issue, though — before having any bot make any change to a reference, have it check that the URL goes to the expected website. So in the case of the New York Times, if a reference with "NYT" didn't also contain the URL, it wouldn't make the replacement. There might still be some limitations, but given that the bot is already operating only within the limited domain of a specific field of the citation template, I think there's a fairly low risk that it'd make errors. - Sdkb (talk) 10:52, 25 November 2018 (UTC)
  • I should add that part of the reason I think this is important is that, in addition to just standardizing content, it'd allow people to more easily check whether a source used in a reference is likely to be reliable. - Sdkb (talk) 22:01, 25 November 2018 (UTC)
@Sdkb: This is significantly harder than it seems, as most bots are. Wikipedia is one giant exception - the long tail of unexpected gotchas is very long, particularly on formatting issues. Another problem is agencies (AP, UPI, Reuters). Often the NYT is running an agency story. The cite should use NYT in the |work= and the agency in the |agency=, but often the agency ends up in the |work= field, so the bot couldn't blindly make changes without considerable room for error. I have a sense of what needs to be done: extract every cite on Enwiki with a |url= containing, extract every |work= from those and create a unique list, manually remove from the list anything that shouldn't belong, like Reuters etc., then the bot keys off that list before making live changes, so it knows what is safe to change (anything in the list). It's just a hell of a job in terms of time and resources, considering all the sites to be processed and the manual checks involved. See also Wikipedia:Bots/Dictionary#Cosmetic_edit: "the term cosmetic edit is often used to encompass all edits of such little value that the community deems them to not be worth making in bulk" .. this is probably a borderline case, though I have no opinion on which side of the border it falls; other people might during the BRFA. -- GreenC 16:53, 26 November 2018 (UTC)
@GreenC: Thanks for the thought you're putting into considering this idea; I appreciate it. One way the bot could work to avoid that issue is to not key off of URLs, but rather off of the abbreviations. As in, it'd be triggered by the "NYT" in either the work or agency field, and then use the URL just as a confirmation to double check. That way, errors users have made in the citation fields would remain, but at least the format would be improved and no new errors would be introduced. - Sdkb (talk) 08:17, 27 November 2018 (UTC)
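A sketch of that confirmation logic in Python, with a hypothetical abbreviation table (the field names mirror {{cite web}} parameters, but this is an illustration, not a working bot):

```python
import re

# Hypothetical table: abbreviation -> (URL fragment that confirms it, canonical replacement).
CANONICAL = {
    "NYT": ("nytimes.com", "[[The New York Times]]"),
    "NY Times": ("nytimes.com", "[[The New York Times]]"),
    "WSJ": ("wsj.com", "[[The Wall Street Journal]]"),
}

def normalize_work(citation: str) -> str:
    """Expand an abbreviated |work= value only when |url= confirms the publisher."""
    url_m = re.search(r"\|\s*url\s*=\s*([^|\s}]+)", citation)
    work_m = re.search(r"\|\s*work\s*=\s*([^|}]+)", citation)
    if not url_m or not work_m:
        return citation
    work = work_m.group(1).strip()
    entry = CANONICAL.get(work)
    if entry and entry[0] in url_m.group(1):
        # Rewrite only the |work= field; leave the rest of the citation untouched.
        return citation.replace(work_m.group(0),
                                work_m.group(0).replace(work, entry[1]))
    return citation
```

Anything not matching both the abbreviation and the URL fragment is left alone, which is the conservative behaviour described above: existing field errors remain, but no new ones are introduced.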
Right, that's basically what I was saying also. But getting all the possible abbreviations requires scanning the system, because the variety of abbreviations is unknowable ahead of time. Unless we pick a few that might be common, but that would miss a lot. -- GreenC 14:54, 27 November 2018 (UTC)
Well, for NYT at the least, citations with a |url= could be safely assumed to be referring to the New York Times. Headbomb {t · c · p · b} 01:20, 8 December 2018 (UTC)
Yeah, I'm not too worried about comprehensiveness for now; I'd mainly just like to see the bot get off the ground and able to handle the two or three most common abbreviations for maybe half a dozen really big newspapers. From there, I imagine, a framework will be in place that'd then allow the bot to expand to other papers or abbreviations over time. - Sdkb (talk) 07:01, 12 December 2018 (UTC)
Conversation here seems to have died down. Is there anything I can do to move the proposal forward? - Sdkb (talk) 21:42, 14 January 2019 (UTC)
I am not against this idea totally but the bot would have to be a very good one for this to be a net positive and not end up creating more work. Emir of Wikipedia (talk) 22:18, 14 January 2019 (UTC)
@Sdkb: you could build a list of unambiguous cases. E.g. |work/journal/magazine/newspaper/website=NYT combined with |url= Short of that, it's too much of a WP:CONTEXTBOT. I'll also point out that NY Times isn't exactly obscure/ambiguous either. Headbomb {t · c · p · b} 17:47, 27 January 2019 (UTC)
Okay, here's an initial list:

Sdkb (talk) 03:54, 1 February 2019 (UTC)

What about BYU to Brigham Young University? The 2nd Red Guy (talk) 15:41, 23 April 2019 (UTC)

Changing New York Times to The New York Times would be great. I have seen people going through AWB runs doing it, but seems like a waste of human time. Kees08 (Talk) 23:32, 2 February 2019 (UTC)

@Kees08: Thanks; I added in those cases. - Sdkb (talk) 01:19, 3 February 2019 (UTC)
Not really sure changing Foobar to The Foobar is desired in many cases. WP:CITEVAR will certainly apply to a few of those. For NYT/NY Times, WaPo/Wa Po, WSJ, LA Times/L.A. Times, are those guaranteed to refer to a version of these journals that was actually called by the full name? Meaning, was there some point in the LA Times's history where "LA Times" or some such was featured on the masthead of the publication, in either print or web form? If so, that's a bad bot task. If not, then there's likely no issue with it. Headbomb {t · c · p · b} 01:54, 3 February 2019 (UTC)
For the "the" publications, it's part of their name, so referring to just "Foobar" is incorrect usage. (It's admittedly a nitpicky correction, but one we may as well make while we're in the process of making what I'd consider more important improvements, namely adding the wikilinks to help readers more easily verify the reliability of a source.) Regarding the question of whether any of those publications ever used the abbreviated name as a formal name for something, I'd doubt it, as it'd be very confusing, but I'm not fully sure how to check that by Googling. - Sdkb (talk) 21:04, 3 February 2019 (UTC)
The omission of 'the' is a legitimate stylistic variation. And even if 'N.Y. Times' never appeared on the masthead, the expansion of abbreviations (e.g. N.Y. Times / L.A. Times) could also be a legitimate stylistic variation. The acronyms (e.g. NYT/WSJ) are much safer to expand though. Headbomb {t · c · p · b} 21:41, 3 February 2019 (UTC)
It is a change I have had to do many times since it is brought up in reviews (FAC usually I think). It would be nice if we could find parameters to make it possible. Going by the article, since December 1, 1896, it has been referred to as The New York Times. The ranges are:
  • September 18, 1851–September 13, 1857 New-York Daily Times
  • September 14, 1857–November 30, 1896 The New-York Times
  • December 1, 1896–current The New York Times
New York Times has never been the title of the newspaper, and we could use date ranges to verify we do not hit the edge cases of pre-December 1, 1896 The New York Times articles. There is The New York Times International Edition, but it seems like it has a different base URL than the main paper. I can go through the effort to verify the names of the other publications throughout the years, but do you agree with my assessment of The New York Times? Kees08 (Talk) 01:51, 4 February 2019 (UTC)

Is anyone interested in this? I still think it would save myself a lot of editing time. Headbomb did you have further thoughts? Kees08 (Talk) 16:21, 15 March 2019 (UTC)

@Kees08: I definitely still am, but I'm not sure how to move the proposal forward from here. - Sdkb (talk) 21:45, 21 March 2019 (UTC)

Credits adapted from

Thousands of articles about music artists, albums and songs reference the source in the body text (example: OnePointFive). Such references belong in a <ref> block at the end of the page and not in the body text. Most of these references follow a common pattern, so I hope this kind of edit can be made by a bot.

I suggest making a bulk replacement from

= =Track listing= = Credits adapted from [[Tidal (service)|Tidal]].<ref name="Tidal">{{cite web|url=|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>


= =Track listing<ref name="Tidal">{{cite web|url=|title=ONEPOINTFIVE / Aminé on TIDAL|publisher=Tidal|accessdate=August 15, 2018}}</ref>= =

Different sources: Tidal (service), “the album notes”, “the album sleeve”, “the liner notes of XXX”. Different heading names, including “Track listing”, “Personnel”, “Credits and personnel”. Variants: “Credits adapted from XXX”, “All credits adapted from XXX”, “All personnel credits adapted from XXX”.
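A regex along these lines could locate the pattern and its attached reference (a Python sketch; it only covers the variants listed above, and real pages will certainly have more):

```python
import re

# Covers "Credits adapted from X", "All credits adapted from X",
# "All personnel credits adapted from X", each followed by a <ref>...</ref>.
CREDITS_RE = re.compile(
    r"(?:All\s+)?(?:personnel\s+)?[Cc]redits\s+adapted\s+from\s+"
    r"(?P<source>.+?)\.?\s*(?P<ref><ref[^>]*>.*?</ref>)",
    re.DOTALL,
)

def find_credit_lines(wikitext):
    """Return (source, ref) pairs for each 'Credits adapted from ...' sentence."""
    return [(m.group("source"), m.group("ref")) for m in CREDITS_RE.finditer(wikitext)]
```

Detection is the easy half; what to do with each match (and where the reference should land) is the open question in this thread.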

Does this sound feasible/sensible? --C960657 (talk) 17:14, 28 February 2019 (UTC)

References should not be located in section titles. Pretty sure there is a guideline about it, and it's not good for a couple of reasons. The correct way is to create a line that says "Source: [1]" or something. -- GreenC 17:43, 28 February 2019 (UTC)
"Citations should not be placed within, or on the same line as, section headings." (WP:CITEFOOT) — JJMC89(T·C) 03:38, 1 March 2019 (UTC)
Also (from MOS:HEADINGS): Section headings should: ... Not contain links, especially where only part of a heading is linked. Unless you use pure plain-text parenthetical referencing, refs always generate a link. --Redrose64 🌹 (talk) 12:41, 1 March 2019 (UTC)
You are right. I could not find a guideline on how to place the reference if it is the source of an entire section/table/list. "Source: [1]" is a good suggestion, perhaps even moved to the last line of the section.--C960657 (talk) 17:25, 1 March 2019 (UTC)
Note, I replaced == with = = above so the bots that update the TOC of this page can function as normal. Headbomb {t · c · p · b} 22:12, 1 April 2019 (UTC)

Make Articles in Compliance with MOS:SURNAME

I've noticed that a lot of articles are not in compliance with MOS:SURNAME, especially in Category:Living people. I've manually changed a few pages, but as a programmer, I think this could be greatly automated. Any repeats of the full name, or the first name, beyond the title, first sentence, and infobox should be replaced with the last name. I can help out in creating a bot that can accomplish this. InnovativeInventor (talk) 01:21, 21 March 2019 (UTC)

Just bumped into this: Wikipedia_talk:Manual_of_Style/Biography#Second_mention_of_forenames, so there should be detection of other people with the same last name. Additionally, this bot should support humans rather than automate the whole thing (as context is important). InnovativeInventor (talk) 03:57, 21 March 2019 (UTC)

@InnovativeInventor: Is this about the ordering of names in a category page, or about the use of names in the article prose? --Redrose64 🌹 (talk) 17:07, 21 March 2019 (UTC)
@Redrose64: This is about the reuse of names in the article prose and ensuring that the full name is only mentioned once (excluding ambiguous cases where the full name is necessary to clarify the subject of the sentence). InnovativeInventor (talk) 19:40, 21 March 2019 (UTC)
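For the human-assisted approach described above, a first pass could just flag repeat mentions rather than edit anything (a minimal sketch; the function name and interface are made up for illustration):

```python
import re

def flag_full_name_repeats(prose, full_name):
    """Return offsets of full-name mentions after the first, as candidates
    for a human reviewer to shorten to the surname per MOS:SURNAME."""
    offsets = [m.start() for m in re.finditer(re.escape(full_name), prose)]
    return offsets[1:]  # the first mention is allowed to stay
```

Leaving the actual replacement to a human sidesteps the ambiguous cases (same-surname relatives, patronymics, and so on).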
I don't like this, and I'm calling WP:CONTEXTBOT on it. Consider somebody from Iceland, such as Katrín Jakobsdóttir - the top of the article has a hatnote explaining that Jakobsdóttir is a patronymic, not a family name, and that the subject should be referred to by her given name.
Or somebody from a family with several notable members - have a look at Johann Ambrosius Bach (which is quite short) and consider how it would look if we used only surnames: After Bach's death, his two children, Bach and Bach, moved in with his eldest son, Bach. --Redrose64 🌹 (talk) 21:05, 21 March 2019 (UTC)
@Redrose64: The idea is that this will be a human-assisted bot, not a completely automated bot. Just something that can speed up the process. I agree that it depends on the context. But, it would be nice to assist efforts to regularize articles that are not in compliance with MOS:SURNAME. InnovativeInventor (talk) 03:23, 22 March 2019 (UTC)
InnovativeInventor - Considering it will be human assisted, wouldn't it be better to include the functionality inside AWB or create a user script? Kadane (talk) 21:35, 22 March 2019 (UTC)
Kadane I think something that can crawl all of Wikipedia's bio pages would be better. Not sure though. I'm not familiar with the best way to help regularize all the bio pages. InnovativeInventor (talk) 23:46, 22 March 2019 (UTC)

A heads up for AfD closers re: PROD eligibility when approaching NOQUORUM

When an AfD nomination ends its run with no discussion, WP:NOQUORUM indicates that the closing admin should treat the article as one would treat an expired PROD. One mundane part of this process is specifically checking whether the article is eligible for PROD ("the page is not a redirect, never previously proposed for deletion, never undeleted, and never subject to a deletion discussion"). It would be really nice, when an AfD listing is reaching full term (seven days) with no discussion, if a bot could check the subject's page history and leave a comment on, say, the beginning of the listing's seventh day as to whether the article is eligible for PROD (a simple yes/no). If impossible to check each aspect of PROD eligibility, it would at least be helpful to know whether the article has been proposed for deletion before, rather than having to scour the page history. A bot here could help the closing admin more easily determine whether to relist or soft delete. More discussion here. czar 21:12, 23 March 2019 (UTC)

@Czar: preliminary thoughts:
  • not currently a redirect - detectable via api ([1])
  • Never previously proposed for deletion: search in past edit summaries?
  • Never undeleted - log events for the page ([2])
  • Never subject to a prior deletion discussion: check if the title contains 2nd, 3rd, etc nomination.
Does that sound about right in terms of automatically verifying prod eligibility? --DannyS712 (talk) 21:37, 23 March 2019 (UTC)
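For items 1 and 3, the checks could key off action=query responses; a sketch of the response-parsing side (the JSON shapes follow the MediaWiki API, but treat this as an assumption to verify, not a finished bot):

```python
def is_redirect(info_response):
    """Check action=query&prop=info output: redirect pages carry a 'redirect' key."""
    pages = info_response["query"]["pages"]
    return any("redirect" in page for page in pages.values())

def was_deleted_before(logevents_response):
    """Check action=query&list=logevents&letype=delete&letitle=... output:
    a non-empty log means the page was deleted (and possibly undeleted) before."""
    return bool(logevents_response["query"]["logevents"])
```

The edit-summary search (item 2) and the prior-AfD check (item 4) have no equally clean API answer, which is presumably why they'd need the heuristics discussed below.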
@DannyS712, I would add to #4: check the talk page for history templates indicating prior deletion listings. E.g., it's possible that the previous AfD was under a different article title altogether. (Since those instances would get complicated, would also be helpful for the AfD comment to note if the article was previously live under another title so the closer can manually investigate.) re: #2, I would consider searching edit summaries for either added or removed PRODs or mentions of deletion (as PRODs not added via script may have bad edit summaries). Otherwise this sounds great to me! czar 21:54, 23 March 2019 (UTC)
@Czar: okay, this seems like something I could do, but it would be a while before a bot was up and running. As far as I can tell, the hardest part will be parsing the AfD itself - how to detect if other users have cast a !vote, rather than just commenting, sorting the AfD, etc. Furthermore, since I'm not very original and implement most of my bot tasks via either AWB (not very usable in this case) or javascript, the javascript bot tasks are generally just automatically running a user script on multiple pages. So first, I will be able to have a script that alerts the user if an AfD could be subject to PROD, and then post such notices automatically. The first part is just a normal user script, so it (I think) doesn't need a BRFA, and I'll let you know when I have a working alpha and am ready to start integrating the second part. This will be a while though, so if anyone else wants to tackle this bot request I won't be offended :). Thanks, --DannyS712 (talk) 22:07, 23 March 2019 (UTC)
You should look to see how the AFD counter script counts votes. That aside, the first iteration can always just add the information regardless. --Izno (talk) 00:44, 24 March 2019 (UTC)
This seems vaguely related to this discussion on VPT. --Izno (talk) 00:41, 24 March 2019 (UTC)
Yes Izno, you are correct. I will make a note there that a bot request is the manner being pursued. I think your idea of an edit filter might also be useful. That would ensure the presence of a specific string of text in the edit summary which the bot could search for IAW #2. I agree that simply adding a message to the effect that the subject being discussed either is or is not eligible for soft deletion without relisting would be good for the initial iteration and suggest that it might be best to maintain that as the functional standard indefinitely. I do want to thank the many editors who have stepped up to assist in this effort. I am proud of my affiliation with such a fine lot. Sincerely.--John Cline (talk) 01:52, 24 March 2019 (UTC)

Indian settlements: updating census data

Most articles on settlements in India (e.g. Bambolim) still use 2001 census data. They need to be updated to use the 2011 census data. SD0001 (talk) 18:10, 29 March 2019 (UTC)

Is 2011 Census data available on WikiData? Template:Austria metadata Wikidata provides an example template and User:GreenC bot/Job 12 was a recent BRFA to add the template to Austria settlement articles: Example. -- GreenC 19:16, 29 March 2019 (UTC)
I don't think they're there on wikidata. This site does provide the data in what could be considered machine-readable format, though. SD0001 (talk) 16:08, 30 March 2019 (UTC)
Another site is If these sites were scraped and converted to CSV, the data could be uploaded to Wikidata via Wikipedia:Uploading metadata to Wikidata. Although this is a big job given the size of India, and the next census is in 2021, when it would be done over again. The number of potential locations must be immense, I went to and entered "Hyderabad" and it brought up a list of villages one having a population of 40 people, although which village of "Haiderpur" it is who knows as there are many listed. -- GreenC 17:28, 30 March 2019 (UTC)
The link I've given above already has the data in Excel format. Ignore the columns part-A ebook and part-B ebook, what we need are the ones under "Town amenities" and "Village amenities". That's two Excel sheets for each of the 35 Indian states and union territories. Some of these files are phenomenally large as you said - Andhra Pradesh contains 27800 villages, for instance. SD0001 (talk) 20:54, 30 March 2019 (UTC)
Ah I see better. Checking Assam "Town Amenities" spreadsheet, for "Goalpara" (line #17), it has a population of 11,617 but our Goalpara says 48,911. If we assume this is for the Goalpara district it is 1,008,959, but in the spreadsheet it only adds up to about 20,000 (line #15-#18). Since most people there speak Goalpariya it seems unlikely there was a sudden population loss due to emigration. Are the spreadsheet numbers in some other counting system, or decimal offset? -- GreenC 22:34, 30 March 2019 (UTC)
GreenC, 11617 is the number of households. Population is 53430, which is reasonable. To get total population of Goalpara district, you need to add up populations in line #15-#25 plus line #2161-#2989 in 'Village amenities' sheet, which roughly gives a figure close to 1,008,959. SD0001 (talk) 23:22, 30 March 2019 (UTC)
Ah thanks again, SD0001! A program to extract and collate the data looks like the next step. I can't do it immediately as I am backlogged with programming projects. Extracting the data and uploading to Wikipedia per Wikipedia:Uploading metadata to Wikidata would be more than half the battle. Also ping User:Underlying lk who made the Wikidata instructions. -- GreenC 00:19, 31 March 2019 (UTC)
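The collation step could be as simple as summing a population column per district once the spreadsheets are exported to CSV (a sketch; the column names here are assumptions about the export, not the census files' actual headers):

```python
import csv
import io

def district_population(csv_text, district):
    """Sum the Population column over all rows belonging to one district,
    e.g. the town-amenities and village-amenities rows concatenated."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(int(row["Population"]) for row in reader
               if row["District"] == district)
```

The real work is upstream of this: normalizing the two sheets per state into one consistent table before any totals are trusted.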
It seems like we have 2011 population figures for over 70,000 Wikidata entities, though once we only consider entities with an article, it drops to less than 4,000.--eh bien mon prince (talk) 05:15, 31 March 2019 (UTC)
Interesting queries, thanks. Notice some Wikidata entries are referenced, some not. Probably the data was loaded by different processes with variable levels of reliability and completeness. I would not be comfortable loading into the encyclopedia until it has been checked against a known source and the source field updated. Found Administrative divisions of India helpful to understand the census divisions though the more I look the bigger and more complex it becomes. -- GreenC 14:14, 31 March 2019 (UTC)
@Magnus Manske: This might be Gameable. --Izno (talk) 15:57, 31 March 2019 (UTC)

WikiProject Civil Rights Movement

I'm trying to set-up a bot to perform assessment and tagging work for Wikipedia:WikiProject Civil Rights Movement. The bot would need to rely only on keywords present in pages. The bot would provide a list of prospective pages that appear to satisfy rules given it. An example of what the project is seeking is something similar to User:InceptionBot. WikiProject Civil Rights Movement uses that bot to generate report Wikipedia:WikiProject Civil Rights Movement/New articles. Whereas that bot generates a report of new pages, the desired bot would assess old pages. Mitchumch (talk) 16:27, 1 April 2019 (UTC)

At Wikipedia:Village pump (technical)#Assessment and tagging bot I didn't intend that you should try to set up your own bot. There are plenty of bots already authorised to carry out WikiProject tagging runs. Just describe the selection criteria, and we'll see who picks it up. --Redrose64 🌹 (talk) 19:46, 1 April 2019 (UTC)
The selection criteria are keywords on pages:
  • civil rights movement
  • civil rights activist
  • black panther party
  • black power
  • martin luther king
  • student nonviolent coordinating committee
  • congress of racial equality
  • national association for the advancement of colored people
  • naacp
  • urban league
  • southern christian leadership conference
Mitchumch (talk) 22:02, 1 April 2019 (UTC)
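As a sketch, the selection could be a single case-insensitive scan over each page (naive substring matching; a real run would also need to skip already-tagged pages and weed out false positives):

```python
import re

# Abridged from the keyword list above.
KEYWORDS = [
    "civil rights movement", "civil rights activist", "black panther party",
    "black power", "martin luther king", "naacp", "urban league",
    "southern christian leadership conference",
]

KEYWORD_RE = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def matches_project(page_text):
    """True if any project keyword appears in the page text."""
    return KEYWORD_RE.search(page_text) is not None
```

This matches what InceptionBot-style rules do for new pages; the difference here is only that the scan would run over existing articles.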
Redrose64 Since no one responded, is there another option? Mitchumch (talk) 20:00, 29 April 2019 (UTC)

Population for Spanish municipalities

Adequately sourced population figures for all Spanish municipalities can be deployed by using {{Spain metadata Wikidata}}, as was recently done for Austria. See this diff for an example of the change.--eh bien mon prince (talk) 11:35, 11 April 2019 (UTC)

  BRFA filed Well, since my bot for Austria is already written and completed, I might as well do this too. -- GreenC 13:47, 11 April 2019 (UTC)

Russia district maps

Replace image_map with {{Russia district OSM map}} for all the articles on this list, as in this diff. The maps are already displayed in the articles, but currently this is achieved through a long switch function on {{Infobox Russian district}}; transcluding the template directly would be more efficient.--eh bien mon prince (talk) 11:58, 11 April 2019 (UTC)

@Underlying lk: should be pretty similar to the German maps, right? --DannyS712 (talk) 22:31, 11 April 2019 (UTC)
Yes pretty much. In fact, the German template is based on this one.--eh bien mon prince (talk) 13:26, 12 April 2019 (UTC)
@Underlying lk: I can do this. I have a few BRFAs currently open, but once some finish I'll file one for this task --DannyS712 (talk) 04:20, 14 April 2019 (UTC)

Category:Pages using deprecated image syntax

Category:Pages using deprecated image syntax has over 89k pages listed, making it impractical to fix these manually. Could a bot be created to handle this? --Gonnym (talk) 06:18, 12 April 2019 (UTC)

@Gonnym: I might be able to help, but can you give some examples of the specific edits that would need to be made (ideally with diffs) and how to screen for those? Thanks, --DannyS712 (talk) 06:26, 12 April 2019 (UTC)
Pages in this category use a template that uses Module:InfoboxImage in a {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{alt|}}}}} style that pass to the |image= field an image syntax in the format |image=File:Example.jpg. However, as per usual when dealing with templates, the exact parameters used and their names will differ between the templates. So for example:
  • {{Infobox television}} has {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|upright={{{image_upright|1.13}}}<!-- 1.13 is the most common size used in TV articles. -->|alt={{{image_alt|{{{alt|}}}}}}}}
  • {{Infobox television season}} has {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|{{{imagesize|}}}}}}|sizedefault=frameless|upright={{{image_upright|1}}}|alt={{{image_alt|{{{alt|}}}}}}}}
  • {{Infobox television episode}} has {{#invoke:InfoboxImage|InfoboxImage|image={{{image|}}}|size={{{image_size|}}}|sizedefault=frameless|alt={{{alt|}}}}}

Also, an image isn't the only value that can be passed in |image=; it is sometimes combined with an image size and caption, which will need to be extracted and passed through the correct parameters. --Gonnym (talk) 06:37, 12 April 2019 (UTC)

@Gonnym: okay, now it looks way more complicated. Maybe 1 infobox at a time. Can you provide some diffs for a few different types of cases with an infobox of your choice? Thanks, --DannyS712 (talk) 06:41, 12 April 2019 (UTC)
  • The West Wing (season 3) ({{Infobox television season}}) has image=[[File:West Wing S3 DVD.jpg|250px]]. Instead it should be, |image=West Wing S3 DVD.jpg and |image_size=250px (it can also be without "px" as the module does that automatically).
  • Red Dwarf X has image=[[File:Red Dwarf X logo.jpg|alt=Logo for the tenth series of ''Red Dwarf''|250px]]. Instead it should be, |image=Red Dwarf X logo.jpg, |image_size=250px and |image_alt=Logo for the tenth series of Red Dwarf.
For a better systematic approach though, maybe it would be better finding out what the top faulty templates are, and create a mapping of what parameters the templates use and their names. If the bot can check the template name and know what parameters to use, this should speed things up.--Gonnym (talk) 07:00, 12 April 2019 (UTC)
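The per-value transformation itself is mechanical; a sketch of the parsing step (the output parameter names follow the examples above, but each infobox would need its own mapping):

```python
import re

def split_image_value(value):
    """Split a deprecated |image=[[File:Name.jpg|alt=...|250px]] value into the
    bare filename, size, and alt text that Module:InfoboxImage expects."""
    m = re.match(r"\[\[File:(?P<name>[^|\]]+)(?P<opts>(?:\|[^\]]*)?)\]\]$", value)
    if not m:
        return {"image": value}  # already bare, or an unrecognised form
    result = {"image": m.group("name").strip()}
    for opt in m.group("opts").lstrip("|").split("|"):
        if opt.startswith("alt="):
            result["image_alt"] = opt[4:]
        elif re.fullmatch(r"x?\d+(?:x\d+)?px", opt):  # 250px, x150px, 150x150px
            result["image_size"] = opt
    return result
```

Anything the parser doesn't recognise is best left for a human, per the caption and multi-parameter cases mentioned above.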
@Gonnym: And now I'm completely lost. I don't think I'm the right bot op to help with this, sorry. --DannyS712 (talk) 07:02, 12 April 2019 (UTC)
I think someone could start with {{Infobox election}}, which appears to have roughly 11,000 articles in the error category. Here's a sample edit. Basically, for this template, you need to remove the initial brackets and the "File:" part of the image parameter value, then move the pixel specification (which may come in a variety of forms, like "x150px" or "150x150px") to the next line to a new |image#_size= parameter. The number "#" needs to match the image# parameter, e.g. |image2= gets |image2_size=. Drop me a line if this is confusing; I feel like it's a lot to explain in a short paragraph.
This may be a good mini-project to discuss at length at Category talk:Pages using deprecated image syntax. – Jonesey95 (talk) 07:59, 12 April 2019 (UTC)
In many cases, the |image_size=250px (or equivalent) may simply be omitted, because most infoboxes are set up to use a default size where none has been set (example). In my opinion, falling back to the default is preferable since it gives a consistent look between articles. --Redrose64 🌹 (talk) 12:46, 12 April 2019 (UTC)
Mostly true, but unfortunately, that is not the case at {{Infobox election}}, as you can see in this before-and-after comparison. – Jonesey95 (talk) 13:15, 12 April 2019 (UTC)
It appears that Number 57 (talk · contribs) is against the proposal. --Redrose64 🌹 (talk) 13:44, 12 April 2019 (UTC)
I guess I was pinged because of this edit? I don't really understand what is being discussed here, but removing the image size parameters like this edit means that the images in the infobox are different sizes – is this because there is no default size for this infobox, or the default size is for a single dimension (and not all photos have the same aspect ratio)? Can the default size be set to 150x150 (which is the most commonly used size)? Cheers, Number 57 13:52, 12 April 2019 (UTC)
{{Infobox election}} has a default size of 50px for |flag_image=, 300px for |map_image#=, and no default for |image#=, which then defaults to frameless (which I'm not sure what it is). If there is a correct size that the template should use, then the template should probably be edited to handle it. --Gonnym (talk) 14:02, 12 April 2019 (UTC)
(edit conflict) @Number 57: If you use the |image1=[[File:Soleiman Eskandari.jpg|150x150px]] format it puts the page into Category:Pages using deprecated image syntax, because the parameter is intended for a bare filename and nothing else, as in |image1=Soleiman Eskandari.jpg. --Redrose64 🌹 (talk) 14:05, 12 April 2019 (UTC)
OK. I have no problem with using some other way to get matching image sizes, but if it is added as a default, it needs to be a two-dimensional, otherwise it ends up in a bit of a mess where images have different aspect ratios. Number 57 14:07, 12 April 2019 (UTC)
Redrose64: your edit, like my edit that I linked above (and self-reverted) resulted in image sizes that look bad. Either the template needs to be modified, or the image sizes need to be preserved in template parameter values within the article, but removing them changes the image rendering in a negative way in that article (and presumably others). – Jonesey95 (talk) 17:00, 12 April 2019 (UTC)

"Accidents and incidents"

All of our articles and categories on transport "accidents and incidents" use that phrasing, as opposed to "incidents and accidents" (which is a line from "You Can Call Me Al"). However, there are a lot of section heads that read "== Incidents and accidents ==". I would like a bot to search articles for the phrasing "== Incidents and accidents ==" and replace it with "== Accidents and incidents ==". Can that be done?--Mike Selinker (talk) 19:13, 20 April 2019 (UTC)

Why? What difference does it make? I fail to see how a Paul Simon song should influence our choice of phrasing. --Redrose64 🌹 (talk) 22:27, 20 April 2019 (UTC)
That's just a random thing that might be causing some users to think that's the right format. The reason is that every article title and every category title uses the phrasing "Accidents and incidents" (there are hundreds of these). Only some articles' section heads use a different format. It's just about being consistent, which you can either value or not at your discretion.--Mike Selinker (talk) 22:59, 20 April 2019 (UTC)
There are wikilinks like [[Aigle Azur#Incidents and accidents|Aigle Azur Flight 258]] (found in Flight 258) that would break and need fixing. I see somewhere less than 2000 cases overall in section headers (though that search might be improved, a regex version is timing out). Is this something you can do with AWB? It would be better for the person running AWB to be the one with an interest in the change, otherwise the bot operator has to get community consensus etc.. which is involved and time consuming and no guarantee there would be consensus. Could also try Wikipedia:AutoWikiBrowser/Tasks. -- GreenC 14:52, 30 April 2019 (UTC)
@Mike Selinker: since the thread is old. -- GreenC 14:54, 30 April 2019 (UTC)
I can certainly try. Thanks!--Mike Selinker (talk) 14:57, 30 April 2019 (UTC)

Request for one-time run to tag pages with bare references with Template:Cleanup bare URLs

This is what I see to be a rather uncontroversial request, which I have been handling manually for about a month or so now. To better identify pages that use bare URLs in references, in an effort to get the URLs fixed, I am requesting that a bot add the {{Cleanup bare URLs}} tag to all pages which meet the following conditions:

  1. Has at least one instance of a <ref> tag immediately followed by http: or https:, then any sequence of characters, then a </ref> closing tag, with no spaces between the <ref> and </ref> tags (underscores are okay).
    (In such aforementioned instances, the reference tags should not be enclosed inside a citation template.)
  2. There is currently not a transclusion of {{Cleanup bare URLs}} or any of its redirects on that page
  3. The page is in the "(article)" namespace

...From my recent experience tagging these pages, applying the aforementioned parameters will avoid most, if not all, false positives.

I am requesting this run only once so that it doesn't need constant checks, and this should adequately provide an assessment on how many pages need reference url correction. Steel1943 (talk) 17:54, 22 April 2019 (UTC)

@Derek R Bullamore, MarnetteD, and Meatsgains: Pinging editors who I know either do work on or have worked on correcting pages tagged with {{Cleanup bare URLs}} in the past to make them aware of this discussion, and to see if there are any concerns or issues I'm not seeing at the moment. Steel1943 (talk) 17:58, 22 April 2019 (UTC)
  • Is there a group or individual who cleans up after these tags? Wouldn’t a report suffice? –xenotalk 18:30, 22 April 2019 (UTC)
    • @Xeno: The individuals that I'm aware of are pinged in the aforementioned comment. And unfortunately, a report would not suffice since a report does not place the respective pages in appropriate cleanup categories that the aforementioned editors monitor. In addition, the report may become outdated, whereas in theory, the tags on these pages should not since they tend to get removed once the bare reference urls on the pages are resolved; once the tag gets removed, then, of course, the page gets removed from the appropriate cleanup category. Steel1943 (talk) 18:52, 22 April 2019 (UTC)
  • I don't think this is a good idea. Bare URLs are so common and constantly being added it would tag a significant percent of the entire project. There is also context, like an article with 400 refs and someone adds a single bare URL, a banner would be overkill. If a bot were to do this it should probably search out the egregious cases like an article with > 50% bare citations. Reports can work if you do it right, see this report I recently created. It has documentation, regenerated automatically every X days, linked from other pages, etc.. -- GreenC 19:05, 22 April 2019 (UTC)
Further thought, a report could categorize pages by percentage of bare links so you can better allocate your time on which pages to fix and how. -- GreenC 19:07, 22 April 2019 (UTC)
The original proposal is likely to unearth tens of thousands of articles so affected - given the very small number of editors who work on the {bare URLs} cases, this might generate more problems than that small gang could possibly manage. The latter amendment(s) seem more feasible, but nevertheless we could still "dig up more snakes than we can kill", to borrow an old Texas expression. (This despite the fact that I am from the North of England!) I think a "dummy run" may be better, to get a true perspective of the numbers. - Derek R Bullamore (talk) 19:26, 22 April 2019 (UTC)
  • I think that what Derek R Bullamore states may be a good starting point: Before (or in lieu of) a bot performs this task, is it possible for a bot or script to get a count of how many pages fall under the parameters I stated at the beginning of this thread? (I guess this goes somewhat in line with the "report" inquiry Xeno stated above.) Steel1943 (talk) 19:48, 22 April 2019 (UTC)
@MZMcBride: do you still make these kind of reports? –xenotalk 21:52, 22 April 2019 (UTC)
In a previous effort, at least one bot ran which expanded the linked URL to at least include a title. I would guess a BRFA for that effort would succeed. I would see that as greatly preferable to any tagging. --Izno (talk) 00:23, 23 April 2019 (UTC)
Ideally this would be done manually or semi-manually (with the assistance of tools), as expanding citations is basically impossible to do well fully automated. CitationBot is a start, as is refTool (hope those names are right). Those tools took years and they are still not reliable enough to be fully automatic. We could add a title and call it a day, but that's not ideal. -- GreenC 00:50, 23 April 2019 (UTC)
I echo DRb's post, though I would up the guesstimate to more than a million articles that would need work. Years ago I tried to put a dent into the "External links modified" task (example Talk:War and Peace (film series)#External links modified) and wound up being overwhelmed by the fact that more articles were being added than those that I had checked each time the bot did a new run. Now, there was a time when edit-a-thons were arranged around tasks like this, but I haven't seen one of those in years. GreenC's idea of limiting it to > 50% bare citations might be a workable solution. MarnetteD|Talk 01:18, 23 April 2019 (UTC)
Agreed - I think we begin with > 50% bare URLs to start and if we can manage to stay on top of those pages, then we can incrementally decrease the percentage. Meatsgains(talk) 02:48, 23 April 2019 (UTC)
  • As I was staring at this discussion thinking of a way to simplify this task, I came up with an idea for a way to update this proposal. How about something along these lines: rather than being a one-time bot task, the bot runs at certain intervals (such as once every couple of days), and stops tagging pages when the respective cleanup category reaches a set maximum number of tagged pages (such as 75–100)? This will keep the backlog manageable, but still keep bringing the pages with bare ref URL issues to light. Steel1943 (talk) 14:54, 23 April 2019 (UTC)

I believe GreenC could do a fast scan (a little bit offtopic, but could that awk solution work with .bz2 files?). For lvwiki scan, I use such regex (more or less the same conditions as OP asked for) which works pretty well: <ref>\s*\[?\s*(https?:\/\/[^][<>\s"]+)\s*\]?\s*<\/ref>. For actually fixing those URLs, we can use this tool. Can be used both manually and with bot (it has pretty nice API). --Edgars2007 (talk/contribs) 15:36, 23 April 2019 (UTC)
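For illustration, Edgars2007's regex drops straight into Python as-is; everything here besides the pattern itself (the function name, the sample text) is mine:

```python
import re

# Edgars2007's pattern above: a <ref> whose entire body is one external
# URL, optionally wrapped in a single pair of square brackets.
BARE_REF = re.compile(r'<ref>\s*\[?\s*(https?://[^][<>\s"]+)\s*\]?\s*</ref>')

def bare_ref_urls(wikitext):
    """Return every URL cited as a bare reference in the given wikitext."""
    return BARE_REF.findall(wikitext)

sample = ('Plain.<ref>http://example.com/a</ref> '
          'Bracketed.<ref>[https://example.com/b]</ref> '
          'Templated.<ref>{{cite web |url=http://example.com/c |title=C}}</ref>')
print(bare_ref_urls(sample))  # the templated ref is correctly skipped
```

Because the character class excludes `<`, `>` and whitespace, a match can never run past the closing `</ref>`, and refs wrapped in a citation template fail the `https?://` anchor and are left alone.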

I recently made a bot that looks for articles that need {{unreferenced}} and this is basically the same thing other than a change to the core regex statement, which User:Edgars2007 just helpfully provided. So this could be up and running quickly. It runs on Toolforge and uses the API to download each of 5.5M articles sequentially. The only question is which method: > 50%, or max size of the tracking category, or maybe both (anything over 50% is exempted from the category max size). The mixed method has the advantage of filling up the category with the worst cases foremost and lesser cases will only make it there once the worst cases are fixed. -- GreenC 17:51, 23 April 2019 (UTC)
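The mixed method could be expressed in a few lines; the function name, and the cap default of 1000, are illustrative only:

```python
def should_tag(bare_refs, total_refs, category_size, cap=1000):
    """GreenC's mixed rule sketched above: the worst cases (> 50% bare
    citations) always qualify; lesser cases qualify only while the
    tracking category still has room under the cap."""
    if total_refs and bare_refs / total_refs > 0.5:
        return True
    return bare_refs > 0 and category_size < cap
```

Under this rule, an article with one bare ref out of 400 is only tagged once the category has been worked down below the cap, while a mostly-bare article is tagged regardless of how full the category is.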

Fine as far as it goes. I am still concerned about the small band of editors that mend such bare links, being potentially swamped by the sheer number of cases unearthed. Let the opening of the can of worms begin ! - Derek R Bullamore (talk) 19:59, 24 April 2019 (UTC)
Derek R Bullamore, I thought about this some more and think it would be easiest, at least initially, to limit the category to some number of entries (1000) and run weekly while working its way through the 5.5M articles, i.e. if the first article doesn't have a bare link when it is checked, it won't be checked again until all the others have been checked. If the category is being cleared by editors rapidly, it can always be adjusted to run more frequently. -- GreenC 21:29, 24 April 2019 (UTC)
GreenC From my experience, 1000 is still a massive number of articles and would take much more than a week to clean up. The situation would quickly become like the example I gave above. Don't forget other editors will still be adding bare URL tags to articles, so the number will exceed 1000 per 7 days. As far as I know there are only two or three editors who check and work on these regularly. We appreciate having a few days where there are only one or two in the category so we can focus on other editing. I can see a situation where we burn out and abandon the work completely. I would suggest a smaller number, 200 at most. Another possibility is to run 1000 but then not do another run until those have been finished. I probably should have mentioned this earlier, but there is a wide range of problems to fix with the bare URLs - some are easy and take a few seconds, others are labor-intensive and can require days to finish. For example I am currently working on Ordinance (India). Neither refill nor reflinks could format these and I am having to do them one at a time. Now these are just my thoughts and others may feel differently. MarnetteD|Talk 21:52, 24 April 2019 (UTC)

MarnetteD, yes understand what you are saying. Was thinking, what about an 'on demand' system where you can specify when to add more, and how many to add - and it only works if the category is mostly empty, and maxes at 200 (or less). This is more technically challenging as it would require some kind of basic authentication to prevent abuse, but I have an idea how to do it. It would be done all on-Wiki similar to a bot stop page. This gives participants the freedom to fill the queue whenever they are ready, and it could keep a log page. Would that be useful? -- GreenC 19:19, 25 April 2019 (UTC)

That sounds good GreenC. It sure seems to address my concerns. If other editors are adding a batch of bare url tags the bot wouldn't be piling up more on top of those. Thanks for the suggestion. MarnetteD|Talk 20:17, 25 April 2019 (UTC)
Yeah, it looks a good idea - the best so far - particularly if it can be made to operate successfully. Bring it on. - Derek R Bullamore (talk) 22:08, 25 April 2019 (UTC)
Ok this will be new code and I'm finishing some other projects. Will be in touch. -- GreenC 22:52, 26 April 2019 (UTC)
  • @GreenC: Would you be able to send me a ping when this proposed idea has come to fruition? I am not really following this discussion (other than the fact my original proposal was shot down), so I'm just curious how this will look when complete and running. Steel1943 (talk) 21:45, 4 May 2019 (UTC)
@Steel1943: yes no problem. -- GreenC 23:15, 4 May 2019 (UTC)

European Challenge bot

Hi, I would like to request a bot to add Template:WPEUR10k to all articles that appear in the lists of created articles at Wikipedia:The 2500 Challenge (Nordic) and Wikipedia:The 10,000 Challenge. I think it would be very helpful for all the articles to receive the template tag. I suggest this as there are literally thousands of articles in need of the tag.--BabbaQ (talk) 13:37, 26 April 2019 (UTC)

If there aren't any objections I will do this task, however there might already be a bot operator that can run this task without requesting approval. Kadane (talk) 22:32, 26 April 2019 (UTC)
Thanks. It would be really appreciated.BabbaQ (talk) 23:04, 26 April 2019 (UTC)

Automating new redirect patrolling for uncontroversial redirects

I recently started patrolling newly created redirects and have realized that certain common types of redirects could be approved through an automated process where a bot would just have to parse the target article and carry out some trivial string manipulation to determine if the redirect is appropriate. A working list of such uncontroversial redirects:

  1. Redirects of the form Foo (disambiguation) --> Foo, where Foo is a dab page
  2. Redirects where the only difference is the placement of a year (e.g. Norwich County elections, 1996 --> 1996 Norwich County elections)
  3. Redirects from alternative capitalizations
  4. Redirects between different English spelling standards (e.g. Capitalisation --> Capitalization)
  5. Redirects where the only difference is the use of diacritics
  6. Redirects where the redirect title is included in bold text in the lead of the target, or in a section heading
  7. Redirects from titles with disambiguators to targets of the same name without a disambiguator where the disambiguator content is present in the lead of the target
  8. Redirects from alternative phrasings of names for biographies (e.g. Carol Winifred Giles ––> Carol W. Giles). This would also require the bot to search for possible clashes with other similarly named individuals

Potentially more controversial tasks could include automated RfD nomination for clearly unnecessary redirects, such as redirects with specific patterns of incorrect spacing. I also think it would be a good idea to include an attack filter, so that if a redirect contains profanity or other potentially attackish content the bot will not automatically patrol it even if it appears to meet the above criteria. I anticipate that if this bot were implemented, it would cut the necessary human work for the redirect backlog by more than half. I've never written a Wikipedia bot before, but I am a software engineer, so if people think this is a good idea I anticipate I could do a lot of the coding myself; obviously the idea needs to be workshopped first. There are also potential extensions that could be written, such as detecting common abbreviations or alternate titles (e.g. USSR space program --> Soviet space program, OTAN --> NATO) signed, Rosguill talk 22:17, 28 April 2019 (UTC)
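Rules 1, 3 and 5 in the list above need nothing but the two titles, so a first cut of such a checker could be very small (function names are mine, and rule 1 would additionally need an API lookup to confirm the target really is a dab page):

```python
import unicodedata

def fold_diacritics(s):
    """Strip combining marks, e.g. 'Beyoncé' -> 'Beyonce'."""
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if not unicodedata.combining(c))

def patrollable(redirect, target):
    """Rules 1, 3 and 5 from the list above; the other rules would
    also need the target article's wikitext."""
    if redirect == target + ' (disambiguation)':  # rule 1 (target must be a dab)
        return True
    if redirect.lower() == target.lower():        # rule 3: capitalization only
        return True
    if fold_diacritics(redirect).lower() == fold_diacritics(target).lower():
        return True                               # rule 5: diacritics only
    return False
```

Anything failing these cheap checks would simply fall through to a human patroller, so false negatives cost nothing; the attack filter mentioned above would run before any of this.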

@Rosguill: automated RfD would be a dangerous idea. Also, if they have incorrect spacing R3 may apply. --DannyS712 (talk) 22:18, 28 April 2019 (UTC)
DannyS712, in particular I was thinking of examples where the incorrect spacing is specifically in relation to a disambiguator, which seem to be routinely deleted at RfD. signed, Rosguill talk 22:20, 28 April 2019 (UTC)
@Rosguill: in that case, G6 may apply. Just wanted to point that out, I'm not very active at RfD so I'll defer to you --DannyS712 (talk) 22:21, 28 April 2019 (UTC)
DannyS712 at any rate I'm much more interested in implementing the uncontroversial approvals, which are a much larger portion of created redirects. signed, Rosguill talk 22:25, 28 April 2019 (UTC)

Circling back to this

If there's not going to be any further discussion here, is there anywhere else I should post, or anything else I should do, before implementing this bot? Help:Creating a bot has a flowchart including the steps for writing a specification and making a proposal for the bot, but it's not clear to me which forums I should be using for that (or if the above discussion was sufficient). An additional concern is that while I believe that from a technical perspective this shouldn't be a terribly difficult bot to implement, I would need an admin to give the bot NPP permissions in order to run it. signed, Rosguill talk 23:04, 5 May 2019 (UTC)

@Rosguill: if any admin is willing to give DannyS712 test NPP rights, I'll try to work on a user script to easily patrol such pages. Any patrolling I do before any bot approval will be triggered manually, and I agree to be accountable for it in the same manner as when I myself patrol new pages. --DannyS712 (talk) 23:27, 5 May 2019 (UTC)
NPP is thisaway. Primefac (talk) 01:43, 6 May 2019 (UTC)
I'd prefer that you conduct testing and development on testwiki. — JJMC89(T·C) 02:19, 6 May 2019 (UTC)
@JJMC89, Primefac, and Rosguill: I have a working demo set up - see User:DannyS712 test/redirects.json for a list of pages that the bot would automatically patrol. Out of the most recent 3000 unreviewed redirects, it would patrol 96 - those that end in (disambiguation) and point to the same page without (disambiguation), and those that point to other capitalizations/acronyms. I'll note that NO automatic patrols have been performed, just a list has been made. Thoughts? --DannyS712 (talk) 23:37, 12 May 2019 (UTC)
DannyS712, I'm a bit surprised that so few are caught by it, but I guess it's still more useful than nothing and a good starting point in case we want to try implementing some of the other suggested cases. It seems like it's also catching redirects that differ only by the inclusion of diacritics, which you didn't mention in your comment. signed, Rosguill talk 00:21, 13 May 2019 (UTC)
@Rosguill: yes, sorry - I meant accents (or diacritics), not acronyms. --DannyS712 (talk) 00:23, 13 May 2019 (UTC)
  BRFA filed --DannyS712 (talk) 21:29, 14 May 2019 (UTC)

WikiProject tagging

Could someone help with tagging categories and pages with the {{WikiProject Television}} banner? Ideally all categories and pages under Category:Television and Category:WikiProject Television except the ones listed below. Pages in the following categories should not be tagged (but the categories themselves should):

Note that some of the sub-categories are also placed in other category trees. There might be some false positives in the list, but those can be fixed manually later when found. Currently a lot of categories and pages are missing the tag, which means they don't show up in the project alert section. Also note that some might carry a redirect of the template, so if possible don't add two templates to the same page (ideally the redirect should be replaced by the standard version, but I know that cosmetic changes are an issue). --Gonnym (talk) 09:33, 4 May 2019 (UTC)

@Gonnym: Sure, I can do this. Just to be clear, add tags to category talk and talk namespaces for all pages that are under the 2 categories you requested, except for pages in the categories listed to avoid. --DannyS712 (talk) 18:39, 4 May 2019 (UTC)
Yes, add the WikiProject tag above to all talk pages of the category, article, template, module, Wikipedia and file namespaces of all pages and categories listed under the two top category trees (so all sub-categories, not just the ones directly in the top category), except for the pages listed in the categories I've listed above (but do tag those categories and sub-categories) as those categories probably have a lot of pages that shouldn't be tagged. And also don't tag if it is already tagged by a redirect template (so there won't be two of the same tags).--Gonnym (talk) 19:06, 4 May 2019 (UTC)
Please don't ask for "all sub-categories", this has caused much trouble in the past. We prefer an explicit list of categories. --Redrose64 🌹 (talk) 21:46, 4 May 2019 (UTC)
Complete list:

Category:Television (36 C, 3 P)
Category:WikiProject Television (9 C, 19 P)

Exclude the pages in the categories listed in the exclusion section and all pages in the categories in:

Category:Television people (9 C)
Category:Works about television (11 C, 1 P)

Hope the above list is sufficient. --Gonnym (talk) 13:56, 5 May 2019 (UTC)
All you did was create JavaScript boxes that will allow someone to look at the current subcategories with a lot of clicking. You didn't actually list or review them. At a glance, within the category tree you point to I see several categories for media properties that include TV shows, so you wind up with their subcategories for non-TV media (films, books, comics, video games, etc) and subcategories for characters not limited to those that appeared on TV. The same goes for major media companies, you would wind up with completely unrelated articles such as Version 7 Unix being tagged (Category:Bell Labs Unices is 5 levels down from Category:AT&T, which is in Category:Cable television companies of the United States). Although probably the most insane example is that Category:Video is just a few levels down from Category:Television by several paths, and from there you get Category:Film itself, Category:Video gaming itself, and so on. Anomie 17:02, 5 May 2019 (UTC)
There is no real way to review them all, and asking someone to review thousands of categories is just the same as saying no. As I stated above I'm sure there are false positives, but so what? Anyone spotting an incorrect tag can just revert it. As this isn't a reader-facing change and it isn't even done on the "main" page but on the secondary talk page, the amount of disruption an incorrect tag does is very minimal, while the gain of having thousands of un-tagged pages is great, as those pages now appear in the article alerts. That said, both your issues can be easily solvable by excluding Category:Television companies and its sub-categories and Category:Video. --Gonnym (talk) 19:14, 5 May 2019 (UTC)
Maybe you could just add that the category name needs to contain the word "television". Or could just look at all categories with television in the title. I've made a list of them at User:WOSlinker/TVCats. -- WOSlinker (talk) 22:06, 5 May 2019 (UTC)
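WOSlinker's name-based rule is easy to apply mechanically; a one-line filter (function name mine) over whatever category listing is used:

```python
def likely_tv_categories(titles):
    """WOSlinker's narrower rule above: keep only category titles that
    mention television; everything else goes to manual review rather
    than being tagged automatically."""
    return [t for t in titles if 'television' in t.lower()]

cats = ['Category:Television people',
        'Category:Bell Labs Unices',          # the AT&T example above
        'Category:American television series']
print(likely_tv_categories(cats))
```

This deliberately errs on the side of under-tagging: the deep non-TV subtrees described above (Bell Labs, Category:Video, etc.) are excluded automatically because their names never contain the word.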

Wikimedia Airplane Type Tagging Bot

I do a fair bit of category tagging on Wikimedia Commons and know that Google's image recognition tools are pretty good these days at recognizing specific types of airplanes in images. When I say they're pretty good, I mean they can not just recognize a 737 vs. a 777, which is a good start, but they recognize the difference between a 777-300ER and a 777-200, which is even better. Would it be possible to develop an airplane bot that only looks through commons:Category:Media needing categories and adds airplane name tags when it finds images that contain recognizable airplanes? I would also like the bot to place a tag requesting human verification on any tagged images. Hopefully this will help in some way to reduce the backlog of around 1,000,000 images needing categories. Monopoly31121993(2) (talk) 11:25, 6 May 2019 (UTC)
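Whichever recognition backend is used, the on-wiki side of the request reduces to mapping classifier labels to Commons categories above some confidence threshold and adding a verification tag. The mapping, threshold, and the verification category name below are all invented for illustration:

```python
# Hypothetical label -> Commons category table; a real run would need a
# vetted mapping covering each distinguishable type (777-300ER vs 777-200).
LABEL_TO_CATEGORY = {
    'boeing 777-300er': 'Category:Boeing 777-300ER',
    'boeing 777-200': 'Category:Boeing 777-200',
    'boeing 737': 'Category:Boeing 737',
}

def categories_for(labels, threshold=0.9):
    """labels: (name, score) pairs from the image classifier. Returns the
    category tags to add; whenever anything is added, a human-verification
    tag (placeholder name) is appended per the request above."""
    cats = [LABEL_TO_CATEGORY[name.lower()]
            for name, score in labels
            if score >= threshold and name.lower() in LABEL_TO_CATEGORY]
    if cats:
        cats.append('Category:Aircraft categorised by bot, to be checked')
    return cats
```

Low-confidence results produce no edit at all, which keeps the bot conservative and leaves the image in the uncategorized backlog for humans.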

@Monopoly31121993(2): There is no such category as Category:Media needing categories. --Redrose64 🌹 (talk) 19:15, 6 May 2019 (UTC)
Commons:Category:Media_needing_categories. -- GreenC 19:29, 6 May 2019 (UTC)
In that case, this should be brought up at c:Commons:Bots/Work requests rather than here. Anomie 21:06, 6 May 2019 (UTC)

Do you know how to gain access to a Google AI-driven API? I thought Google donated something like that to Wikipedia but can't remember the details. -- GreenC 19:31, 6 May 2019 (UTC)

I know that Google Cloud Vision has an API but something Google donated to Wikipedia sounds promising and probably more helpful. Does anyone have any knowledge about Google image recognition donations?Monopoly31121993(2) (talk) 20:01, 6 May 2019 (UTC)
Press release for the Cloud Vision donation. As for where and how to access it, I don't know. -- GreenC 21:41, 6 May 2019 (UTC)

Astana ---> Nur-Sultan

Please change all occurrences of "Astana" in all articles to new name "Nur-Sultan". Also please move all articles with "Astana" to "Nur-Sultan". Thanks! --Patriccck (talk) 18:16, 7 May 2019 (UTC)

@Patriccck: The first one would probably fail WP:CONTEXTBOT. Regarding the second one, bots don't normally move pages. --Redrose64 🌹 (talk) 19:33, 7 May 2019 (UTC)

Bot to make a mass nom of subcategories in a tree

Is it possible for a bot to nominate all the subcategories in the tree Category:Screenplays by writer for a rename based on Wikipedia:Categories for discussion/Log/2019 May 10#Category:Screenplays by writer? There are about a thousand of them! I guess each one needs to be tagged with {{subst:CFR||Category:Screenplays by writer}}, and then added to the list at the nom. Is this feasible? --woodensuperman 15:23, 10 May 2019 (UTC)
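Mechanically the task has two outputs: the tagged wikitext for each subcategory and the list of lines to paste into the nomination. A sketch of just that step (the example category title is hypothetical; the actual fetching and saving would be left to a framework such as pywikibot, and would of course need a BRFA):

```python
def tag_subcategories(subcats, get_text, parent='Category:Screenplays by writer'):
    """subcats: category titles to nominate; get_text: title -> current
    wikitext. Returns the new page texts and the lines for the CFD nom."""
    tag = '{{subst:cfr||%s}}\n' % parent
    new_texts = {title: tag + get_text(title) for title in subcats}
    nom_lines = ['* [[:%s]]' % title for title in subcats]
    return new_texts, nom_lines
```

Keeping the two outputs paired means the tagged categories and the nomination list can never drift out of sync mid-run.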

@BrownHairedGirl: ? --Izno (talk) 15:53, 10 May 2019 (UTC)
@Woodensuperman and Izno: technically, I could do this quite easily.
But I won't do it for a proposal to rename to "Category:Films by writer". Many films are based on books or plays, so "Category:Films by writer" is ambiguous: it could refer either to the writer of the original work, or to the writer of the screenplay.
I suggest that Woodensuperman should withdraw the CFD nomination, and open a discussion at WT:FILM about possible options ... and only once one or more options have been clarified consider opening a mass nomination. --BrownHairedGirl (talk) • (contribs) 16:03, 10 May 2019 (UTC)
@DannyS712:, that's one for you, I think? Headbomb {t · c · p · b} 23:46, 12 May 2019 (UTC)
@Headbomb: yes, but since BHG suggested that the nom be withdrawn and a discussion opened first, I was going to wait and see what woodensuperman says before chiming in --DannyS712 (talk) 23:47, 12 May 2019 (UTC)
@DannyS712: I don't intend to withdraw the nom. I think a sensible discussion can be had at CFD. --woodensuperman 11:30, 13 May 2019 (UTC)
@Woodensuperman: I just finished my bot trial, so I can't do this run, sorry --DannyS712 (talk) 21:13, 14 May 2019 (UTC)

Find mountain articles with dead external links

Would like a bot that could search all the articles listed under Category:WikiProject Mountains articles and its children categories that are also listed in Category:Articles with dead external links? Or maybe there's an existing tool that can do this? RedWolf (talk) 21:19, 16 May 2019 (UTC)

@RedWolf:   Doing... with petscan --DannyS712 (talk) 21:24, 16 May 2019 (UTC)
@RedWolf: here is a list of all of the articles listed in the categories. But, petscan isn't loading the intersection with the dead externals, so I don't have the actual result you wanted yet --DannyS712 (talk) 21:38, 16 May 2019 (UTC)
@RedWolf: There are 317 pages: [3] or [4] both generate the list you want, but they take a few minutes to run --DannyS712 (talk) 21:43, 16 May 2019 (UTC)
There we go, petscan; I had run it a couple of times in the distant past, just couldn't remember the name of it. I'm going to add a sub-page to the WikiProject to hold the results and write up a description of how to re-generate it. Thanks for your help. RedWolf (talk) 22:03, 16 May 2019 (UTC)
I've copied the query results onto Wikipedia:WikiProject Mountains/Articles with dead external links with the URL to re-generate it. RedWolf (talk) 03:09, 17 May 2019 (UTC)
WP:Petscan used to be called WP:Catscan (short for "category scanner"); it was renamed as a play on the word "cat" to "pet" when it started doing more than just basic category scanning and intersections. --Izno (talk) 02:35, 17 May 2019 (UTC)
Ah ok, catscan I remember. :) I was wondering about the "pet" prefix. Thanks for the info. RedWolf (talk) 03:06, 17 May 2019 (UTC)

-> nativeplanttrust.org

The New England Wild Flower Society [5] changed its name and web presence to the Native Plant Trust [6], and in the process broke most of its old URLs. Only insecure http requests to the old web site get an HTTP 301 redirect; https links time out. I suspect a firewall misconfiguration on their end. I emailed them about the problem but it hasn't been fixed.

I am requesting a bot find all the instances of (http or https) and rewrite to (https only, optionally only if that new URL returns a 2xx or 3xx status code).

I don't have a count of edits to make. Here is a sample page: Vaccinium caesariense. As I write this, reference 2 links to (a timeout error). It should link to

Vox Sciurorum (talk) 17:51, 17 May 2019 (UTC)
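Since the old hostname is elided above, the hosts in this sketch are stand-ins rather than the real sites; the logic is the requested rewrite (http or https on the old host becomes https on the new host) plus the optional liveness check:

```python
from urllib.parse import urlsplit, urlunsplit
import urllib.request

OLD_HOST = 'old.example.org'  # stand-in: the society's former domain
NEW_HOST = 'new.example.org'  # stand-in for the Native Plant Trust site

def rewrite(url):
    """Map http or https links on the old host to https on the new host;
    leave unrelated links untouched."""
    parts = urlsplit(url)
    if parts.hostname != OLD_HOST:
        return url
    return urlunsplit(('https', NEW_HOST, parts.path, parts.query, parts.fragment))

def new_url_alive(url, timeout=10):
    """Optional guard: only commit an edit if the new URL answers 2xx/3xx."""
    try:
        req = urllib.request.Request(url, method='HEAD')
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False
```

As GreenC notes below WP:URLREQ, a production run would also need to handle archive URLs that embed the old path and to add {{dead link}} where the new URL fails the check; this sketch covers only the plain-link case.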

It looks like this URL pattern appears in only 114 pages in all namespaces, so someone with AWB should be able to make quick work of it. – Jonesey95 (talk) 19:14, 17 May 2019 (UTC)
URLs are difficult. There are archive URLs where the old URL is part of the path and changing the path breaks the archive URL; or where the new link doesn't work and the old link is converted to an archive URL. Or where converting to a new link can replace an existing archive URL. Then making sure {{dead link}} exists if needed. It is quite complex. Everything should be checked and tested. There are bots designed for making URL changes see WP:URLREQ. -- GreenC 21:51, 17 May 2019 (UTC)
I didn't know about that page. I'll repost the request there. This request can be marked closed. Vox Sciurorum (talk) 22:40, 17 May 2019 (UTC)
There is also a template Template:Go Botany (edit | talk | history | links | watch | logs) that could be used specifically for such links, rendering as: "Vaccinium caesariense". Go Botany. New England Wildflower Society. Probably not worth the effort to make the bot rewrite the links, though. Vox Sciurorum (talk) 19:55, 17 May 2019 (UTC)
All of the links in main space have been updated, but they are inconsistently formatted and many of them still say "New England Wild Flower Society". I recommend doing a search for that string and replacing the various citation formats with the {{Go Botany}} template, where appropriate. I didn't do anything to the 24 pages outside of article space, since most of them are sandboxes and maintenance pages. – Jonesey95 (talk) 07:45, 18 May 2019 (UTC)