
Wikipedia:Bot requests

This is a page for requesting tasks to be done by bots per the bot policy. This is an appropriate place to put ideas for uncontroversial bot tasks, to get early feedback on ideas for bot tasks (controversial or not), and to seek bot operators for bot tasks. Consensus-building discussions requiring large community input (such as request for comments) should normally be held at WP:VPPROP or other relevant pages (such as a WikiProject's talk page).

You can check the "Commonly Requested Bots" box above to see if a suitable bot already exists for the task you have in mind. If you have a question about a particular bot, contact the bot operator directly via their talk page or the bot's talk page. If a bot is acting improperly, follow the guidance outlined in WP:BOTISSUE. For broader issues and general discussion about bots, see the bot noticeboard.

Before making a request, please see the list of frequently denied bots: requests that are either too complicated to program or that lack consensus from the Wikipedia community. If you are requesting that a template (such as a WikiProject banner) be added to all pages in a particular category, please be careful to check the category tree for any unwanted subcategories. It is best to give a complete list of categories that should be worked through individually, rather than one category to be analyzed recursively (see example difference).

Note to bot operators: The {{BOTREQ}} template can be used to give common responses, and make it easier to keep track of the task's current status. If you complete a request, note that you did so with {{BOTREQ|done}}, and archive the request after a few days (WP:1CA is useful here).


Please add your bot requests to the bottom of this page.


Creating redirects from values in a list of episodes article

Hello and thanks in advance for your time. Is it possible for a bot to take a list of list-articles, such as List of House episodes, go to the episode section of each article on that list, and then get all the values in the "title" column (basically the name of each episode)? If so, can the bot then check whether an article at that name is present or not, and finally create a redirect based on that article name? So for example:

  1. Bot gets a list of articles;
  2. It goes to the first article in the list - List of Arrow episodes;
  3. Goes to the episode section - List of Arrow episodes#Episodes;
  4. Goes over the episode list. At episode #2 gets the title "Honor Thy Father";
  5. Checks if Honor Thy Father is an article;
  6. If an article (or redirect) is present, create a redirect at "title (TV show)" (e.g. "Honor Thy Father (Arrow)"); if not, output the title to a list (e.g. "Honor Thy Father").


My goal is to be able to create episode redirects quickly and easily, so I'm trying to figure out how best to do it, as doing this manually is taking me a very long time (there are a few more steps, but I'd like to know if the general idea is even possible). --Gonnym (talk) 08:01, 24 October 2018 (UTC)

More likely would be for the bot to look for the {{Episode list}} templates in the wikitext, rather than trying to scrape the HTML. But first you'd need a consensus at WP:VPR or the like establishing that the community actually wants all these redirects. Anomie 11:06, 24 October 2018 (UTC)
Are you sure I need to get a consensus for something that seems to already have consensus? Category:Redirected episode articles lists over 13k redirects and redirected episodes have their own redirect template. --Gonnym (talk) 11:24, 24 October 2018 (UTC)
Mass-creating of stuff by bots tends to be more controversial than humans doing it. Anomie 11:42, 24 October 2018 (UTC)
@Gonnym: It is part of the bot policy Wikipedia:Bot_policy#Mass_article_creation, unless it's just a few pages. {{Episode list}} is in 11,923 articles. This could be many, many thousands of new redirects. Ronhjones  (Talk) 17:49, 24 October 2018 (UTC)
Redirect creation is typically much less controversial than full article creation, but that is a LOT of redirects and it would be nice to not have to update them every 2 weeks because someone thought 'wouldn't it be nice if...' or 'could we do this instead...'. Feedback from WT:AST would be useful, since they have created a crap ton of systematic redirects, and devised templates like {{NASTRO comment}}. Headbomb {t · c · p · b} 01:29, 8 December 2018 (UTC)
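For anyone picking this up later, here is a minimal sketch of the wikitext-based approach described above (Pywikibot plus mwparserfromhell). It is a sketch only: the |Title= parameter name of {{Episode list}} and the redirect target (left as the list article here) are assumptions that would need checking before a BRFA.

<syntaxhighlight lang="python">
import pywikibot
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')

def process_list_article(list_title, show_name):
    """Scan one 'List of X episodes' article; create or report episode redirects."""
    page = pywikibot.Page(site, list_title)
    code = mwparserfromhell.parse(page.text)
    missing = []
    for tpl in code.filter_templates():
        if not tpl.name.matches('Episode list') or not tpl.has('Title'):
            continue
        # Strip quotes, links and other markup from the title cell
        title = tpl.get('Title').value.strip_code().strip()
        if not title:
            continue
        target = pywikibot.Page(site, title)
        if target.exists():
            redirect = pywikibot.Page(site, f'{title} ({show_name})')
            if not redirect.exists():
                # Target left as the list article here; the real target/anchor
                # would need to be decided (assumption for this sketch).
                redirect.text = f'#REDIRECT [[{list_title}]]'
                redirect.save(summary='Creating episode redirect (bot request sketch)')
        else:
            missing.append(title)
    return missing

# Dry run, printing only the titles with no existing article:
# print(process_list_article('List of Arrow episodes', 'Arrow'))
</syntaxhighlight>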

Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU

Centralize the ~1400+ instances of references (Template:Cite journal ...) to the "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU" by replacing them with a single template (named e.g. Template:R:LunarNomenclature). The contents of the latter should be:
{{cite journal |last1=Menzel |first1=Donald H. |authorlink1=Donald Howard Menzel |last2=Minnaert |first2=Marcel |authorlink2=Marcel Minnaert |last3=Levin |first3=Boris J. |last4=Dollfus |first4=Audouin |authorlink4=Audouin Dollfus |last5=Bell |first5=Barbara |title=Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU |doi=10.1007/BF00171763 |journal=Space Science Reviews |volume=12 |issue=2 |pages=136–186 |date=1971 |bibcode=1971SSRv...12..136M |ref=harv }}
yielding:
Menzel, Donald H.; Minnaert, Marcel; Levin, Boris J.; Dollfus, Audouin; Bell, Barbara (1971). "Report on Lunar Nomenclature by the Working Group of Commission 17 of the IAU". Space Science Reviews. 12 (2): 136–186. Bibcode:1971SSRv...12..136M. doi:10.1007/BF00171763.
Urhixidur (talk) 14:43, 27 October 2018 (UTC)
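If this were to go ahead, the mechanical part would be fairly small; a rough sketch keyed on the DOI, using Pywikibot and mwparserfromhell, with the template name {{R:LunarNomenclature}} taken from the request (whether this should be done at all is the open question below, not the code):

<syntaxhighlight lang="python">
import pywikibot
import mwparserfromhell
from pywikibot import pagegenerators

SITE = pywikibot.Site('en', 'wikipedia')
DOI = '10.1007/BF00171763'

def replace_cites(page):
    code = mwparserfromhell.parse(page.text)
    changed = False
    for tpl in code.filter_templates():
        if tpl.name.matches('cite journal') and tpl.has('doi') \
                and tpl.get('doi').value.strip() == DOI:
            code.replace(tpl, '{{R:LunarNomenclature}}')
            changed = True
    if changed:
        page.text = str(code)
        page.save(summary='Centralise Lunar Nomenclature reference (bot request sketch)')

# Find candidate articles by searching for the DOI in article wikitext
for page in pagegenerators.SearchPageGenerator(f'insource:"{DOI}"',
                                               namespaces=[0], site=SITE):
    replace_cites(page)
</syntaxhighlight>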

What would be the purpose? We, the community and external parties, have tools and bots designed to work with CS1|2 ({{cite journal}}). So many things run on this system. Shortcut templates can create more problems than they solve. -- GreenC 06:00, 9 December 2018 (UTC)

Removing the venue parameter from Template:Infobox album when it doesn't apply

I originally raised this at Template talk:Infobox album#Including the venue parameter for studio albums when substituting last month. There have been several users (one of whom, most notably, has been Zackmann08) transcluding thousands of uses of Template:Infobox album on albums and, for studio albums, inserting the unnecessary parameter |venue=. This parameter is not needed for the vast majority of studio albums, as they were recorded in studios, not live venues. The template explicitly states in bold to use this parameter for live albums—so it has no use being included for other types of albums. I often remove this parameter upon discovering it has been added to articles (and I have noticed other users doing so as well), because it does not apply to them. So I'm requesting that a bot remove the venue parameter from uses of Template:Infobox album on articles where the infobox already has its |type= defined as "studio" (or "album", as this is often used by users who don't know to write "studio"). It's not a big deal if it is removed anyway—if it's needed for a type of album, it can be restored as necessary. But such cases are few and far between, unlike the vast majority where |venue= has been added by users just because they're automatically transcluding a template without much consideration for what those albums actually are. Thanks. Ss112 02:39, 6 November 2018 (UTC)

So your solution to "thousands of unneeded edits" is a bot that will perform thousands more edits that ABSOLUTELY are not needed?? The parameter is blank and therefore not being used so there isn't any problem with it... You are looking for a solution where there is no problem. Most infoboxes have parameters that are only to be used in certain situations. As long as those parameters are left blank, there isn't a problem. As I said when you first brought this up with me (and I note that you mentioned my username in this post but didn't link to me so I wouldn't be notified), this isn't a problem at all. If you are so worried about venues being added for other types of albums, then add a tracking category. If the type param is not live and a venue is provided, place the page in the tracking category. But having a bot remove an unused parameter is just a waste of everyone's time.
  Denied per WP:COSMETICBOT --Zackmann (Talk to me/What I been doing) 05:26, 6 November 2018 (UTC)
BAG note: @Zackmann08: You are not a BAG member, and have no authority to approve or deny bots. Do not claim otherwise. Headbomb {t · c · p · b} 13:20, 14 November 2018 (UTC)
@Headbomb: I didn't realize I needed to be a BAG member to approve or deny. Had this been a formal BRFA I wouldn't have commented that. I felt that given that it was a bot request and clearly violated WP:COSMETICBOT it was safe for me to comment that. I have learned something and appreciate your comment. Note that I have struck my comment above. --Zackmann (Talk to me/What I been doing) 17:13, 14 November 2018 (UTC)
@Zackmann08: Thanks. Approved/denied is very specific to the BRFA process, much like you wouldn't comment say "Accepted" when it comes to an ARBCOM case, when the only people who can do that are ARBCOM members (and only by a majority vote). Your general objection is noted though. Headbomb {t · c · p · b} 17:18, 14 November 2018 (UTC)
@Headbomb: Learn something new every day! Out of curiosity, how does one become a member of WP:BAG? Perhaps we can discuss on my talk page? --Zackmann (Talk to me/What I been doing) 17:23, 14 November 2018 (UTC)
The process is outlined in the bot policy at at Wikipedia:Bot policy#Bot Approvals Group. Headbomb {t · c · p · b} 17:26, 14 November 2018 (UTC)
@Zackmann08: Did you think I thought you wouldn't see this? I already knew you were a regular here when Jonesey95 suggested I put in a request here. I don't feel the need to tag users upon every mention of their username, so I didn't care whether you saw it or not. Otherwise it seems like you're implying I had bad intentions by "noting" I didn't notify you. So then I must say it seems a little telling that you would deny this because you think I've attempted to rag on you without notifying you. Maybe others have a different view. Why don't you let them comment and deny the request or offer their opinions, since I'm so obviously complaining about you just racking up your edit count without consideration for the unnecessary parameters you're adding all over the place? Not that it really needs to be said, but you are one user. Your view that it isn't a problem doesn't mean nobody else thinks it is a problem. The tracking category is an absolutely pointless venture, because evidently I want the pointless parameters to be removed, not to track instances of it for...what reason exactly? Maybe I can get somebody to knock up a script to do it, since I don't think this request page is the be-all and end-all and that all semi-automated tasks must go through here. Ss112 13:16, 6 November 2018 (UTC)
Also I don't know if you're attempting to direct quote me or paraphrase what you thought I was saying, but I never said they were "thousands of unneeded edits". I never said substituting the template to update its parameters was "unneeded". It is needed (although I thought we got bots to do this and get it done quicker, instead of users). But along with that has come thousands of insertions of |venue= in instances where it doesn't apply, and even where it has previously already been removed. Ss112 13:26, 6 November 2018 (UTC)

@Ss112: I'm not sure a bot is needed for this exactly. Or at least for what you requested exactly. The template could easily be updated to throw an error / put problem articles in a category if |venue= is set when |type=Studio/whatever. A bot that pre-emptively removes an empty |venue= likely wouldn't be approved without consensus to show this task was desired, although removal of an empty parameter under certain condition (e.g. substantive edits are made) likely would be. Headbomb {t · c · p · b} 13:27, 14 November 2018 (UTC)
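If a tracking list rather than mass edits is what is wanted, something along these lines could generate it; a sketch only, and note it reports only infoboxes where |venue= actually has a value on a studio album, which sidesteps the cosmetic-edit concern about blank parameters:

<syntaxhighlight lang="python">
import pywikibot
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')

def misused_venue(page):
    """True if an {{Infobox album}} has a non-empty |venue= on a studio album."""
    for tpl in mwparserfromhell.parse(page.text).filter_templates():
        if not tpl.name.matches('Infobox album'):
            continue
        album_type = tpl.get('type').value.strip().lower() if tpl.has('type') else ''
        venue = tpl.get('venue').value.strip() if tpl.has('venue') else ''
        if album_type in ('studio', 'album') and venue:
            return True
    return False

# This walks every transclusion of the infobox, so it is a long (read-only) scan.
template = pywikibot.Page(site, 'Template:Infobox album')
for page in template.getReferences(only_template_inclusion=True, namespaces=[0]):
    if misused_venue(page):
        print(page.title())
</syntaxhighlight>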

If the rendered page output is not affected then it is a cosmetic change. There are thousands of infoboxes with blank parameters, I see no reason for this task. Ronhjones  (Talk) 21:18, 26 November 2018 (UTC)

spectator.co.uk

There are about 1000 mainspace links to Spectator, most of which are broken. They changed URL schemes without redirects. The pages still exist at new URLs. Example:

There's no obvious way to program this, but posting if anyone has ideas. -- GreenC 06:27, 7 November 2018 (UTC)

I actually do not have much knowledge about Wikipedia bots. When I checked two or three links, the things that need to be done, from a reader's point of view, are:

1) Identify the link that is broken.
2) Remove the "-.thtml" suffix from the last portion of the link.
3) Add the month number and year number before the last section of the URL, separated by commas. The year and month are those in which the article appeared. If the month is only one digit, add a zero before the month number. Adithyak1997 (talk) 10:40, 7 November 2018 (UTC)

The idea is to automate the conversion, since it's 1000+ links. A bot wouldn't know which month. In the second example it is "letters-201" vs "letters", thus "-201" is also an unknown. If there was a way to find the redirected URL, such as through archive.org or some other way. Or volunteers to manually fix them. -- GreenC 20:14, 7 November 2018 (UTC)
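One brute-force way around the unknown date segment would be to probe candidate URLs directly and keep whichever one answers. A rough sketch with the requests library; the new URL pattern used here is only a guess based on the steps above and would need verifying against real examples before any run:

<syntaxhighlight lang="python">
import requests

def find_new_url(old_url, years=range(2005, 2019)):
    """Try plausible new-scheme URLs for an old spectator.co.uk link."""
    slug = old_url.rstrip('/').rsplit('/', 1)[-1]
    slug = slug.replace('.thtml', '')           # drop the old suffix (step 2 above)
    for year in years:                          # the date segment is unknown (per GreenC),
        for month in range(1, 13):              # so try every year/month combination
            candidate = f'https://www.spectator.co.uk/{year}/{month:02d}/{slug}/'
            try:
                r = requests.head(candidate, allow_redirects=True, timeout=10)
            except requests.RequestException:
                continue
            if r.status_code == 200:
                return candidate
    return None
</syntaxhighlight>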
One could also just write an e-mail to spectator.co.uk with the old urls and kindly ask them to give a mapping to the new urls. Then a bot could replace those links. -- seth (talk) 11:13, 10 November 2018 (UTC)
@Lustiger seth:. Do you want to give it a try? Narrowed it down to 552 dead links (User:GreenC/data/spectator). I've tried asking these things before and never had success so maybe someone else would have better luck. If they provide a mapping, I'll make the changes. -- GreenC 17:42, 10 November 2018 (UTC)
E-mail with links to special:linksearch/http://www.spectator.co.uk, User:GreenC/data/spectator, and to this discussion sent. If I get an answer, where shall I place the list? -- seth (talk) 10:04, 11 November 2018 (UTC)
Thanks! In data/spectator -- GreenC 16:30, 11 November 2018 (UTC)
Hi!
2018-11-11 10:02: mail sent to spectator digitalhelp@... (probably this was the wrong address, because they only look after subscriptions).
2018-11-11 10:12: first (automatic) answer: "You will receive a reply from one of our customer service team members within 48hrs."
2018-11-13 01:48: second answer: "I am awaiting further information regarding your enquiry and I will contact you as soon as this information has been received."
2018-11-14 01:42: third answer: "We would request you to email editor@... for further information." (deleted e-mail address)
2018-11-14 19:54: second try (mailed to editor@...)
2018-11-14 19:54: fourth answer: "I'm afraid that due to the number of them received at this address it’s not possible to send a personal response to each one. To help your email find its way to the right home and to answer some questions:
  • If you are writing a letter for publication, please send it to letters@....
  • Please send article pitches and submissions to pitches@....
  • If you are having problems with your subscription, please email customerhelp@... [...]. For problems with the website, our digital paywall, our apps or the Kindle edition of the magazine, our FAQ page is here – and if that doesn’t answer your question please email digital@....
  • If the matter is urgent, please call our switchboard on 020 [...]."
2018-11-14 20:06: third try (mailed to digital@...)
iow: this may take some time. -- seth (talk) 20:10, 14 November 2018 (UTC)
Well, I don't think I'll get an answer. :-( -- seth (talk) 23:37, 25 December 2018 (UTC)

College football schedule conversions

I'd like to have a bot update the templates used to render college football schedule tables. Three old templates—Template:CFB Schedule Start, Template:CFB Schedule Entry, and Template:CFB Schedule End—which were developed in 2006, are to be replaced with two newer, module-based templates—Template:CFB schedule and Template:CFB schedule entry. The old templates remain on nearly 12,000 articles. The new templates were coded by User:Frietjes, who has also developed a process for converting the old templates to the new:

add {{subst:#invoke:CFB schedule/convert|subst| at the top of the table, before the {{CFB Schedule Start}}, and }} at the bottom, after the {{CFB Schedule End}}.

The development and use of these new templates has been much discussed in the last year at Wikipedia talk:WikiProject College football and has a consensus of support.

Thanks, Jweiss11 (talk) 00:32, 8 November 2018 (UTC)
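Mechanically, the conversion recipe above looks automatable along these lines; a sketch only, assuming the substitution itself happens server-side when the edit is saved, and the regex would need hardening against tables that don't match this exact shape:

<syntaxhighlight lang="python">
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
WRAP_OPEN = '{{subst:#invoke:CFB schedule/convert|subst|'

def wrap_schedule_tables(page):
    # Wrap everything from {{CFB Schedule Start ...}} through {{CFB Schedule End}}
    pattern = re.compile(
        r'(\{\{\s*CFB Schedule Start.*?\{\{\s*CFB Schedule End\s*\}\})',
        re.DOTALL | re.IGNORECASE)
    new_text, count = pattern.subn(WRAP_OPEN + r'\1' + '}}', page.text)
    if count:
        page.text = new_text
        page.save(summary='Convert CFB schedule tables to Module:Sports table (sketch)')

old_start = pywikibot.Page(site, 'Template:CFB Schedule Start')
for page in old_start.getReferences(only_template_inclusion=True, namespaces=[0]):
    wrap_schedule_tables(page)
</syntaxhighlight>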

We also need to add the optional "Source" column that was approved as part of the new template. Cbl62 (talk) 03:13, 19 November 2018 (UTC)
@Cbl62: This is irrelevant to the conversion process at stake here. Template:CFB schedule entry services the source column, although the template documentation does not reflect that. Jweiss11 (talk) 04:52, 19 November 2018 (UTC)
While we're doing the conversion, it makes sense to get everything working properly. Others have noted that there is a glitch in using the "Source" column in the named parameters version of the template. Whether the glitch is in the documentation or in core functionality, it should be remedied so that the "Source" column can be added. Cbl62 (talk) 10:38, 19 November 2018 (UTC)
@Cbl62: What is the glitch with the "Source" column in the named parameters version of the template? Can you describe it or show an example? Jweiss11 (talk) 14:38, 19 November 2018 (UTC)
The "glitch" is that people have expressed a concern that they have difficulty adding a "Source" column to the new named parameters chart. See discussion here: Wikipedia talk:WikiProject College football#2018 Nebraska score links. I have yet to see a version of the new named parameters chart that includes a source column. Can you show an example where it has been done? And is there a reason it is not included in the template documentation? (By way of contrast, in the unnamed parameters version, the Source column is included in the template documentation as an optional add-on, see, e.g., 1921 New Mexico Lobos football team.) Cbl62 (talk) 15:00, 19 November 2018 (UTC) See also 2018 Michigan Wolverines football team where sources are presented in each line of the template but no "Source" column has been generated. Cbl62 (talk) 15:06, 19 November 2018 (UTC)
This is not a glitch. It is simply user habit. The person to ask about the template documentation is User:Frietjes, as she is the editor who wrote it. The inline citations at 2018 Michigan Wolverines football team could be easily moved to the source column if one so wanted. Jweiss11 (talk) 16:00, 19 November 2018 (UTC)
the source parameter is demonstrated in example 3. feel free to add this to the blank example at the top of the documentation, along with other missing parameters, like overtime, etc. Frietjes (talk) 16:11, 19 November 2018 (UTC)
Excellent. Thanks, Frietjes! Cbl62 (talk) 22:22, 19 November 2018 (UTC)

───────────────────────── @BU Rob13: would you be available to take on this bot request? Thanks, Jweiss11 (talk) 03:16, 4 December 2018 (UTC)

@Jweiss11: Sorry, but not really. I'm about to take an extended break from Wikipedia, most likely. ~ Rob13Talk 04:00, 4 December 2018 (UTC)
I'm only skimming this but it might be a good candidate for PrimeBOT's Task 30. Primefac (talk) 15:21, 4 December 2018 (UTC)
@Primefac: Could you actually leave this for now? I've been trying to get a technically-minded friend interested in Wikipedia for a bit, and this may interest her. I'm reaching out to see if she'd be interested in jumping in and creating a bot. ~ Rob13Talk 23:44, 7 December 2018 (UTC)
Sure thing. Primefac (talk) 16:23, 9 December 2018 (UTC)
@BU Rob13: any word from your friend about whether she is interested in taking this on? Thanks and happy holidays, Jweiss11 (talk) 21:15, 25 December 2018 (UTC)
Sadly, a non-starter. She took a look around and ultimately decided she wasn't interested in the culture after seeing a talk page discussion gone bad. Which is fair, to be honest. Primefac, all yours. Thanks for holding off. ~ Rob13Talk 02:20, 26 December 2018 (UTC)
@Primefac: are you still available to take this on? Jweiss11 (talk) 04:32, 8 January 2019 (UTC)

Unreferenced articles

Could a bot please identify articles that are not currently tagged as unreferenced but seem not to have references? Thanks for looking at this, Boleyn (talk) 19:12, 10 November 2018 (UTC)

Why do I get the feeling that this might be WP:CONTEXTBOT? --Redrose64 🌹 (talk) 23:42, 11 November 2018 (UTC)
Hi, Redrose64, I'm not sure I was clear enough, by identify the articles I meant generate a list of articles, similar to Wikipedia:Mistagged unreferenced articles cleanup. Thanks, Boleyn (talk) 18:18, 12 November 2018 (UTC)
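For anyone curious, a crude version of such a scan over a database dump might look like this (using the mwxml dump reader; the heuristics for "has references" and "already tagged" are deliberately simplistic and would need tuning before producing a usable list):

<syntaxhighlight lang="python">
import bz2
import re
import mwxml

HAS_REFS = re.compile(r'<ref[\s>/]|\{\{\s*(sfn|harvnb|refbegin|citation\b|cite\s)', re.IGNORECASE)
ALREADY_TAGGED = re.compile(r'\{\{\s*(unreferenced|unref|no\s+references)', re.IGNORECASE)

def untagged_unreferenced(dump_path):
    dump = mwxml.Dump.from_file(bz2.open(dump_path))
    for page in dump:
        if page.namespace != 0 or page.redirect:
            continue
        revision = next(iter(page))        # current revision in a pages-articles dump
        text = revision.text or ''
        if not HAS_REFS.search(text) and not ALREADY_TAGGED.search(text):
            yield page.title

for title in untagged_unreferenced('enwiki-latest-pages-articles.xml.bz2'):
    print(title)
</syntaxhighlight>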
Boleyn, I like this idea. Will take it up. If/when something is ready I'll post at Wikipedia talk:WikiProject Unreferenced articles or if any questions arise. -- GreenC 05:06, 2 December 2018 (UTC)
Bot now in beta. Initial test results. Followup at Wikipedia talk:WikiProject Unreferenced articles. -- GreenC 01:22, 17 December 2018 (UTC)

  BRFA filed -- GreenC 04:07, 31 December 2018 (UTC)

Tag with Template:R from unnecessary disambiguation

The task is rather simple. Find all pages with Foobar (barfoo). If they redirect to Foobar, tag those with {{R from unnecessary disambiguation}}. This should be case-sensitive (e.g. Foobar (barfoo) → FOOBAR should be left alone).

Could probably be done with AWB to add/streamline other redirect tags if they exist. Headbomb {t · c · p · b} 13:15, 14 November 2018 (UTC)
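A minimal sketch of the dump-free variant, walking mainspace redirects via the API with Pywikibot; the parenthetical check is a plain regex and the title comparison is case-sensitive as requested:

<syntaxhighlight lang="python">
import re
import pywikibot

site = pywikibot.Site('en', 'wikipedia')
PAREN = re.compile(r'^(.+?) \([^()]+\)$')
TAG = '{{R from unnecessary disambiguation}}'

for redirect in site.allpages(namespace=0, filterredir=True):
    match = PAREN.match(redirect.title())
    if not match:
        continue
    try:
        target = redirect.getRedirectTarget()
    except pywikibot.exceptions.Error:
        continue
    # Case-sensitive: "Foobar (barfoo)" -> "Foobar" only, not "FOOBAR"
    if target.title() == match.group(1) and TAG.lower() not in redirect.text.lower():
        redirect.text = redirect.text.rstrip() + '\n' + TAG
        redirect.save(summary='Tag redirect from unnecessary disambiguation (sketch)')
</syntaxhighlight>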

How would you find these pages? Via a database dump and regular expressions I assume? --TheSandDoctor Talk 07:24, 1 December 2018 (UTC)
@TheSandDoctor: via a dump scan yes. Or some kind of 'intitle' search. Headbomb {t · c · p · b} 05:08, 6 December 2018 (UTC)
@TheSandDoctor: any updates on this? Headbomb {t · c · p · b} 20:20, 19 December 2018 (UTC)
@Headbomb: No, sorry. I had forgotten about this. You are only anticipating pages like your Footer example above, right? What I mean is: Joe (some text) redirecting to Joe would be tagged with {{R from unnecessary disambiguation}}? Or am I getting this completely wrong/missing something? --TheSandDoctor Talk 20:33, 19 December 2018 (UTC)
Not sure what you mean by my Footer example, but basically if you have Foobar (whatever) → Foobar, then tag Foobar (whatever) with {{R from unnecessary disambiguation}}. Nothing else. Headbomb {t · c · p · b} 21:26, 19 December 2018 (UTC)
@Headbomb: That would've been autocorrect being sneaky. Foobar is what I meant (did it again writing this) and that does clarify it for me. I will work on this tonight or tomorrow. --TheSandDoctor Talk 00:13, 20 December 2018 (UTC)

@TheSandDoctor: any updates on this? Headbomb {t · c · p · b} 08:24, 13 January 2019 (UTC)

Bot to improve names of media sources in references

Many references on Wikipedia point to large media organizations such as the New York Times. However, the names are often abbreviated, not italicized, and/or missing wikilinks to the media organization. I'd like to propose a bot that could go to an article like this one and automatically replace "NY Times" with "New York Times". Other large media organizations (e.g. BBC, Washington Post, and so on) could fairly easily be added, I imagine. - Sdkb (talk) 04:43, 19 November 2018 (UTC)

  • I would be wary of WP:CONTEXTBOT. For instance, NYT can refer to a supplement of the Helsingin Sanomat#Format (in addition to the New York Times), and may even be the main use on Finland-related pages. TigraanClick here to contact me 13:40, 20 November 2018 (UTC)
    • @Tigraan: That's a good point. I think it'd be fairly easy to work around that sort of issue, though — before having any bot make any change to a reference, have it check that the URL goes to the expected website. So in the case of the New York Times, if a reference with "NYT" didn't also contain the URL nytimes.com, it wouldn't make the replacement. There might still be some limitations, but given that the bot is already operating only within the limited domain of a specific field of the citation template, I think there's a fairly low risk that it'd make errors. - Sdkb (talk) 10:52, 25 November 2018 (UTC)
  • I should add that part of the reason I think this is important is that, in addition to just standardizing content, it'd allow people to more easily check whether a source used in a reference is likely to be reliable. - Sdkb (talk) 22:01, 25 November 2018 (UTC)
@Sdkb: This is significantly harder than it seems, as most bots are. Wikipedia is one giant exception - the long tail of unexpected gotchas is very long, particularly on formatting issues. Another problem is agencies (AP, UPI, Reuters). Oftentimes the NYT is running an agency story. The cite should use NYT in the |work= and the agency in the |agency=, but often the agency ends up in the |work= field, so the bot couldn't blindly make changes without some considerable room for error. I have a sense of what needs to be done: extract every cite on Enwiki with a |url= containing nytimes.com, extract every |work= from those and create a unique list, manually remove from the list anything that shouldn't belong like Reuters etc., then the bot keys off that list before making live changes, so it knows what is safe to change (anything in the list). It's just a hell of a job in terms of time and resources considering all the sites to be processed and manual checks involved. See also Wikipedia:Bots/Dictionary#Cosmetic_edit "the term cosmetic edit is often used to encompass all edits of such little value that the community deems them to not be worth making in bulk" .. this is probably a borderline case, though I have no opinion which side of the border it falls on; other people might during the BRFA. -- GreenC 16:53, 26 November 2018 (UTC)
@GreenC: Thanks for the thought you're putting into considering this idea; I appreciate it. One way the bot could work to avoid that issue is to not key off of URLs, but rather off of the abbreviations. As in, it'd be triggered by the "NYT" in either the work or agency field, and then use the URL just as a confirmation to double check. That way, errors users have made in the citation fields would remain, but at least the format would be improved and no new errors would be introduced. - Sdkb (talk) 08:17, 27 November 2018 (UTC)
Right, that's basically what I was saying also. But to get all the possible abbreviations requires scanning the system, because the variety of abbreviations is unknowable ahead of time. Unless we pick a few that might be common, but that would miss a lot. -- GreenC 14:54, 27 November 2018 (UTC)
Well, for NYT at the least, citations with a |url=https://www.nytimes.com/... could be safely assumed to be referring to the New York Times. Headbomb {t · c · p · b} 01:20, 8 December 2018 (UTC)
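The narrow version being discussed here, fixing only a handful of known abbreviations and only when the |url= confirms the publisher, might look something like this; a sketch, and the abbreviation table is illustrative rather than exhaustive:

<syntaxhighlight lang="python">
import pywikibot
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')

# Illustrative only: abbreviation -> (canonical |work= value, confirming URL fragment)
KNOWN = {
    'NYT': ('[[The New York Times]]', 'nytimes.com'),
    'NY Times': ('[[The New York Times]]', 'nytimes.com'),
    'WaPo': ('[[The Washington Post]]', 'washingtonpost.com'),
}

def normalise_works(page):
    code = mwparserfromhell.parse(page.text)
    changed = False
    for tpl in code.filter_templates():
        if not (tpl.name.matches('cite news') or tpl.name.matches('cite web')):
            continue
        if not tpl.has('work') or not tpl.has('url'):
            continue
        work = tpl.get('work').value.strip()
        url = tpl.get('url').value.strip()
        if work in KNOWN and KNOWN[work][1] in url:   # the URL acts as the confirmation step
            tpl.add('work', KNOWN[work][0])
            changed = True
    if changed:
        page.text = str(code)
        page.save(summary='Normalise work names in citations (sketch)')
</syntaxhighlight>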
Yeah, I'm not too worried about comprehensiveness for now; I'd mainly just like to see the bot get off the ground and able to handle the two or three most common abbreviation for maybe half a dozen really big newspapers. From there, I imagine, a framework will be in place that'd then allow the bot to expand to other papers or abbreviations over time. - Sdkb (talk) 07:01, 12 December 2018 (UTC)
Conversation here seems to have died down. Is there anything I can do to move the proposal forward? - Sdkb (talk) 21:42, 14 January 2019 (UTC)
I am not against this idea totally but the bot would have to be a very good one for this to be a net positive and not end up creating more work. Emir of Wikipedia (talk) 22:18, 14 January 2019 (UTC)

Remind me bot

Hi, it would be wonderful if we had a bot that looked for uses of a template called {{remindme}} or something similar (with a time parameter, such as 12 hours, 1 year, etc. etc.) and duly dropped a message on your own talk page at the designated time with a link to the page on which you put the remindme tag. It would only send such reminders to the person who posted the edit containing the template in the first place. Kind of like the functionality of such bots on reddit, I guess. Fish+Karate 13:11, 20 November 2018 (UTC)

  • That looks like it should go through BRFA smoothly. There seems to be some use case. It looks simple enough, so I would volunteer to code it, but the only way I can imagine to make it work is by monitoring Special:RecentChanges (or the API equivalent) for additions of the template, and that looks extremely inefficient; beards grayer than mine might have a better idea. TigraanClick here to contact me 13:33, 20 November 2018 (UTC)
Monitor the backlinks (whatlinkshere) for the template, maintain a database of diffs to that backlinks list each time the bot runs via cron. New additions will show up. I wrote a ready-made tool Backlinks Watchlist. -- GreenC 14:20, 20 November 2018 (UTC)
Would it even need to maintain a database? Just go hourly (or some period) through a populated category and if it is time, notify and then change the template to {{remind me|notified = yes}} to disable the category (and also change the text of the template to something like "This user was reminded of this discussion on Fooember 24, 2078."). Galobtter (pingó mió) 14:35, 20 November 2018 (UTC)
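To make the category-scan idea concrete, the hourly pass could look roughly like this; a sketch only, since the template name, its |time= and |user= parameters, and the tracking category are all hypothetical until the template actually exists:

<syntaxhighlight lang="python">
from datetime import datetime, timezone

import pywikibot
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(site, 'Category:Pending remind-me requests')   # hypothetical category

for page in cat.members():
    code = mwparserfromhell.parse(page.text)
    changed = False
    for tpl in code.filter_templates():
        if not tpl.name.matches('remind me') or tpl.has('notified'):
            continue
        due = datetime.fromisoformat(tpl.get('time').value.strip())     # e.g. 2019-01-20T12:00
        user = tpl.get('user').value.strip()                            # hypothetical parameter
        if due.replace(tzinfo=timezone.utc) > datetime.now(timezone.utc):
            continue
        talk = pywikibot.Page(site, f'User talk:{user}')
        talk.text += (f"\n\n== Reminder ==\nHere is the reminder you asked for at "
                      f"[[{page.title()}]]. ~~~~\n")
        talk.save(summary='Remind-me bot delivery (sketch)')
        tpl.add('notified', 'yes')    # drops the page out of the tracking category
        changed = True
    if changed:
        page.text = str(code)
        page.save(summary='Mark reminder as delivered (sketch)')
</syntaxhighlight>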
Backlinks or category are pretty much the same from user and bot PoV, I think (maybe it is a different story on the servers though). Maybe a small advantage to cat, because it can be more easily reviewed by humans.
In either case the point of maintaining the database would be to limit the scans. If the template gets some traction, and a million users each place a thousand reminders asking for a reminder in 3018, scanning every still-active template every time could get inefficient (as the category is populated with lots of reminders that you need to scan every time). For a first version though, we do not care; if bad stuff happens, it will be easy enough to put a limit on templates left by users (either limit active templates per user, or how far in the future you can set reminders).
If none else comes around it, I will try to draft the specs this weekend. Fish and karate, please whip me if you see nothing next Monday. I would ask the bot to do it, but it does not exist yet. The trickiest part will probably be who can ask a reminder for whom (I would probably say that only User:X can ask for a notification to User:X, to avoid abuse of the tool, which then needs a bit of checking of who put the template on the page). TigraanClick here to contact me 17:36, 21 November 2018 (UTC)
You might be interested in m:Community Wishlist Survey 2019/Notifications/Article reminders. Anomie 03:21, 21 November 2018 (UTC)
That is interesting, I think the bot I'm envisioning is more general than that, you could place the template anywhere and it'll ping you to go back there after a set time has elapsed (potentially could also put a specific datetime). Tigraan I definitely think only user X could ask for a reminder for user X, otherwise it would be open to abuse. A throttle of no more than Y reminders per day (or Z open reminders overall) may also be a good idea. Fish+Karate 09:20, 22 November 2018 (UTC)

Basic spec, policy questions to be answered

OK, so the basic use is as follows:

User:Alice places a template (to be created, let's call it {{remind me}}) inside a thread of which they wish to be reminded. The user specifies the date/time at which the reminder should be given as an argument of the template (either as "on Monday 7th" or "in three days" - syntax to be discussed later). At the given date, a bot "notifies" Alice.

On a policy level, I see a few questions:

  1. What kind of notification?
  2. Can Alice ask for Bob to be notified?
  3. Should we rate limit (and if so how?)
  4. Where, if anywhere, should we get consensus for all that?

Depending on the choice for each of those, this will change the amount of technical work needed, but as far as I can tell, those questions entirely define the next steps (coding/testing/approval request etc.). Please discuss here if I missed something, but below to answer the questions. TigraanClick here to contact me 13:36, 25 November 2018 (UTC)

Discussion on the spec

I made a separate section for this because I am almost sure of the questions that need asking but less sure of the answers they should get. What follows is my $0.02 for each:

  1. The simplest would be a user talk page message or a ping from the page from where the notification originates, but maybe WP:ECHO can allow better stuff. A user talk message is easy to code (read: I know how to do it), but it might lead to some clutter.
  2. After thinking it over, it is not obviously a bad thing that this is technically feasible (we can certainly decide it is against policy to do it or restrict the conditions; the question is whether this should be technically impossible). Alice has to post something to cause the bot to annoy Bob, so it is fairly similar to pings, which no one would call to terminate because of their potential for abuse. On the other hand, surely it would be OK, and have some use case, for a user to notify their own sockpuppet (e.g. I notify myself from my bot account). The only real problems I can imagine for cross-user postings are "privilege escalation" stuff:
    1. A user could cause the bot to notify users who have a protected talk page (at a protection level that the bot can access but not the user)
    2. A user could place many such templates in a single edit, causing multiple notices to be sent in a short time (while not being caught by rate limits on the servers)
  3. I do not think there is any legitimate-use reason for restricting the number of notifications. There might be a technical reason, to avoid having large amounts of pending notifications depending on how the bot works (see previous discussion), or counter-vandalism reasons (e.g. if allowing not only self-notifications, then allowing only X pending notifications originating from a single user at a given time, so that a spam-notifier vandal cannot get far). If we do not allow cross-user notifications, I think we can go without rate limiting until the performance becomes an issue.
  4. The bot request page is not watched a lot, but I am not sure where else it can go. Maybe worth cross-posting to WP:VPP?

(Ping: Fish and karate) TigraanClick here to contact me 13:36, 25 November 2018 (UTC)

Answering the set of questions (this is as I see the bot working - and note when it comes to this kind of thing I'm a vision man, not details!)
  1. What kind of notification?
    A message on your talk page ("Hi (user name), here's the reminder you asked for - (link)"). People could, I guess, if they prefer, have the bot post to a defined subpage (I would see this as a "phase 2" development). A small, unobtrusive ping might also work, but that requires an edit, and I can see a busy thread that lots of people are interested in getting peppered with these pings being an unpopular choice. Keeping it to the user's own talk page is less imposing on other users.
  2. Can Alice ask for Bob to be notified?
    No. Alice can only ask for Alice to be notified. Let's keep it simple, at least initially.
  3. Should we rate limit (and if so how?)
    Initially I think it's not a terrible idea to ensure the capability is there in the code to throttle it in case it starts causing (as yet unforeseen) issues. The bot can just refuse to provide more than X reminders a day if issues arise.
  4. Where, if anywhere, should we get consensus for all that?
    Wikipedia:Bots/Requests for approval to sign off the bot, presumably. I think WP:VPP for suggestions would also be good.
A note to say thank you, Tigraan; I appreciate the thought and effort you're putting into this. Fish+Karate 09:23, 26 November 2018 (UTC)
@Fish and karate: About 4: if we go to BRFA with the whole agreement of two of us, they are going to tell us to get consensus that the task is useful somewhere else. Per the guide at WP:BRFA: If your task could be controversial (e.g. (...) most bots posting messages on user talk pages), seek consensus for the task in the appropriate forums. (...) Link to this discussion from your request for approval. Again, VPP is the catch-all, but that's because I have no other idea. Maybe a link from the talk page of WP:PING as well, since the functionality is closely related.
(Oh, and save your thanks for after the bot sees the light of day.) TigraanClick here to contact me 15:31, 26 November 2018 (UTC)
Tigraan, I would absolutely ask for a well-advertised discussion with consensus for this bot, if I came across the BRFA. I think it's an idea I would use (I do on reddit!), but I could see it becoming unintentionally disruptive (Let's say - 50 people use the "remindme" template on a popular arbcom case or RFA). I'm not sure what the best way to address that would be. SQLQuery me! 22:37, 3 December 2018 (UTC)
I see the point, perhaps this will remain a pipe dream. If someone can think of a way to work around that, that would be welcome. Fish+Karate 15:07, 6 December 2018 (UTC)
Since m:Community Wishlist Survey 2019/Notifications/Article reminders was #8 on the wishlist, hopefully it gets implemented in a way that works more like the watchlist: click a button, and it sends you a notification when the time is up without having to put a template on the page for everyone else to be annoyed by. Anomie 02:03, 4 December 2018 (UTC)

| pushpin_map = Czechia

Please make this COMMONNAME change:

before → after:

| pushpin_map = Czechia Prague Central  →  | pushpin_map = Czech Republic Prague Central
| pushpin_map = Czechia Prague Charles Bridge  →  | pushpin_map = Czech Republic Prague Charles Bridge
| pushpin_map = Czechia  →  | pushpin_map = Czech Republic

The number of spaces (or tabs) may vary.

Thanks Chrzwzcz (talk) 12:54, 8 December 2018 (UTC)

The location map modules in question, e.g. Module:Location map/data/Czechia Prague Central, appear to be the module equivalent of redirects, so as far as I can tell, these edits would be cosmetic edits (no effect on the rendered page). Chrzwzcz, is there a consensus to delete these redirects? Also, is there a reason that you are requesting only these three and not the other eight or so maps that start with "Module:Location map/data/Czechia"? – Jonesey95 (talk) 13:29, 8 December 2018 (UTC)
Czechia is not a commonname (see the Czech Republic talk page), so even such cosmetic "invisible" occurrences are not welcome. The other 8 are no longer used (I made some individual changes and some renaming myself). All would be unified as "Module:Location map/data/Czech Republic"*. I am not asking to delete those 11 Czechia redirect pages, but to change the links to them. Chrzwzcz (talk) 14:07, 8 December 2018 (UTC)
Your request appears to be in conflict with this RFC. – Jonesey95 (talk) 14:33, 8 December 2018 (UTC)
Yeah, but another discussion clearly stated what COMMONNAME still is (not Czechia), and Czechia mentions are still being deleted (rewritten to Czech Republic). In other words Czechia is not allowed in totally random articles, basically it is OK only when citing the source word by word (like ISO standards, UN list or EU document). Chrzwzcz (talk) 15:04, 8 December 2018 (UTC)
It's clear to me that WP:COSMETICBOT and WP:NOTBROKEN both apply here. --Redrose64 🌹 (talk) 17:22, 8 December 2018 (UTC)

Bypassing redirects in election navboxes

Recently, consensus was reached to move all election and referendum articles to have the year at the front. A bot, TheSandBot, was created to move the articles (approximately 35,000) to the new titles. However, the bot did not change navboxes to use the new format. Per WP:BRINT, redirects from navigational templates should be bypassed to allow readers to see which page they are on in the template. This is a lot of simple work which would have to be done by humans if a bot were not created. Danski454 (talk) 15:15, 9 December 2018 (UTC)

@Danski454: Number 57 has volunteered to do what is necessary, but it is also worth noting that there is another bot already doing this task (the name just escapes me at the moment). --TheSandDoctor Talk 07:48, 11 December 2018 (UTC)
@Danski454: As you can see from my contributions, I'm about midway through this task. I'm not sure it could be done by a bot as there are a few oddities that need checking individually. Cheers, Number 57 08:34, 11 December 2018 (UTC)

Creating redirect to main userpage from subpages.

 Y Done - If anyone wants User:RF1 Bot to run for you as well, let me know.

I'd like to have a bot that, once a month, creates a page at this month's subpage for talk archives, like User:RhinosF1/Archives_2018/10_(October), and redirects it to my main user page.

How would it be coded?

Happy to run it semi-automatically and monitored. It would not run outside my own userspace.

RhinosF1 (talk) 14:43, 16 December 2018 (UTC)

Changed URL to wikilink to avoid mobile link. Primefac (talk) 14:44, 16 December 2018 (UTC)
I personally see zero reason for a bot to do this - you don't need a User: page that corresponds to a User_talk: page, especially when it's a user subpage that is just a redirect to the userpage itself. Primefac (talk) 14:46, 16 December 2018 (UTC)
@Primefac: Again, I like having my userspace like this. I'm pretty decent at Python but am not sure how to use APIs to do this or where to start. If somebody could give me some example foundation code, that would be excellent. RhinosF1 (talk) 15:03, 16 December 2018 (UTC)
I'm not saying you can't set up your userspace like this, I'm saying that there's not much of a reason to do so. If there's a consensus that says a user can create a bot that will edit once a month so that a pointless redirect can be created, then it might pass WP:BOTREQUIRE. This, I suppose, is the purpose of this thread, but if there is consensus against this task (based on this thread) then chances are you won't be able to get your bot. Primefac (talk) 16:05, 16 December 2018 (UTC)
Not sure why this is being done either, but couldn't RhinosF1 simply manually create the redirects ahead of time for the next year, no bot required. -- GreenC 16:14, 16 December 2018 (UTC)
If you are decent at python you should look into mw:API:Client code#Python. That said, your bot would need to be approved at WP:BRFA. No comment here about the socio-political feasibility. --Izno (talk) 16:15, 16 December 2018 (UTC)
Note WP:BOTUSERSPACE: "any bot or automated editing process that affects only the operator's or their own userspace (user pages, user talk pages, user's module sandbox pages and subpages thereof), and which are not otherwise disruptive, may be run without prior approval." Chances are that one redirect per month isn't going to be disrupting things. Anomie 02:38, 17 December 2018 (UTC)
I thought I had commented earlier, but I agree with Anomie that this is most likely covered by WP:BOTUSERSPACE and therefore wouldn't be a problem. --TheSandDoctor Talk 17:34, 17 December 2018 (UTC)
Thanks for your support. I've never used APIs before; does anyone have any example code for creating a page with a redirect? RhinosF1 (talk) 17:48, 17 December 2018 (UTC)
@RhinosF1: What programming language(s) are you familiar with? I do most of my work here in Python. If you give me specific details of what it is to do etc, I will happily do it for you (if you want) and then you can learn off of the code. For the next couple of weeks, I have a decent amount of free time on my hands. --TheSandDoctor Talk 18:14, 17 December 2018 (UTC)

─────────────────────────

I use python 2.7 at home RhinosF1 (talk) 18:18, 17 December 2018 (UTC)
@RhinosF1: Then I would recommend checking out mwclient as that should cover your needs, but it is only in 3.0+ if I recall correctly. As I said above, if you want I can make it for you and then link you the code for future reference. You could also check out my repositories as they are all relevant (particularly this one). --TheSandDoctor Talk 18:37, 17 December 2018 (UTC)
If you're happy to, making it would be great as I've never done anything like it before. RhinosF1 (talk) 19:00, 17 December 2018 (UTC)
@RhinosF1: Just want to make sure that this is clear before going ahead with anything: So every month you want it to create User:RhinosF1/Archives_YEAR/MO_NUM_(MO_NAME), which redirects to your user page? --TheSandDoctor Talk 19:23, 17 December 2018 (UTC)
Nearly, just like https://en.m.wikipedia.org/wiki/User:RhinosF1/Archives2018/12_(December)
That User:RhinosF1/ArchivesYYYY/MM_(MONTH) redirecting to User:RhinosF1
Thanks,
RhinosF1 (talk) 19:31, 17 December 2018 (UTC)
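For reference, the whole monthly task is only a few lines. The thread's actual code uses mwclient, but the equivalent with Pywikibot is sketched below (page title built from the spec just above):

<syntaxhighlight lang="python">
from datetime import date

import pywikibot

site = pywikibot.Site('en', 'wikipedia')
today = date.today()
title = f'User:RhinosF1/Archives{today.year}/{today.month:02d}_({today.strftime("%B")})'

page = pywikibot.Page(site, title)
if not page.exists():
    page.text = '#REDIRECT [[User:RhinosF1]]'
    page.save(summary='Create monthly archive redirect (sketch)')
</syntaxhighlight>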
I've just created an account for it, User:RF1_Bot. Use its sandbox if you want, and feel free to create any other subpages you need for source code etc. RhinosF1 (talk) 19:50, 17 December 2018 (UTC)
@RhinosF1: Here, though you are going to need to install mwclient via pip. If you would rather that I run it, you may email me the bot account's login info. That said, if you do choose that option, though I would never attempt anything, I would strongly recommend making sure that its password is unique for best security practice purposes. This is critical with an account acting as a bot, regardless of not being flagged. I have commented where the code itself needs to be changed in order to function and produce the desired range. It would be simpler just to make a year or two's worth at once, and that is how the code has been set up. --TheSandDoctor Talk 01:12, 18 December 2018 (UTC)
I'm happy running it myself, thanks for the help. Am I definitely safe to run it without BRFA approval?
RhinosF1 (talk) 06:36, 18 December 2018 (UTC)
@RhinosF1: From WP:BOTUSERSPACE: "In addition, any bot or automated editing process that affects only the operator's or their own userspace (user pages, user talk pages, user's module sandbox pages and subpages thereof), and which are not otherwise disruptive, may be run without prior approval.". You are good so long as you don't go crazy running it tons. You will also need to update the call_home method to reflect your bot's username and whether or not you wish for such a method. If you have any questions about the code or operating of a bot, please feel free to let me know (if so, ping please). --TheSandDoctor Talk 08:45, 18 December 2018 (UTC)

It's showing an error: AssertUserFailedError: By default, mwclient protects you from accidentally editing without being logged in. If you actually want to edit without logging in, you can set force_login on the Site object to False. RhinosF1 (talk) 15:58, 18 December 2018 (UTC)

@RhinosF1: Because you need to change the login details in credentials.txt to those of your bot instead of "BOT" and "PASS". --TheSandDoctor Talk 16:37, 18 December 2018 (UTC)
@TheSandDoctor: I have. RhinosF1 (talk) 16:42, 18 December 2018 (UTC)
 
Hello. Please check your email; you've got mail!
It may take a few minutes from the time the email is sent for it to show up in your inbox. You can remove this notice at any time by removing the {{You've got mail}} or {{ygm}} template.
@RhinosF1: Oh, yeah. Sorry. Where it has try: pass in main(), remove "pass" and uncomment the site.login bit. It should then work. When I was testing, didn't want to actually edit so I removed that and forgot to put it back before pushing. --TheSandDoctor Talk 16:58, 18 December 2018 (UTC)
Check the repo for what I mean, I have pushed the correct version. Just take that snippet and replace the line in yours. --TheSandDoctor Talk 17:00, 18 December 2018 (UTC)
That has just worked in test. Had to add a delay to stop rate limiting but apart from that fine. RhinosF1 (talk) 17:08, 18 December 2018 (UTC)
@RhinosF1: Awesome! I'm glad I could help. As for the rate limiting, that is something that happens for non-bot flagged accounts. I am glad that you were able to add a delay easily. --TheSandDoctor Talk 21:25, 18 December 2018 (UTC)
@TheSandDoctor: For a 'bot' flag, do I need to go through BRFA? RhinosF1 (talk) 21:27, 18 December 2018 (UTC)
@RhinosF1: Yes and you would also need a valid reason for the flag (ie moving ~40 thousand pages, creating ~40 thousand redirects). --TheSandDoctor Talk 21:53, 18 December 2018 (UTC)

Automatic US congressional Election Result Updating bot?

It's surprising, given how long Wikipedia has been around and how easily automatable the task is, that no bot exists to automatically update US congressional district pages, which are almost uniformly a mess. There is no template for how to present results: some pages go in reverse chronological order while others don't, there is zero consistency in presentation, and many pages haven't been updated since 2014.

Going through and manually editing all 435 pages would be extremely tedious, so the most logical solution is to create a bot dedicated to the task, which can not only update the pages but fix them.

The quality of the results section of congressional district pages is abysmal and easy to fix: simply create a standard for how congressional district pages display results, and then create a bot to automatically generate election templates and add them to each page following that standard. I'm a bit of a newb, so I don't know exactly how we would go about agreeing on a standard page, but I'm sure there is a process.

I would be open to coding the bot myself if somebody more experienced with them is willing to offer help/assistance.

Some examples of poor-quality pages:

-- — Preceding unsigned comment added by Zubin12 (talkcontribs)

This might be better done as a Lua template with data files anyone can edit and template options anyone can modify. But the data still has to be entered so there is no savings of labor or guarantee of staying up to date, unless someone made a bot to pull data from external sources into the Lua tables. The benefit would be consistent display. The downsides would be a system working outside normal wikisource which creates other complications. All this is possible but not simple. -- GreenC 16:26, 18 December 2018 (UTC)
Given that there are APIs that allow one to automatically access election information, it would seem pretty simple to create a template and have the script iterate over every district. Are there any good places to learn more about Lua templates on Wikipedia? Zubin12 (talk) 01:16, 19 December 2018 (UTC)
Wikipedia:Lua is the start. You cannot access external APIs from Lua. --Izno (talk) 02:21, 19 December 2018 (UTC)
If Lua doesn't allow external APIs, then what exactly is wrong with using Pywikibot? Zubin12 (talk) 03:15, 19 December 2018 (UTC)
(edit conflict) Not possible to pull external API data via Lua (except from Wikidata). It would require a bot to get the external API data then update a Lua data file. Similar to Module:Calendar date which reads from the data file Module:Calendar date/Events. GreenC 02:25, 19 December 2018 (UTC)
This kind of problem might be fixable using Wikidata or commons:Commons:Data tables. --Izno (talk) 02:21, 19 December 2018 (UTC)
You might mean Template:Wikidata list which could work but still need to populate Wikidata somehow. -- GreenC 02:29, 19 December 2018 (UTC)
No, I don't. But yes, the work would be in populating Wikidata (for the former suggestion). --Izno (talk) 02:32, 19 December 2018 (UTC)
OK, then I don't know what you mean; the commons:Commons:Data tables link doesn't work. -- GreenC 02:38, 19 December 2018 (UTC)
I'm not sure where exactly Izno was trying to link to, but mw:Help:Tabular Data may be relevant. Anomie 03:29, 19 December 2018 (UTC)
That's the one. --Izno (talk) 03:49, 19 December 2018 (UTC)
I don't understand. Are you saying that the data should first be uploaded to Wikidata and then used by a bot? Zubin12 (talk) 03:15, 19 December 2018 (UTC)
A bot would first put the data somewhere convenient (Wikidata/Commons) and then we could include those data using a template here. --Izno (talk) 03:49, 19 December 2018 (UTC)
To summarize a number of ways to store and access the data:
In all three cases the data would require a bot to keep it in sync with the remote API. -- GreenC 03:53, 19 December 2018 (UTC)
Thanks. It would seem like using tabular data would make the most sense, given that it's easy to find election results stored in CSV format. How exactly would one go about uploading the data? And do I need to do anything further to get permission? Zubin12 (talk) 05:32, 19 December 2018 (UTC)
I've never used .tab on Commons before but agree it is probably the best option, because it's universally available to all wiki languages and it's easy to import data compared to Wikidata or Lua tables. I would encourage developing a bot to keep the data up to date automatically, otherwise it will depend on manual updates, which are error prone. A bot will require a bot flag (bot permission). See Commons:Commons:Bots. -- GreenC 19:03, 19 December 2018 (UTC)
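As a rough illustration of the Commons .tab route, the bot-side write could look like this; the JSON layout follows mw:Help:Tabular Data but should be verified against it, the Data: page title is hypothetical, and where the election results come from (the external API) is left out:

<syntaxhighlight lang="python">
import json

import pywikibot

commons = pywikibot.Site('commons', 'commons')

def save_results_table(district, rows):
    """rows: list of [year, candidate, party, votes] pulled from an external results source."""
    tab = {
        'license': 'CC0-1.0',
        'description': {'en': f'Election results for {district}'},
        'schema': {'fields': [
            {'name': 'year', 'type': 'number', 'title': {'en': 'Year'}},
            {'name': 'candidate', 'type': 'string', 'title': {'en': 'Candidate'}},
            {'name': 'party', 'type': 'string', 'title': {'en': 'Party'}},
            {'name': 'votes', 'type': 'number', 'title': {'en': 'Votes'}},
        ]},
        'data': rows,
    }
    page = pywikibot.Page(commons, f'Data:US House results/{district}.tab')  # hypothetical title
    page.text = json.dumps(tab, indent=2)
    page.save(summary='Update election results table (sketch)')
</syntaxhighlight>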

Getting a list of data from "lblN" parameters of Template:Infobox character

I'm wondering if someone can help me out with a bot that would go over the articles listed in Category:Articles using Infobox character with multiple unlabeled fields, get the text of the "lblN" parameters (|lbl1=, |lbl2=, etc.) and output it to a list/table so that I can see what text is being used and how many times? If this can be combined with the unknown fields used at articles listed at Category:Pages using infobox character with unknown parameters that would be even better. Is this possible? Thanks. --Gonnym (talk) 10:56, 20 December 2018 (UTC)
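For reference, a quick way to pull these counts without AWB would be something like the following sketch (Pywikibot plus mwparserfromhell; it simply tallies every lblN value found in the infobox on pages in the first category):

<syntaxhighlight lang="python">
from collections import Counter

import pywikibot
import mwparserfromhell

site = pywikibot.Site('en', 'wikipedia')
cat = pywikibot.Category(
    site, 'Category:Articles using Infobox character with multiple unlabeled fields')

counts = Counter()
for page in cat.articles():
    for tpl in mwparserfromhell.parse(page.text).filter_templates():
        if not tpl.name.matches('Infobox character'):
            continue
        for param in tpl.params:
            name = str(param.name).strip()
            if name.startswith('lbl') and name[3:].isdigit():
                counts[str(param.value).strip()] += 1

for label, n in counts.most_common():
    print(f'{n}\t{label}')
</syntaxhighlight>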

If you can persuade someone to add TemplateData to the template's documentation page, the next monthly report will list all of the parameters in use and their values. – Jonesey95 (talk) 11:58, 20 December 2018 (UTC)
@Jonesey95: I've added it now, but looking at the report for Infobox television episode it seems that when there are more than 50 unique values, it doesn't list them. Am I not looking in the right place? --Gonnym (talk) 14:46, 20 December 2018 (UTC)
I don't know of an easy way to get those values, but if you return here in early January after the report is generated, you'll be able to supply a list of articles for someone to analyze for this parameter. – Jonesey95 (talk) 23:46, 20 December 2018 (UTC)
But I already supplied a list of articles - those in Category:Articles using Infobox character with multiple unlabeled fields. --Gonnym (talk) 13:43, 21 December 2018 (UTC)
My mistake. Sorry about that. – Jonesey95 (talk) 14:06, 21 December 2018 (UTC)
 Y Done Hi @Gonnym: I wrote a custom module to pull the data for the first part of your request using AWB. The data and the custom module script are here. I picked up 35 labels supported by the template. Please click Edit source and copy the data to a text file. Ganeshk (talk) 03:07, 8 January 2019 (UTC)
Thank you very much Ganeshk! --Gonnym (talk) 19:39, 8 January 2019 (UTC)

Bot to convert Template:Fb cl2 team transclusions to use Module:Sports table

I have tried my hand at creating a bot for this but it is super complicated. I've got a script that I'm running but have yet to get the results reliable enough to be able to use it as a bot. Right now I basically use it to just expedite the process. Basically I copy the table into my code, run it and then copy the results back into the browser. The issue is that I have to manually adjust each result before saving. The need for this/decision to make this change is all covered in this TFD. If anyone is willing to take this on, please let me know? I'd be very eager to work with you and help in any way I can. --Zackmann (Talk to me/What I been doing) 19:42, 20 December 2018 (UTC)

I am more than happy to help, but am going to need some specifics on what exactly needs doing? How are you converting them? --TheSandDoctor Talk 10:25, 24 December 2018 (UTC)
@Zackmann08: Oops, forgot ping. --TheSandDoctor Talk 10:27, 24 December 2018 (UTC)
TheSandDoctor, the bot may use User:Frietjes/fb.js to convert Template:Fb cl2 team transclusions to use Module:Sports table. Hhkohh (talk) 13:37, 3 January 2019 (UTC)
TheSandDoctor, your bot can convert them if you are willing to do the following conversion, because Frietjes did not develop a script which can convert them into Module:Sports results
Plastikspork said they were doing something with this. Galobtter (pingó mió) 10:56, 24 December 2018 (UTC)
Frietjes has a script that does it, but it requires human input in the process. I would rather have the tables converted consistently, with meaningful team abbreviations, than some generic AAA, BBB, CCC, etc. The script requires human input to help determine the abbreviations. Thanks! Plastikspork ―Œ(talk) 13:04, 25 December 2018 (UTC)
Plastikspork, maybe the team abbreviations could use T1, T2, T3 and so on, and the competition abbreviations C1, C2, C3 and so on, in order to support a bot task. Hhkohh (talk) 13:22, 3 January 2019 (UTC)
@Plastikspork, Hhkohh, and Galobtter: Is there a table of abbreviations? I am not very familiar with the sport. --TheSandDoctor Talk 18:17, 3 January 2019 (UTC)
TheSandDoctor, I could not find one, so I have asked at WT:FOOTY. The fb script can provide default abbreviations, but they need adjusting if necessary. Hhkohh (talk) 18:49, 3 January 2019 (UTC)
@Hhkohh: I have tried using the userscript, but it produces no difference? --TheSandDoctor Talk 18:51, 3 January 2019 (UTC)
Why not write code to check the first three letters of the team name, capitalise them, check whether that abbreviation already exists in the list, and if it does, look at the first 3+n letters until you get a distinct hit? Do the abbreviations have to be only three letters? There's no master abbreviation table, and I've never used one when using the new template. SportingFlyer talk 18:51, 3 January 2019 (UTC)
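
A rough sketch of that prefix-extension idea (the team names are purely illustrative, and the minimum length of three letters is an assumption):

def make_abbreviations(team_names, min_len=3):
    """Assign each team a distinct upper-case abbreviation built from its leading letters."""
    used = set()
    abbreviations = {}
    for name in team_names:
        letters = "".join(c for c in name.upper() if c.isalpha())
        length = min_len
        abbr = letters[:length]
        # Extend the prefix until it no longer clashes with one already assigned.
        while abbr in used and length < len(letters):
            length += 1
            abbr = letters[:length]
        used.add(abbr)
        abbreviations[name] = abbr
    return abbreviations

print(make_abbreviations(["Rangers", "Raith Rovers", "Ross County"]))
# {'Rangers': 'RAN', 'Raith Rovers': 'RAI', 'Ross County': 'ROS'}
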
TheSandDoctor, which article? Hhkohh (talk) 18:53, 3 January 2019 (UTC)
@Hhkohh: 2003–04 Rangers F.C. season, 1901–02 East Stirlingshire F.C. season, 1903–04 East Stirlingshire F.C. season....literally every article I have tried it on (there are a couple more I forget) --TheSandDoctor Talk 18:56, 3 January 2019 (UTC)
─────────────────TheSandDoctor, but why does it work for me? [1] You need to click the "convert fb" button (under the "Page information" button) on the edit page. Then the browser will show an input box for you. Hhkohh (talk) 19:05, 3 January 2019 (UTC)
@Hhkohh: I'm not sure. For me I click through all of the boxes and then it just refreshes to "No difference" and the edit was not logged. --TheSandDoctor Talk 19:14, 3 January 2019 (UTC)
Pinging Frietjes Hhkohh (talk) 19:24, 3 January 2019 (UTC)
TheSandDoctor, try this one. I have converted it successfully but I did not save in order to let you practice Hhkohh (talk) 02:31, 4 January 2019 (UTC)
@Hhkohh: Tried in Chrome: nothing. Firefox? Nothing. Not sure why it doesn't work for me. --TheSandDoctor Talk 05:36, 4 January 2019 (UTC)
TheSandDoctor, I run the fb script on my mobile phone in the Safari browser. Hhkohh (talk) 14:37, 4 January 2019 (UTC)
For team season articles, where possible, we should transclude the tables from the main season article. I have been working on the {{fb cl team 2pts}} tables, and once that is done, I will go back to the 3pts tables. Frietjes (talk) 19:26, 7 January 2019 (UTC)
@Zackmann08:, please stop converting as Frietjes is taking care of it, thanks Hhkohh (talk) 08:47, 9 January 2019 (UTC)

MOS:ACCESS#Text / MOS:FONTSIZE compliance

Hi. MOS:ACCESS#Text / MOS:FONTSIZE are clear. We are to "avoid using smaller font sizes in elements that already use a smaller font size, such as infoboxes, navboxes and reference sections." However, many infoboxes use {{small}} or the equivalent HTML <small> tag, especially around degrees earned (here's one example I corrected yesterday). I used AWB to remove small font from many U.S. politician infoboxes of presidents, senators, and governors, but there are so many more articles that have them. Here's an example for a TV station. I've noticed many movies and TV shows have small text in the infobox as well. Since I cannot calculate how many articles violate this particular rule of MOS, I would like someone to write a bot to remove small text from infoboxes of all kinds. – Muboshgu (talk) 22:04, 20 December 2018 (UTC)
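
For illustration, a hedged sketch of what a removal pass might look like, using mwparserfromhell; which infobox templates and parameters are actually in scope, and whether the edits are worthwhile at all, is exactly what the replies below discuss.

import re
import mwparserfromhell

def strip_small_from_infoboxes(wikitext):
    """Remove {{small}} wrappers and <small> tags from infobox parameter values."""
    code = mwparserfromhell.parse(wikitext)
    for tpl in code.filter_templates():
        if not str(tpl.name).strip().lower().startswith("infobox"):
            continue
        for param in tpl.params:
            value = str(param.value)
            # Naive unwrap of {{small|...}}; values containing nested templates would need real parsing.
            value = re.sub(r"\{\{\s*small\s*\|(.*?)\}\}", r"\1", value, flags=re.I | re.S)
            value = re.sub(r"</?small\s*/?>", "", value, flags=re.I)
            param.value = value
    return str(code)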

At least on my screen, your edit had no effect, because as far as I know, there is some sort of CSS style that limits infobox font size to a minimum of 85%. I am pretty sure I just saw that described the other day, but my searches for it have turned up nothing. Maybe someone like TheDJ would know.
If I am correct, that means that edits to remove small templates and tags from infoboxes would be cosmetic edits, which are generally frowned upon. However, there are a heck of a lot of unclosed <small>...</small> tags within infoboxes, along with small tags wrapping multiple lines, both of which cause Linter errors, so it may be possible to get a bot approved to remove tags as long as fixing Linter errors is in the bot's scope. I welcome corrections on the four things I got wrong in these four sentences. – Jonesey95 (talk) 23:58, 20 December 2018 (UTC)
It's not "cosmetic". It's an accessibility issue. In this version, the BS, MS, and JD in the infobox are smaller than 85%. – Muboshgu (talk) 05:47, 21 December 2018 (UTC)
FWIW, Firefox's Inspector tells me that "BS" in that version is exactly 85%. – Jonesey95 (talk) 10:29, 21 December 2018 (UTC)
Odd. That was not the assessment of User:Dreamy Jazz. [2] – Muboshgu (talk) 20:42, 22 December 2018 (UTC)
Fascinating. I just looked at the two revisions of Brian Bosma in Chrome while not logged in, and I definitely see a size difference in the "BS" and "JD" characters. So these would not be cosmetic edits after all, at least for some viewers using some browsers. (I have struck some of my previous comments.) – Jonesey95 (talk) 21:59, 22 December 2018 (UTC)
P.S. I found the reference to the small template sizing text at 85% at Template:Small. It looks like I may have misinterpreted that note. – Jonesey95 (talk) 01:42, 23 December 2018 (UTC)

───────────────────────── @Jonesey95 and Muboshgu: Hello. Although the 85% font-size is defined, the computed value of the font-size is below 11.9px (it is 10.4667px). This is because font-size percentages work based on the parent container, not the document (see 1 under percentages). In this case the infobox has already decreased the font-size to 88% of the document, so the font-size computed from the {{small}} tag will be 74.8% of the size of the rest of the document (0.88 * 0.85 = 0.748). This is the case in Firefox, Chrome, Edge (10.4px), Opera and Internet Explorer. This behaviour is the standard and so will be experienced in all browsers. Dreamy Jazz 🎷 talk to me | my contributions 10:46, 23 December 2018 (UTC)
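
To make the arithmetic concrete, a tiny illustration assuming a 14px effective base font size (the exact base depends on skin and browser settings):

base_px = 14.0          # assumed document font size
infobox = 0.88          # infobox font-size relative to the document
small = 0.85            # {{small}} / <small> relative to its parent element

print(base_px * small)            # 11.9 px  -> the 85% floor MOS:FONTSIZE has in mind
print(base_px * infobox * small)  # ~10.47 px -> below it, matching the figures above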

Yes, here's a demo of what happens when percentages get enclosed by other percentages: Text Text Text Text Text . That goes to five levels, each being 95% of the enclosing element. --Redrose64 🌹 (talk) 12:42, 23 December 2018 (UTC)
That is helpful. I discovered that I have set my Firefox preferences to prevent the font size from going below 11 pt, which enforces MOS for me. But in Chrome, which I have left unconfigured, that text gets smaller. By all means, let's remove instances of <small>...</small> and {{small}} (and its size-reducing siblings) from infoboxes, both in Template space and in article space. – Jonesey95 (talk) 14:31, 23 December 2018 (UTC)
Yes, let's. Thanks for that clarification Jonesey95. – Muboshgu (talk) 15:46, 23 December 2018 (UTC)

Category:Pages using infobox bridge with unknown parameters

  Resolved

Category:Pages using infobox bridge with unknown parameters has, at 'L', probably 2500 articles which have obsolete parameters. Could they be removed? They are |lat=, |long=, |map_cue= and |map_text=. I can then deal with the proper errors. Twiceuponatime (talk) 11:11, 24 December 2018 (UTC)

Picking an article at random from the "L" section, I found Folly Bridge, which has four unsupported parameters, all of which are empty. Removing them would be a cosmetic edit except for the removal of the hidden category, which could more easily be accomplished by setting "ignoreblank = y" in the unknown parameter check. Blank unsupported parameters do no harm and are usually ignored in the error check. I don't see a discussion on the template's talk page that resulted in the non-standard removal of "ignoreblank = y"; I recommend that it be reinstated so that the tracking category shows only actual errors. – Jonesey95 (talk) 21:34, 24 December 2018 (UTC)
For the record it's |ignoreblank=1, but I've made that change since it's likely uncontroversial. For whatever reason the TemplateData tracking actually shows no invalid params in use, which I don't think I've ever seen in an infobox. My bot does have clearance to remove invalid params from template usage, but only after a discussion determines there are simply too many to remove manually. Primefac (talk) 22:24, 24 December 2018 (UTC)
The category is empty now. It looks like the TemplateData report was correct. – Jonesey95 (talk) 00:40, 26 December 2018 (UTC)

Direct links to election/referendum articles

Recently, consensus was reached to move all the articles on elections and referendums to have the year at the front (e.g.: "United States Senate elections, 2018" was moved to "2018 United States Senate elections"; see Wikipedia talk:Naming conventions (government and legislation)/Archive 2#Proposed change to election/referendum naming format, issue resolved on 20 November 2018). This left us with a huge number of redirects, sometimes double redirects. I was wondering whether a bot could fix all those links. --Checco (talk) 09:38, 28 December 2018 (UTC)

  •   Comment: This post might be of use. RhinosF1 (talk) 09:48, 28 December 2018 (UTC)
  • There are several bots that fix double redirects; we need not give any special instructions to these bots, since the double redirects will be detected automatically. "Fixing" a single redirect is against both WP:NOTBROKEN and WP:COSMETICBOT. --Redrose64 🌹 (talk) 21:55, 28 December 2018 (UTC)

Section sizes

Please can someone add {{Section sizes}} to the talk pages of ~6300 articles that are longer than 150,000 bytes (per Special:LongPages), like in this edit?

The location is not critical, but I would suggest giving preference to putting it immediately after the last WikiProject template, where possible. Omit pages that already have the template. Andy Mabbett (Pigsonthewing); Talk to Andy; Andy's edits 15:59, 29 December 2018 (UTC)
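
A minimal sketch of the placement logic described above (the regexes are assumptions and do not handle WikiProject banner shells or banners containing nested templates):

import re

def add_section_sizes(talk_wikitext):
    """Insert {{Section sizes}} after the last WikiProject banner, if not already present."""
    if re.search(r"\{\{\s*[Ss]ection sizes\s*[|}]", talk_wikitext):
        return talk_wikitext  # already there, skip the page
    banners = list(re.finditer(r"\{\{\s*WikiProject [^{}]*\}\}", talk_wikitext))
    if banners:
        pos = banners[-1].end()
        return talk_wikitext[:pos] + "\n{{Section sizes}}" + talk_wikitext[pos:]
    return "{{Section sizes}}\n" + talk_wikitext  # no banner found; fall back to the top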

A reasonable request, but I think it might need some sort of consensus to implement. Is there a WikiProject interested in using this (very-recently-created) template in order to improve Wikipedia? Primefac (talk) 15:13, 30 December 2018 (UTC)
Wikipedia:Village_pump_(technical)#Analysing_long_articles (started by Andy). There was another thread on long articles recently, but it must have been archived as I can't find it; it was a call to arms on how to deal with breaking them up. -- GreenC 15:23, 30 December 2018 (UTC)
Cool. Primefac (talk) 15:42, 30 December 2018 (UTC)
Created a Village Pump (proposal) at Primefac's request for more discussion. -- GreenC 19:26, 3 January 2019 (UTC)

Add Template:reflist-talk

Check 5.7 million mainspace talk pages for sections that would benefit from a {{reflist-talk}}.

Example edit.

Scope: for each talk page, extract each level-2 section. For each section, check for the existence of reference tags, i.e. <ref></ref>. If they exist, check for the existence of {{reflist-talk}} or <references/>. If neither exists, add {{reflist-talk}} at the end of the section (optionally in a level-3 subsection called "References").

-- GreenC 16:27, 1 January 2019 (UTC)
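
A sketch of that scope in Python (regex-based, so it would wrongly match refs inside HTML comments; that is where the HTML check mentioned just below helps):

import re

def add_reflist_talk(talk_wikitext):
    """Append {{reflist-talk}} to level-2 sections that have <ref> tags but no reference list."""
    # Split on level-2 headings, keeping the heading lines themselves.
    parts = re.split(r"(?m)(^==[^=].*==[ \t]*\n)", talk_wikitext)
    out = [parts[0]]  # anything before the first level-2 heading
    for heading, body in zip(parts[1::2], parts[2::2]):
        has_refs = re.search(r"<ref[ >]", body, flags=re.I)
        has_list = re.search(r"\{\{\s*reflist-talk|<references", body, flags=re.I)
        if has_refs and not has_list:
            body = body.rstrip("\n") + "\n{{reflist-talk}}\n\n"
        out.append(heading + body)
    return "".join(out)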

A more reliable method is to search the HTML for <ol class="references"> - this will always exist if there is a <ref></ref> somewhere in the page, regardless of the existence of {{reflist-talk}} or <references/>, and it will correctly handle things like <!-- <ref></ref> --> -- GreenC 16:36, 1 January 2019 (UTC)
@GreenC: Not true: it's also present in pages with an autogenerated reflist, such as the previous version. --Redrose64 🌹 (talk) 20:08, 1 January 2019 (UTC)
Yeah I know. It will always exist if there is a ref, regardless of the existence of <references/> or its equiv. -- GreenC 20:16, 1 January 2019 (UTC)
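
One way to run that HTML-based check is via the parse API; a sketch (the loose string match is an assumption about the exact rendered markup):

import requests

def page_renders_references(title):
    """True if the rendered page contains an auto- or template-generated reference list."""
    response = requests.get("https://en.wikipedia.org/w/api.php", params={
        "action": "parse", "page": title, "prop": "text",
        "format": "json", "formatversion": "2",
    })
    html = response.json()["parse"]["text"]
    return 'class="references"' in html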

I ran a script. In 2000 Talk pages it found 11 cases:

Extrapolated, that would be about 29,000 pages like this. -- GreenC 19:43, 1 January 2019 (UTC)

  BRFA filed -- GreenC 20:02, 1 January 2019 (UTC)

WikiProject Soil Tagging

The request is to have {{WikiProject Soil}} added to the article talk pages in 39 categories. Project notification posted. Much appreciated:

requested: -- Paleorthid (talk) 23:10, 6 January 2019 (UTC)

@Paleorthid:   Doing... --DannyS712 (talk) 02:31, 8 January 2019 (UTC)
@Paleorthid: See   BRFA filed --DannyS712 (talk) 02:35, 8 January 2019 (UTC) (change to template 01:23, 10 January 2019 (UTC))

Auto-archive IP warnings

I imagine it's fairly confusing for IP users to have to scroll through lots of old warnings from previous users of their IP before getting to their actual message. We have Template:Old IP warnings top (and its partner), but it's rarely used. Thoughts on writing a bot to automatically apply it to everything more than a year or so old? Gaelan 💬✏️ 16:21, 10 January 2019 (UTC)
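
A partial sketch of the detection half of such a bot; the signature-timestamp format and the one-year cut-off are assumptions, and actually wrapping the stale sections in {{Old IP warnings top}}/{{Old IP warnings bottom}} would be the follow-up step.

import re
from datetime import datetime, timedelta, timezone

SIGNATURE = re.compile(r"(\d{2}:\d{2}), (\d{1,2} \w+ \d{4}) \(UTC\)")

def newest_signature(text):
    """Latest signature timestamp found in a section, or None if there is none."""
    stamps = []
    for time_part, date_part in SIGNATURE.findall(text):
        try:
            stamps.append(datetime.strptime(f"{date_part} {time_part}", "%d %B %Y %H:%M")
                          .replace(tzinfo=timezone.utc))
        except ValueError:
            pass
    return max(stamps, default=None)

def section_is_stale(section_text, cutoff_days=365):
    latest = newest_signature(section_text)
    return latest is not None and datetime.now(timezone.utc) - latest > timedelta(days=cutoff_days)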

Technically feasible and is a good idea, IMO. Needs wider community input beyond BOTREQ. -- GreenC 17:09, 10 January 2019 (UTC)
Brought it to WP:VPR. Gaelan 💬✏️ 19:50, 11 January 2019 (UTC)

Short descriptions: find & replace

From WP:WikiProject Short descriptions#Which articles have a short description on Wikipedia?:

... about 400 are using the SHORTDESC magic word. These should be converted to the standard {{Short description}} template for ease of maintenance.

The task in question consists of finding each article containing

{{SHORTDESC:<xyz>}}

and replacing this code with

{{Short description|<xyz>}}

There are in fact 327 of these at the moment. Would any bot operator like to undertake this? With thanks: Bhunacat10 (talk), 00:03, 12 January 2019 (UTC)
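
The core replacement is a one-line regex substitution; a naive sketch follows (it ignores occurrences wrapped in <nowiki> tags or comments, a caveat GreenC raises later in this thread):

import re

def convert_shortdesc(wikitext):
    """Rewrite {{SHORTDESC:...}} as {{Short description|...}}."""
    return re.sub(r"\{\{\s*SHORTDESC\s*:\s*(.*?)\s*\}\}",
                  r"{{Short description|\1}}",
                  wikitext, flags=re.S)

print(convert_shortdesc("{{SHORTDESC:British mathematician}}"))
# {{Short description|British mathematician}}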

This is an easy search-replace task, but it should be automated IMO, once a month or something. I don't mind adding a cron job on Toolforge unless there is a better idea, or more logical place to do so with an existing tool. -- GreenC 00:20, 12 January 2019 (UTC)
@GreenC and Bhunacat10: I'd like to take a crack at it with awb once my current bot request is processed. Would that be okay? --DannyS712 (talk) 02:15, 12 January 2019 (UTC)
Why not use your Python skills and set up a cron job on Toolforge so it runs forever? -- GreenC 03:04, 12 January 2019 (UTC)
@GreenC: I don't know how to use toolforge or what a cronjob is. For now I would use AWB on a ~weekly basis (if approved), and would then devote the time to learning toolforge? --DannyS712 (talk) 04:47, 12 January 2019 (UTC)
@DannyS712: A cron job is a computer task executed automatically at a set time, usually at regular intervals (which may range from once per minute up to once per year). This is useful for running periodic maintenance tasks or generating reports, particularly if each edition of the report needs to cover exactly the same time period as the previous ones (a business might use a cron job to start off a daily sales report each night at 00:01, or a weekly report every Sunday at 18:00, etc.). In contrast to tasks initiated by a logged-in user (who would need to log in, set the task off, wait for it to complete, and log off again), they're instead run by something behind the scenes, known as "cron", so that the user who wants the job done can go home, and arrive the next morning knowing that it will have been done for them.
WP:Toolforge is the name given to some of the Wikimedia servers that are dedicated to running maintenance tasks and the like. --Redrose64 🌹 (talk) 16:18, 12 January 2019 (UTC)
@Redrose64: Could I do it as a user initiated task while separately figuring out how to do it with toolforge? --DannyS712 (talk) 17:06, 12 January 2019 (UTC)
You might want to use the 327 as test data at BRFA. The nature of Wikipedia is that the data holds unexpected surprises, and the more test data you have to work with when developing a bot, the better. For example, a bot would need to ignore cases involving <nowiki>, <!-- comments --> and <pre> tags (like in this post). That's just off the top of my head. -- GreenC 20:26, 12 January 2019 (UTC)
@GreenC: At some point I'll learn GitHub and try to figure out how to use Toolforge, but for now I don't have the time. I'd like to do a bot run with AWB for this task, but if someone else wants to make a tool that does this automatically, then fine. --DannyS712 (talk) 23:22, 12 January 2019 (UTC)
OK, I will do it then. It would help me in creating the bot to have the dataset available to learn and test from, not previously fixed by an AWB regex search-replace. If AWB is the kind of work you seek, try Wikipedia:AutoWikiBrowser/Tasks - it is the AWB equivalent of BOTREQ. There are unresolved AWB requests in the archives of that board. Definitely try Toolforge, which provides a Unix shell account. GitHub is not required, though they recommend it eventually. -- GreenC 00:32, 13 January 2019 (UTC)

The USA isn't in Asia

WP:NRHP maintains lists of historic sites throughout the USA, using a template that (among other things) displays each site's geocoordinates. Problem is, occasionally someone omits the minus sign, leaving a site in the wrong part of the world; in this old revision of National Register of Historic Places listings in Maury County, Tennessee, the coords for Zion Presbyterian Church (|lon=87.145) placed it in western China.

Could someone run through all pages whose title begins with "National Register of Historic Places listings in" and log all of the entries with coordinates placing them in the Eastern or Southern Hemispheres? Please do not fix them at this point, since there are a few sites that really are in the Eastern Hemisphere (you'll find a couple at National Register of Historic Places listings in Aleutians West Census Area, Alaska, for example), and at least National Register of Historic Places listings in American Samoa has some Southern Hemisphere locations. Presumably the bot could create a page in its userspace noting each list with potential problems and mentioning the names of the sites on each list with the offending coords; a human could easily run through this list and remove false positives, like the Aleutians and American Samoa.

Thank you. Nyttend (talk) 21:07, 13 January 2019 (UTC)
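
A sketch of what the read-only report pass could look like; it assumes the rows use {{NRHP row}} with |lat= and |lon= as in the example above, and a |name= parameter for the site name, which may need adjusting to the template's real parameter set.

import mwparserfromhell

def suspicious_coordinates(list_wikitext):
    """Return (name, lat, lon) for entries placed in the Eastern or Southern Hemisphere."""
    hits = []
    code = mwparserfromhell.parse(list_wikitext)
    for row in code.filter_templates(
            matches=lambda t: str(t.name).strip().lower() == "nrhp row"):
        name = str(row.get("name").value).strip() if row.has("name") else "?"
        try:
            lat = float(str(row.get("lat").value)) if row.has("lat") else None
            lon = float(str(row.get("lon").value)) if row.has("lon") else None
        except ValueError:
            continue  # non-numeric coordinate values; skip rather than guess
        if (lat is not None and lat < 0) or (lon is not None and lon > 0):
            hits.append((name, lat, lon))
    return hits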

You could also try some regex searches like hastemplate:"NRHP row" insource:/lon *= *[0-9]/ and hastemplate:"NRHP row" insource:/lat *= *-/. PrimeHunter (talk) 23:06, 14 January 2019 (UTC)

Remove living=yes, etc from talkpage of articles listed at Wikipedia:Database reports/Potential biographies of dead people (3)

Hi bot people. I was wondering whether it might be appropriate/worthwhile/a good idea to get a bot to remove "living=yes", "living=y", "blp=yes", "blp=y", etc from the talkpages of the articles listed at Wikipedia:Database reports/Potential biographies of dead people (3). I recognize that automating such a process might result in a few errors, but I think that would be a reasonable tradeoff compared to how tedious it would be for humans to check and update all 968 articles in the list one by one. (And hopefully, for those few(?) articles where an error does occur, someone watching the article will fix it). I spot-checked a random sample of articles in the list, and for every one I checked, it would have been appropriate to remove the "living=yes", etc from the talkpage, i.e. the article had a sourced date of death. To minimize potential errors, I would suggest the bot skips any articles which cover multiple people, e.g. ones with "and" or "&" in the title and Dionne quintuplets, Clarke brothers, etc. Thoughts? DH85868993 (talk) 12:53, 15 January 2019 (UTC)
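
A sketch of the text change itself, removing the flags as requested above rather than setting them to "no"; a production bot would restrict this to known banner templates and would skip the multi-person articles first.

import re

LIVING_FLAG = re.compile(r"\|\s*(?:living|blp)\s*=\s*(?:yes|y)\s*(?=\||\}\})", flags=re.I)

def clear_living_flags(talk_wikitext):
    """Strip |living=yes / |blp=y (and variants) wherever they appear on the talk page."""
    return LIVING_FLAG.sub("", talk_wikitext)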

That last might not be easy to bot-automate. Though, if instead of a bot we get a script, it would be possible to quickly deal with any multiples before running it. Adam Cuerden (talk)Has about 6.3% of all FPs 13:03, 15 January 2019 (UTC)