Wikipedia talk:Notability (academics)

Miscellany for deletion This miscellaneous page was nominated for deletion on 7 February 2006. The result of the discussion was keep. An archived record of this discussion can be found here.
WikiProject Policy and Guidelines  (Defunct)
This page is within the scope of WikiProject Policy and Guidelines, a project which is currently considered to be defunct.
See WP:PROPOSAL for Wikipedia's procedural policy on the creation of new guidelines and policies. See how to contribute to Wikipedia guidance for recommendations regarding the creation and updating of policy and guideline pages.

This discussion was begun at Wikipedia:Votes for deletion/Nicholas J. Hopper, where the early history of the discussion can be found.


See Wikipedia:Notability (academics)/Precedents for a collection of related AfD debates and related information from the early and pre-history of this guideline (2005-2006), and Wikipedia:WikiProject_Deletion_sorting/Academics_and_educators/archive and Wikipedia:WikiProject_Deletion_sorting/Academics_and_educators/archive 2 for lists of all sorted deletions regarding academics since 2007.


Quantifiable metric for WP:NACADEMIC

Hello. Is it possible to have some sort of quantifiable metric that can be used for WP:PROF? My articles keep getting declined for unclear reasons. Some reviewers say I should not list publications, others say I should list a few, and still others say to list many more, or to list citations along with the papers. Some say having 5 edited books is notable, and others say something else. Some mentioned having 1 to 2 references in each paragraph of short sentences of 2-3 lines, even though all the information came from a single profile page of the university website.

Is it possible to have a set measure of the following (or some other criteria that you can propose) for full professors in scientific fields to be notable?

1. How many journal-publications are needed to be notable? (depends on the field but for medicine I would say >100?)

2. Above what impact factor should a journal be considered notable? (depends on the field but for medicine I would say >25?)

3. How many citations per publication are needed? (depends on the field but for medicine I would say >100?)

4. How many edited books are needed? (depends on the field but for medicine I would say >3?)

5. How many book chapters are needed? (depends on the field but for medicine I would say >3?)

6. How many conference presentations are needed (if relevant)?

7. How many years should the person have been teaching at a university?

8. How many subjects should the person be teaching at a university?

9. What should the minimum h-index be? (depends on the field but for medicine I would say >45?)

10. How many patents (if relevant)?

Thanks Earthianyogi (talk) 17:34, 23 July 2020 (UTC)

Most, but not all, of those points are irrelevant. In general, the ones that count are those about other people noticing the work, rather than about how hard the subject has worked. If we could come up with fixed numbers then we would have no need for deletion discussions, which would be great if we could do it, but it is well beyond the current state of artificial intelligence. Phil Bridger (talk) 18:52, 23 July 2020 (UTC)
When you say most are irrelevant, it would be worth picking out the relevant ones and discussing them further. The idea is to avoid too much discussion, if possible. It may be possible to achieve this for certain categories (if not all), for example, WP:NACADEMIC within medicine or engineering? Earthianyogi (talk) 18:59, 23 July 2020 (UTC)
No, this is all irrelevant. I understand your frustration but, as a new editor, you should have read the archived talk before asking a question in search of definitive answers you'll never get. Academia is too broad to have a single set of criteria; this has been discussed. Our goal writing the encyclopedia is gathering together enough source material upon which we base the article. We all agree that the news media will cover people who win Nobel Prizes, but after that it becomes uncertain. We don't use an arbitrary criterion like h-index because the historians that write about academics don't necessarily use h-index as a driver of topic selection. The reason why Wikipedia covers athletes so much more than academics is that fans create the market for sport media, which provide sources. Nobody cares about professors at universities. Chris Troutman (talk) 19:11, 23 July 2020 (UTC)
Sure, I am new to Wiki and have read a few talk pages but not the old archives. Of course, academia is broad, and I am not asking for a single set of criteria for all; I am just proposing categories according to the fields so that, for example, historians can have a separate threshold. I agree nobody cares about professors at universities, but they help shape the future of society. In one case, I also read that a person's profile on Wiki was rejected, and that person later got a Nobel Prize. Some also said it might be easy for others to game the system using a fixed metric, but if you keep the criteria high enough, it may be tricky. With time, things have evolved, and with such experienced editors on Wiki who have dedicated so many hours of their lives, it may be worth trying to come up with a metric. The idea is simple, but if you want to ignore it because I am new to Wiki, it is fine.
"Our goal writing the encyclopedia is gathering together enough source material upon which we base the article" - that is precisely the problem, I feel. Earthianyogi (talk) 19:29, 23 July 2020 (UTC)
How would we write an article without sources? The sources tell us what we're writing. We're not rewarding entities that garner coverage. We need facts so we can write an article. How could we write about a person if we don't have an independent source that tells us about them? Maybe you have a conflict of interest here. Chris Troutman (talk) 20:06, 23 July 2020 (UTC)

I have no COI. Please do not rush to any conclusion too soon. I have not been paid to write any article on Wiki, which many reviewers assumed previously, maybe because they are paid to do so. I saw your COI in a few articles. I think there was some misunderstanding, as I agree that citations are necessary, but defining how many are enough is a problem. Earthianyogi (talk) 20:13, 23 July 2020 (UTC)

In my view, PROF works a bit differently than GNG. I see no problem with a substub that meets PROF #2, 3, 5, 6, 8, because at the very least the article can instantly make it clear why the subject is important. PROF #1, 4, 7 are more coverage-based, but because a lot of an academic's notability derives from their work rather than their biography, it is OK to devote a large portion of text to summarizing their work and have a smaller section on others' commentary on it, as opposed to a politician or sportsperson where almost all the content will be based on reporters talking about things the person did. -- King of ♥ 21:27, 23 July 2020 (UTC)
Unfortunately this means quantifying what is enough is going to be harder. For GNG the definition of "significant coverage" is not a fixed number of sources or words but rather "whatever it takes to write a reasonably sized article on the subject". For PROF we can always write a summary of an academic's work using their own papers, so the test is purely "should we have an article" rather than the "can we have an article" used for most other subjects. -- King of ♥ 21:31, 23 July 2020 (UTC)
I, too, am skeptical that we could provide the kinds of concrete numbers for which you are asking. This is an incredibly challenging task even for experts who work at colleges and universities and evaluate faculty members and researchers as part of their full-time job. There are just too many differences across academic disciplines for there to be agreement on these quantitative metrics. I am not an expert on promotion and tenure standards - my expertise is in U.S. higher education as a discipline but not this specific topic - but my sense is that colleges and universities typically leave it to individual programs and departments to try to come up with these kinds of metrics that can be applied in annual appraisal as well as promotion and tenure decisions. But once decisions have to be made outside of those units, especially in tenure and promotion cases where many different committees and individuals vote on each case, these metrics are not applied as there is simply way too much variation between disciplines (and often within disciplines, too) for this to be practical.
For example, there is not even agreement between disciplines about what kinds of publications are most meaningful. Many humanities disciplines place a lot of value on single-author books ("monographs"). Many science and social science disciplines place a lot of value on journal articles and peer-reviewed books. And other disciplines, particularly engineering and computer science, place a lot of value on conference papers and patents. So the most basic question of "How many publications and of what kind?" has many different answers.
It might also be instructive to look at the efforts that have tried to rigorously answer some of those questions. There are some ranking systems and databases, some of which are proprietary (e.g., Academic Analytics), that apply algorithms to databases to rank faculty members or institutions. They're very contentious with widespread accusations and suspicion of faulty algorithms, woefully incomplete databases, and broken epistemological foundations.
It would be very nice if we had clear answers to your questions. But if the experts who have dedicated their lives to this and are paid to do this full-time can't answer these questions then I'm extremely skeptical that a small group of volunteers, many of whom are likely (very well-meaning, intelligent, informed, and hard-working!) amateurs, can answer them. We stumble along as best we can with fuzzy standards that are sometimes unsatisfactory and ill-applied. And we often reflect the biases and shortcomings of the broader cultures and histories in which we are situated.
Please don't think that they're bad questions! We need to be prodded and encouraged to improve our standards and our practices. And we need to be self-reflective and critical of our standards and practices. So please keep asking questions, even if we can't answer them. ElKevbo (talk) 21:33, 23 July 2020 (UTC)
Thanks for the encouraging answers. I think Wiki works in a completely different way than I thought until 5 minutes ago. I have just been made to realise that citations are not important to satisfy criterion 1 of WP:PROF for notability. See the following:

-https://en.wikipedia.org/wiki/Wikipedia:Teahouse#Draft_talk%3AKawal_Rhode

-https://en.wikipedia.org/wiki/User_talk:Earthianyogi#Your_articles_on_academics

-https://en.wikipedia.org/wiki/Draft_talk:Kawal_Rhode

Thanks Earthianyogi (talk) 22:00, 23 July 2020 (UTC)

Don't believe what Troutman says about academic notability. Troutman's claim on draft talk that we cannot use citation counts and instead must base WP:PROF#C1 notability purely on independent sources telling us that the research is of high impact is very far from how WP:PROF#C1 works in practice. Troutman appears to prefer either eliminating our academic notability standards altogether in favor of GNG or (as in this discussion) pretending they are based on GNG when they are not, and as a result tilting the balance towards only having articles on celebrity publicity-hound academics. Troutman's "Nobody cares about professors" above may be projection, but it is telling. Kawal Rhode clearly passes WP:PROF#C1, purely based on the high citation counts in his Google Scholar profile and calibration for his field of research (noting that it is a field where journals and citations are more important than books and book reviews, and where high citation counts should be expected, but nevertheless observing that his citation counts are high). Most full professors in the UK system would also likely pass. —David Eppstein (talk) 22:21, 23 July 2020 (UTC)
Thank you, David Eppstein, that is what I thought: a bit of a mix-up between WP:GNG and WP:PROF. However, it is surprising that many others agree with him. I have been told by "David notMD" that all the articles I have created could potentially be deleted, as they do not show notability, even though I think that they cover more than one criterion of WP:PROF. I am not sure what to do? Should I continue to contribute to Wikipedia or stop creating more articles? Also see this: https://en.wikipedia.org/wiki/User_talk:Earthianyogi#Your_articles_on_academics . Earthianyogi (talk) 22:28, 23 July 2020 (UTC)
Citation counts are an objective measure of the impact that the work of a scholar has had on the scholarly community. The number of citations needed to establish notability varies from subject to subject and is determined by consensus. Typically a thousand are required, compared to the handful of sources required to meet WP:GNG. Xxanthippe (talk) 22:39, 23 July 2020 (UTC).
Thanks for your response. Do you mean a thousand citations for one paper, or for all the papers of the subject in question combined? Where does this number come from, and why do we not use the h-index? Of course, we can have other criteria for other fields, such as history. Earthianyogi (talk) 22:57, 23 July 2020 (UTC)
All citations combined. H-index or total sum of citations are strongly correlated with each other, but the important thing to keep in mind about either of these measures of citation (or the variant I more commonly consider for these sorts of evaluations, number of publications with >100 citations) is that different fields will have different typical numbers, so the evaluation should be calibrated for that. Which is problematic, because we don't all know the norms for all fields or even what the boundaries between different fields with different norms are. It's also important to pay attention to authorship of the highly-cited works because for instance a paper with 10,000 cites and 200 authors should probably count less towards notability than a paper with 1000 cites and 2 authors. So at the end of the day it's more subjective than we'd like, just as GNG-based evaluations that hinge on how routine, local, or independent certain coverage is can be more subjective than the GNG-proponents pretend. Despite a certain level of subjectivity I think this is all usually better than evaluating people by how effective their employer's publicity department is. —David Eppstein (talk) 23:24, 23 July 2020 (UTC)
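(A minimal sketch in Python of the kind of calibration described just above; the >100 threshold comes from the comment, but the per-author discount below is only one illustrative way to implement the point about many-author papers, an assumption of mine rather than anything from the guideline.)

    def highly_cited_count(papers, threshold=100):
        # Count papers with more than `threshold` citations (the ">100 citations" measure above).
        return sum(1 for citations, _authors in papers if citations > threshold)

    def author_discounted_citations(papers):
        # One possible discount: weight each paper's citations by 1 / number of authors,
        # so a 10,000-citation paper with 200 authors counts for less than a
        # 1,000-citation paper with 2 authors.
        return sum(citations / authors for citations, authors in papers)

    # Hypothetical records as (citations, number of authors).
    small_team = [(1000, 2), (400, 3)]
    big_collaboration = [(10000, 200), (500, 150)]
    print(highly_cited_count(small_team), highly_cited_count(big_collaboration))  # 2 2
    print(author_discounted_citations(small_team), author_discounted_citations(big_collaboration))  # ~633 vs ~53

Both records clear the raw ">100 citations" bar twice, but the author-discounted totals differ by an order of magnitude, which is the calibration problem being described.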
Thank you, I had almost lost hope in citations! A total of 1,000 is doable for each professor. I want to add that "David notMD" has advised me that he may nominate all articles written by me for deletion, as they are weak on the notability criteria. I feel that there are thousands of articles on Wikipedia that I can find that do not meet the notability criteria: lists of institutes, one-line articles about artists, and many other nonsense articles. Is it possible for me to nominate these as part of a cleaning drive? How can we do that? Earthianyogi (talk) 23:30, 23 July 2020 (UTC)
Hello,

I agree, David, but we have to accept that we will never be perfect. Some degree of uncertainty will prevail and will be covered by exceptions, no matter what method we use.

Also, is there a Wiki authority who can help set these standards, or have all aspects of Wiki, including preparing new guidelines, been left to community consensus?

Is any weight given to whether editors/reviewers have a COI in other articles, have accepted payments for creating articles on Wikipedia, have created only a handful of articles, or have never had a COI? I assume that editors/reviewers who do not have a COI may be stepping (knowingly or not) on the toes of those who may have a COI, and we may never be able to come to a consensus.

Earthianyogi (talk) 23:44, 23 July 2020 (UTC)

These matters have been debated over the years in the archives of this talk page. Xxanthippe (talk) 00:34, 24 July 2020 (UTC).
There are no special authorities here when it comes to establishing or changing policies and practices. A handful of volunteers have been elected to positions of trust where they can block editors, protect articles, and perform other technical tasks. But they don't have any more authority in creating or changing anything; they just help ensure things run smoothly.
I don't quite understand your question about COIs. Editors who edit with undisclosed COIs are in violation of our policies, especially if they're being paid to promote the subject(s). Editors with a COI who abide by our policies are held in the same regard as other editors; we all have COIs, after all, it's just a question of whether we manage them appropriately. ElKevbo (talk) 00:31, 24 July 2020 (UTC)


  • As I have argued long ago, any full professor in a major research university is intrinsically notable; they have been judged to be so by more competent people than us. The essence of the present guideline amounts to the same thing: having a significant influence on the field. How this is measured depends on the field, the country, and the time period. For convenience, this discussion needs to be limited to the period of modern state-subsidized research (1950-present), in Western Europe, the United States and other countries with a similarly well-developed, measurable system accessible to us.

In the sciences, influence is obtained in only one way: by publishing peer-reviewed papers (or, in some fields like engineering, peer-reviewed conference papers, and other special cases). One does not obtain influence by publishing any number of mediocre papers, which is what the h-factor is supposed to measure. Consider two people:

A, with publications with citation counts 500, 400, 300, 200, 150, 50, 40, 30, 25, 20, 15, 15, 14, 14, 13, 12
B, with publications with citation counts 30, 29, 25, 22, 21, 20, 19, 18, 18, 17, 17, 16, 15, 14, 13, 12

The h-factor of each of them is 14. The total number of papers each has published is 16. Only one of them is notable. The actual numbers that are relevant depend upon the publication and citation density in the field. The more papers people write, the more they will have to cite. The custom of the field determines how many of the possibly relevant papers a person actually does cite. I could write a very long essay explaining the factors that go into this, and books have indeed been written on it, but basically it is only important work that is highly noticed by one's peers that makes someone notable in science. Other questions brought up here are interesting, but secondary. I suggest a look at some of my archived discussions at User talk:DGG/Archive 0.5 -- my talk page archive for academic things and people -- for a discussion of some of them. DGG ( talk ) 00:47, 24 July 2020 (UTC).
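(A minimal sketch in Python, just to make the example above concrete; the function is one straightforward reading of the usual h-index definition, not anything prescribed by this guideline.)

    def h_index(citations):
        # Largest h such that there are h papers with at least h citations each.
        ranked = sorted(citations, reverse=True)
        h = 0
        for position, cites in enumerate(ranked, start=1):
            if cites >= position:
                h = position
            else:
                break
        return h

    # The two publication records quoted above.
    a = [500, 400, 300, 200, 150, 50, 40, 30, 25, 20, 15, 15, 14, 14, 13, 12]
    b = [30, 29, 25, 22, 21, 20, 19, 18, 18, 17, 17, 16, 15, 14, 13, 12]
    print(h_index(a), h_index(b))  # both come out as 14, despite very different top papers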

What may be worth considering is an entirely different and less subjective metric. On Polish Wikipedia the rule is simple: anyone who has a habilitation is automatically notable, regardless of their citation count or such. Given that we are super inclusive for sports people and celebrities (IIRC the statistics say that something like 30-50% of Wikipedia biographies are sports people, don't they?), I think there is a systemic bias against academics (because they don't get much coverage). I think we should lower our requirements of notability for academics, and say that anyone with an associate professorship and/or habilitation and/or equivalents is notable. WP:NOTPAPER, and we need to counteract the bias favoring the sports people; I have tried to tighten the notability policies for them but it is impossible, die-hard fans will always prevent that. So the only solution is to lower the notability criteria for other 'important' people. --Piotr Konieczny aka Prokonsul Piotrus| reply here 05:37, 24 July 2020 (UTC)

A habilitation essentially is the equivalent of having completed a post-doctoral fellowship, isn't it? DGG ( talk ) 00:25, 25 July 2020 (UTC)
Hmmm, as I haven't done either I am not sure. From a very simple perspective, habilitation always seemed to me like a 'second PhD', as far as the amount of work and the final result go, though I think it is much more common in Europe than getting a second PhD is in countries with no habilitation requirement. Ping User:Pundit? --Piotr Konieczny aka Prokonsul Piotrus| reply here 03:57, 26 July 2020 (UTC)
Habilitation DEFINITELY is not equivalent to a post-doctoral fellowship. A postdoc is done after a Ph.D., typically for 1-3 years. Habilitation is typically done 6-9 years after a Ph.D.; it requires a substantial publication record (typically a book plus 5-6 articles in prime journals, but this depends largely on the field), and the closest equivalent is tenure review / promotion to associate professor. In my experience the resemblance is quite close. Pundit|utter 21:15, 27 July 2020 (UTC)


  • About books: to meet WP:AUTHOR usually takes at least two successful books -- successful in the sense of having substantial critical attention from reliable sources which have coverage. In those fields of the academic world, such as the humanities or history, where books count much more than journals, the criteria depend to a considerable extent on the publisher: basically it has to be one of the major academic publishers, which comprise the university presses plus the very few other publishers that deal with serious academic books. With editorship, what counts depends on the degree of involvement; if the editor merely collects articles, the degree of involvement can be very little. It also depends upon the field: the major editors of the most important textbooks in law and medicine can be notable on that basis. Pamphlets are not books, even though published by a university press. Government documents are usually not books -- but there are exceptions and the individual titles have to be looked at. DGG ( talk ) 00:19, 25 July 2020 (UTC)


Thank you for all your replies. I have recently been told that all the academic profiles I have created on Wiki may be up for deletion (please see my user page and talk page),

  • David notMD: Many/most of the articles you have created about academics suffer the same weaknesses at the one for Rhodes. Even though most have been accepted, in my opinion they do not confirm notability, and if I was in a mean mood I would nominate all of them for deletion. How much a professor was awarded in grants, how many grad student degrees they oversaw, their articles being cited - none of that conveys notability. Academics doing what is expected of academics is not Wikipedia notability.


despite these guidelines:

  • “Many scientists, researchers, philosophers and other scholars (collectively referred to as "academics" for convenience) are notably influential in the world of ideas without their biographies being the subject of secondary sources.“
  • “Some academics may not meet any of these criteria, but may still be notable for their academic work. It is very difficult to make clear requirements in terms of number/quality of publications. The criteria, in practice, vary greatly by field and are determined by precedent and consensus. Also, this guideline sets the bar fairly low, which is natural; to a degree, academics live in the public arena, trying to influence others with their ideas. It is natural that successful ones should be considered notable.”

I have read discussion archives (though not all of them, I confess) that had many ideas with great potential. I noticed two schools of thought on Wiki notability.

Briefly, one idea (WP:GNG) considers a subject notable when the subject is significantly covered (not just name mentions) by others in the media, provided that the sources are considered independent and reliable. This criterion may be biased towards professions in the entertainment business, sports and the high echelons of power. Some of the contributors who follow this "school of thought" do not consider a subject's authored publications/books and/or the respective number of citations as the primary criteria for Wiki notability. Others consider criterion 1 of WP:NACADEMIC to be subjective and ask how many citations are needed to be notable despite the guidelines, as mentioned above.

Conversely, the other idea (WP:NACADEMIC) considers a subject notable on the basis of the citations of their work (criterion 1) by other academics. Since research papers are task-oriented (not people-oriented), the researcher's name only gets mentioned once (at the top). This criterion is slanted towards academics to some extent.

Are any of these ideas purely unbiased? Arguably, no. This may be the main reason why two separate guidelines, WP:GNG and WP:NACADEMIC, exist based on community consensus. I am not going to question these fundamental ideas, as a community consensus has been reached. I will only try to focus on criterion 1 of WP:NACADEMIC, which may offer a way to quantify the citation criterion, while recognising that citation criteria may vary according to field/area of expertise.

However, I feel that it may be possible to define a quantitative metric by focusing, out of the many factors involved, only on the most relevant ones within the context of Wiki. The idea is similar to that used in biology, where mathematical models focus on a handful of essential parameters affecting a process (out of 15-30 parameters) to simplify the problem, accepting a degree of associated limitation. For simplicity, I am going to ignore all the other factors, like country, race, gender, background, area of research, era of research, impact, etc., for two main reasons. First, when an academic article is deleted, these items are not taken into consideration, so why should they be considered for article creation? Second, I am not trying to question the guidelines that already exist; I only wish to add some quantitative metrics in the hope that this will lead to less bias.

I feel that two main problems arise when trying to address criterion 1 of WP:NACADEMIC. I will try to address each of these points one by one.


1. How many citations of an article are considered notable? I found a list of the most cited articles in the world. The article in 1st place has 305,000 citations (Google shows 218,578 citations). The article in 10th place has 40,289 citations. This shows a sharp decline in the citation numbers among the most cited papers of all time. https://www.genscript.com/top-100-most-cited-publications.html

Maybe it is possible to form a list with the mean/median citations of the top 100 papers in various fields to get the citation threshold for the notability criteria for each area/field. A short list is presented below.

Based on this and other suggestions, would it be worth saying that an academic with 1,000 citations based on 3 of their most-cited papers can be considered notable for academics in SCIENCE (these numbers may be different for engineering, history, etc.)?


2. How do we transform/extend this notable "citation number", taking into account first/second/third authorship, the number of co-authors, or the number of years since the paper was published? Do we need a separate metric for each? Another way to look at notability is to modify the number of citations based on the number of authors and the number of years since publication. So how do we calculate this number for each author?

a. Take the 3 most-cited papers of an author.

b. Divide the citations for each paper by the number of years since it was published.

c. Add the three values from the previous step and divide by the number of unique authors across these 3 papers to get the Wiki Notability Score (WNS).

For example, consider the three most-cited papers of Oliver H. Lowry (who has the highest citation count in the world for one of his papers):

Paper (Google Scholar) | Authors | Citations | Year | Years to 2020 | Citations/year
Protein Measurement with Folin Phenol Reagent | OH Lowry, NJ Rosebrough, AL Farr, RJ Randall | 222,940 | 1951 | 2020-1951=69 | 222,940/69=3,231
Estimation of proteins by Folin phenol reagent | OH Lowry, NJ Rosebrough, AL Farr, RJ Randall | 15,945 | 1951 | 2020-1951=69 | 15,945/69=231
A flexible system of enzymatic analysis | Oliver H Lowry, Janet V Passonneau | 4,204 | 1972 | 2020-1971=49 | 4,204/49=86
(Unique authors = 5) | | | | | Total = 3,548

Now divide: 3,548/5 = 709, which is the WNS for Oliver H. Lowry. This is the highest WNS that any author can have today. It assumes all authors of the 3 papers contributed equally.
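(A minimal sketch in Python of steps a-c above, using the Lowry figures from the table; my own illustration of the proposal, not an agreed metric. The small difference from the hand calculation of 709 comes from rounding the intermediate values.)

    def wiki_notability_score(papers, unique_authors, reference_year=2020):
        # WNS = sum over the 3 most-cited papers of (citations / years since publication),
        # divided by the number of distinct authors across those papers.
        top_three = sorted(papers, key=lambda p: p[0], reverse=True)[:3]
        citations_per_year = [cites / (reference_year - year) for cites, year in top_three]
        return sum(citations_per_year) / unique_authors

    # (citations, publication year) for Lowry's three most-cited papers, 5 unique authors.
    lowry = [(222940, 1951), (15945, 1951), (4204, 1972)]
    print(wiki_notability_score(lowry, unique_authors=5))  # roughly 710, versus 709 by hand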


I repeated the procedure for Toby J. Gibson, who is an author of the 10th most-cited paper, "Clustal W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice."

Paper (Google Scholar) | Authors | Citations | Year | Years to 2020 | Citations/year
CLUSTAL W: improving the sensitivity of progressive multiple sequence alignment through sequence weighting, position-specific gap penalties and weight matrix choice | JD Thompson, DG Higgins, TJ Gibson | 62,536 | 1994 | 2020-1994=26 | 62,536/26=2,405
The CLUSTAL_X windows interface: flexible strategies for multiple sequence alignment aided by quality analysis tools | JD Thompson, TJ Gibson, F Plewniak, F Jeanmougin, DG Higgins | 41,980 | 1997 | 2020-1997=23 | 41,980/23=1,825
Clustal W and Clustal X version 2.0 | Mark A Larkin, Gordon Blackshields, Nigel P Brown, R Chenna, Paul A McGettigan, Hamish McWilliam, Franck Valentin, Iain M Wallace, Andreas Wilm, Rodrigo Lopez, Julie Dawn Thompson, Toby J Gibson, Desmond G Higgins | 25,471 | 2007 | 2020-2007=13 | 25,471/13=1,959
(Unique authors = 13) | | | | | Total = 4,024

Now divide: 4,024/13 = 323, which is the WNS for Toby J. Gibson. It assumes all authors of these papers contributed equally.


Based on the methods described in point 1 or point 2, can we come up with a threshold value of WNS for an academic to be notable? Would these numbers be different for engineering, history, etc.? Please take the above with a pinch of salt, as it has many limitations, just like any other method, but it can be taken as a first step in the quantitative direction. So, as an academic, what is your Wiki Notability Score (WNS)?

Thank you Earthianyogi (talk) 20:56, 25 July 2020 (UTC)

"Science" is far too broad a set of areas to have a single numeric rule for citation counts that can be accurate for all of it. I don't think there is any getting around the need for both some amount of subjectivity and some amount of subject-specific expertise in these judgements. —David Eppstein (talk) 21:14, 25 July 2020 (UTC)
It has long been accepted (read the archives of this page) that citation patterns vary significantly from subject to subject. The practice is to compare like with like, i.e. physicists with physicists, philosophers with philosophers, but never physicists with philosophers. There are also differences within the sub-fields of both physics and philosophy, so, as with all editing of Wikipedia, knowledge and experience of the topic are helpful for making useful edits. Xxanthippe (talk) 22:25, 25 July 2020 (UTC).
I think what has long been accepted is that citation counting is completely useless in arts subjects, and varies significantly from subject to subject in the sciences and social sciences. Botanists should never be compared with physiologists for example. So the answer to Earthianyogi's question "can we come up with a threshold value of WNS for an academic to be notable?" is no, we can't. Johnbod (talk) 00:05, 26 July 2020 (UTC)
What is your point? Both botany and physiology are sciences. Xxanthippe (talk) 00:17, 26 July 2020 (UTC).
EXACTLY - in fact both branches of biology; "never physicists with philosophers" was a BAD example, because citation counts are no use at all in philosophy, but botanists and physiologists is a GOOD one. All clear now I hope? Johnbod (talk)
The claim that citations cannot be used for philosophers is made without evidence. Philosophers like Isaiah Berlin[1], A. J. Ayer[2], Peter Singer [3], [4], Roger Scruton [5] show stunning citation data which effortlessly surpass any notability criterion. Xxanthippe (talk) 03:36, 26 July 2020 (UTC).
All of these have/had very extensive and successful careers as public intellectuals, which most philosophers don't. Singer has been much involved in "applied philosophy" and gets cited a lot by medics etc. Again, untypical. David Eppstein below is right. I expect much of the difference between the groups he describes relates to sub-fields of the subject. Johnbod (talk) 15:35, 26 July 2020 (UTC)
The citations that I have listed above are from Google Scholar, generally from reputable academic sources. These philosophers do not get their citations from tabloids or social media. Similar values can be found on other citation databases like Scopus or Publons. Xxanthippe (talk) 00:41, 27 July 2020 (UTC).
and your point is? If you've ever looked at one, you'll find tabloids don't cover public intellectuals very thoroughly. Johnbod (talk) 02:58, 27 July 2020 (UTC)
My personal impression is that the standards for excellence in philosophy are mysterious and a little cryptic. Some philosophers write journal articles with lots of citations. Some philosophers write well-reviewed books. But some other philosophers do neither of those things and nevertheless get described as being top philosophers by other philosophers. I don't understand why. —David Eppstein (talk) 03:54, 26 July 2020 (UTC)
If they are little noted they are not notable by Wikipedia standards. Xxanthippe (talk) 04:09, 26 July 2020 (UTC).
If the other philosophers write strong enough letters of recommendation for them they can obtain distinguished professorships and be notable by WP:PROF#C5. —David Eppstein (talk) 04:21, 26 July 2020 (UTC)
That's fine then. Xxanthippe (talk) 04:54, 26 July 2020 (UTC).

I mean a WNS for botany, a different WNS for physiology, another WNS for dentistry, and so on within the various areas of science (sorry, it was not clear in point 2). I agree 100%: compare physicists with physicists, philosophers with philosophers (I thought that was undeniable, but maybe not).

The idea is not to contradict, oppose or cut down any existing guidelines but to support WP:PROF#C1. All current exceptions, all the other criteria (1-8), and subject-specific expertise in these judgements would still hold. The academics who are extensively cited will always remain; the ones who are not well cited could fall into the existing exception categories; therefore, this metric will only support WP:PROF#C1. The arts are a well-known exception and will be dealt with in the way they are currently.

Indeed, it may be best to leave criterion 1 vague, as it works in favour of academics and would help keep the bar low.

All the best. Earthianyogi (talk) 08:37, 26 July 2020 (UTC)

The "WNS" proposal is quantitative, but it is not any less subjective than what we already do. For example, the procedure says, Take the 3 most cited papers of an author. Why 3? Why not 4 or 5 — just because 3 "sounds like enough"? In some fields, a scientist's top-cited papers are likely to be reports by massive collaborations in which they participated. Getting a thousand citations because you were working at CERN when they found the Higgs means something different than getting a thousand citations because you and a couple colleagues introduced a new idea. Why assume that all authors contribute equally? I can tell you straightaway that's not rooted in fact. Trying to invent a metric is just pretending that subjectivity can be eliminated. XOR'easter (talk) 18:14, 26 July 2020 (UTC)
Quite. All that this proposal would do is move the argument from concrete examples of academics whose notability is in question to arguments about general principles. The latter will never be resolved, because there are editors who think that it's more important to have sources about the subjects' favourite foods or their sexual partners than sources about their work. The way that Wikipedia has been so successful is that we get on with writing articles about specific subjects rather than trying to settle such differences, which will never be settled. Phil Bridger (talk) 18:28, 26 July 2020 (UTC)

I think there is some misunderstanding. I am not trying to eliminate all subjectivity; it is impractical to believe that any method can ever do that. The CERN/Higgs example mentioned above is just an exception, which would be covered within the current guidelines. This metric will hopefully address comments like these:

  • “Wikipedia hasn't determined if 300 is high impact. We don't have objective numbers. Maybe that's a lot; maybe it's not. Maybe it varies by field. I don't know. Ultimately, you think the subject is notable and you refuse to admit that N:PROF doesn't support your claim.”

Maybe it is not worth it, but it should not be rejected because of a misunderstanding. As I said earlier, it may be best to leave criterion 1 vague, as it works in favour of academics and would help keep the bar low, and the differences can never be settled. Earthianyogi (talk) 10:39, 27 July 2020 (UTC)

Once again a discussion that completely ignores humanists, academic artists, and social scientists. There's no way to implement this until there's at least an attempt to cover academia, not just science. So, of course, oppose. -- Michael Scott Cuthbert (talk) 22:37, 29 August 2020 (UTC)
The procedure is to compare like with like: philosophers with philosophers and physicists with physicists, but never philosophers with physicists. So there is no problem dealing with any academic discipline; one makes a comparison with peers. Xxanthippe (talk) 23:00, 29 August 2020 (UTC).
  • Comment/oppose. I must admit that as time passes I become less and less fond of using purely quantitative metrics like citation counts, h-index and the like. I would prefer that we relied on them less and relied more on the sources that actually discuss the significance of the subject's work in detail. I do think that, to the extent that we use some quantitative measures, we should stick to the ones that have been used in the literature, such as citation counts, h-index, g-index, etc., and not try to invent a completely new measure here such as this WNS. Creating anything like this WNS would be a necessarily ad-hoc endeavor with a huge amount of arbitrariness and unknown unintended consequences. As others have noted, there is a huge amount of variation between fields of study, and even within specific broad disciplines, in terms of the speed of publication, citation rates and publication practices (journal articles vs conference proceedings vs books). IMO, the only reasonable approach is to have a case-specific discussion of what the various numbers mean for each academic, rather than to try to capture everybody's level of notability with a single number. Nsk92 (talk) 23:25, 29 August 2020 (UTC)
  • Oppose and close this discussion. We're not going to invent our own metric for measuring and comparing citations, especially one with no apparent basis in the available literature. ElKevbo (talk) 00:01, 30 August 2020 (UTC)

Question: Notability

Hello. If the author's (Sanjay Chaudhary) research papers have been cited in a number of other research works, can we consider him 'notable'? If the author is an editor of a book published by a reputed scholarly publisher, can we consider him 'notable'? Thanks. --Gazal world (talk) 16:16, 13 September 2020 (UTC)

I do not think so. All established academics publish research papers, but that does not make them notable. I think a significant number is required. Similarly just one book that he edited, particularly as he is one of three editors, does not make him notable. --Bduke (talk) 22:40, 13 September 2020 (UTC)
I see. Thank you, Bduke. --Gazal world (talk) 22:53, 13 September 2020 (UTC)

Notability opinion

Anyone with experience of Russian academia have an opinion whether Draft:Zayceva Tatyana Ivanovna is notable? I have no idea. (t · c) buidhe 13:34, 3 October 2020 (UTC)

SNG and GNG

There is a discussion at Wikipedia talk:Notability on the relationship between SNGs and the GNG which might be of interest to editors who watch this page. Best, Barkeep49 (talk) 21:46, 7 November 2020 (UTC)
