Style guidelines for writing about scientific miracles, famous one-time observations of them, and experiments trying to reproduce them.

(Started on the LK-99 talk page, for good-faith results. For the bad-faith version, see User:Sj/scifraud)

MOS:MIRACLE

"And it is at this point that we have an entirely new physics..."

Reflecting on various miracle-cure physics and science articles: we could use a style for tagging / talking about claimed advances that are not 'breakthroughs' in the sense of "biggest advance in years" but 'breakaways' in the sense of "sudden inexplicable advance surpassing anything in the history of the field". To my knowledge, these are almost always a mix of error, fraud, and wishful thinking. We have Category:Pseudoscience for projects that continue to publish such claims after many previous attempts have shown them not to work, but nothing for this category of initially credible, technical, but aspirational and unverified or unverifiable discoveries.

Some characteristics of discoveries that fall into miracle territory and may deserve their own style (feel free to add to this list):

Miraculousness

  • No intermediate advances, and no well-understood mechanism of action.
  • Multiple dimensions of breakthrough at once
  • Improvements that are too good to be true: an order of magnitude improvement over the state of the art, all at once
  • + Improvements that are way, waaaay too good to be true: 3+ orders of magnitude of improvement

Supreme confidence

  • Reports of success quickly jump to the potential world-changing implications, should the initial results bear out and be refined. Secondary sources talk exclusively about potential future implications and not about whether the report / claim is credible.
  • Limited formal or peer recognition of the work. Authors have detailed theories for why that is.

Fiddliness

  • Expensive systems replaced with something you can make in a machine shop, with lots of trial and error [here: "1000 trials" before getting a compound that works, allowing for unwitting forking-path bias; see the sketch after this list]
  • Samples or prototypes are hard to access, though people are invited to try recreating their own from descriptions.
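
A minimal illustrative sketch (not from the essay, with made-up numbers) of the forking-path bias mentioned above, using only Python's standard library: if each of 1000 trial syntheses yields a noisy measurement of an effect that is truly zero, the single best-looking trial will still sit roughly three standard deviations above zero, purely from after-the-fact selection.

    import random
    import statistics

    random.seed(0)  # fixed seed so the sketch is repeatable

    N_TRIALS = 1000   # hypothetical number of synthesis attempts
    NOISE_SD = 1.0    # measurement noise; the true effect is zero

    # Each "trial" is a noisy measurement of an effect that does not exist.
    measurements = [random.gauss(0.0, NOISE_SD) for _ in range(N_TRIALS)]

    best = max(measurements)
    print(f"mean over all {N_TRIALS} trials: {statistics.mean(measurements):+.2f}")
    print(f"best single trial:           {best:+.2f}")
    # Reporting only the best-looking trial, chosen after the fact, makes a
    # null result look like a discovery: the forking-path bias noted above.

Nothing here is specific to any real experiment; it only illustrates why "1000 trials" followed by reporting the best one is not, by itself, evidence of anything.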

The above assumes we're only using reliable sources, but works through how to summarize them and use them proportionately in the period between self-publication and meaningful external review, when a frothy pop-science vibe will dominate all searches and people will develop an expectation that the miracle will work out and start extrapolating endlessly on what that might mean [with no knowledge of whether it ever happened even once, is replicable, is understandable, means what the most starry-eyed think it means, &c.].

During that period, popular science outlets – often the primary sources for reporting on new self-announced discoveries – are not particularly reliable, and authors themselves [in interviews and their own preprints] are particularly conflicted: public interest paints them into a corner of defending their work even in aspects where they may have doubts.

  • Improving our RS evaluations: Quality of reporting from a given outlet can vary with the point in the hype cycle. Do we need "generally solid reporting but credulous in area X" as a nuance in source assessments?
  • Improving proportionality (and filling a known recurring gap): Proportionality can be hard to assess when searches are flooded by hype and counter-hype. The most reliable and level-headed observers may simply not weigh in at all. Can we somehow help solicit calm reviews for this purpose? Perhaps usage on WP would inspire engagement from people rightly uninterested in a pop-sci interview, as most outlets want not a calm + proportionate assessment but either hype or debunking of hype (which is tiring).
  • Improving how we include + contextualize less-notable sources: For breaking news w/ limited options we often include not-so-RS sources, or sources reliable for identifying 'notable buzz' but not for any synthesis or even a modestly close reading of other documents.
  • Couching claims + alleged observations in hypothetical language: There are many other stylistic choices for a style guide to cover, such as how, and how persistently, to couch the statements in a breaking-miracle article in terms of where + how each claim was made (is it like WP:UNIVERSE? like WP:ALLEGED?).
  • Improving category and navbox selection: What categories to include, and what nav / infobox templates to use (when there are multiple options). For many topics, <other miracles> may be a better navbox than <field>.
    • For instance, EmDrive has a navbox at the bottom for Spacecraft Propulsion, but one for perpetual motion or pseudoscience would be more appropriate. Hot superconductors are theoretically possible, like cold fusion (and unlike water-fuelled cars, which get similar credulous attention from generally-reliable mainstream media every so often), but our failure to handle cold fusion hype proportionately in the past led to long-running edit wars.
    • For instance: We might categorize untested claims as something like "Category:Alleged breakthroughs" and not "Category:Breakthroughs". This avoids prematurely giving credence to an untested claim and makes WP a less attractive battleground.

Comments

from Talk:LK-99

Is there an existing style guide for this? Good examples in other articles? Popular science journalism treats these alleged advances the same way it treats any tangible, widely observable advance (like the first GW solar tower or the first electric flying car), and then spends as many articles saying "scientists skeptical about X"... "X shown not to work after all". We should provide better context from the start, particularly in the early days of hype and confusion.

Elusiveness of negative results

Someone suggested we would know soon whether a miracle that many people are trying to replicate has "panned out or not". This may be true if there is a series of positive results. But if there are only negative results, it is unlikely, as negative claims are hard to prove. Some developments that might be expected even if results stay persistently negative for a long time:

Inconclusive support

  • Through confirmation bias and multiple comparisons, a number of others will also find potentially weakly supporting evidence of the desired outcome (see the sketch after this list).
    • One or two labs will report they have a partial replication but will not publish their results, perhaps "to double check that their results are correct". They may never publish anything, but that initial statement can keep hope alive for years.
    • Some replicators will report instances that they claim show weakly statistically significant support for the breakthrough claims, or at least for one signal. Rather than taking extra care to rule out sources of error, or making sure they can replicate their own experiment under a range of setups and initial conditions, they will publish early and note that as an intended followup. This followup may take years, or for various reasons [publication bias, shifting trends, distraction] may not happen, or not be reported as widely as the initial hopeful result.
    • A small community of enthusiasts will start doing casual replications and reporting their results, again without sparing too much thought for the implications of multiple comparisons.
    • Someone will produce an informal meta-study drawing only on the results from these three groups that show any positive indicator for the hoped-for result. They will come up with theories about what those experimental setups had in common that "got it right", leading to another round of experiments.
  • Groups that find no evidence supporting this outcome will offer up hypotheses about what first-order changes (multiple comparisons again) might lead to future experiments that do support the outcome. This will be read as partial support ("group X found that change Y would be conducive to the miracle!" rather than "group X found no support for a miracle, but in contemplating endless variations, found one that they might be willing to try in a future miracle-test").
  • Groups that are convinced they have found something related to the miracle will file patents, publish unreviewed preprints, or otherwise self-publish highly detailed but messy or noisy documents. The existence of these documents, and the confidence of their authors, will be read as support.
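
A minimal illustrative sketch (hypothetical counts, Python standard library only) of why such weakly significant partial replications accumulate even when there is nothing to find: under a true null hypothesis each test's p-value is uniform on [0, 1], so when many labs each check several candidate signals, a sizeable fraction will report at least one "significant" hit by chance, and a meta-study built only from those reports will look like mounting support.

    import random

    random.seed(1)  # fixed seed so the sketch is repeatable

    N_LABS = 30       # hypothetical number of replication attempts
    N_SIGNALS = 10    # hypothetical candidate signals checked per lab
    ALPHA = 0.05      # conventional significance threshold

    positive_reports = []
    for lab in range(N_LABS):
        # Under the null, each test's p-value is just a uniform random draw.
        p_values = [random.random() for _ in range(N_SIGNALS)]
        hits = [p for p in p_values if p < ALPHA]
        if hits:
            positive_reports.append((lab, min(hits)))

    expected = N_LABS * (1 - (1 - ALPHA) ** N_SIGNALS)
    print(f"{len(positive_reports)} of {N_LABS} labs report at least one "
          f"'significant' signal (about {expected:.0f} expected by chance alone)")
    # An informal meta-study built only from these positive reports, ignoring
    # the labs and signals that showed nothing, reads as mounting support even
    # though every "hit" here is pure chance.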

Inconclusive disconfirmation

  • The most careful replication efforts will not succeed. But failure to confirm isn't confirmation of failure: maybe they didn't do it carefully enough! The most careful groups will also avoid overstating the implications of their results, so they will not infer that the miracle is an illusion. Meanwhile the least careful groups will overstate implications, inferring that a hint of a potential miracle means it is likely real.
    • Many groups w/ varying experimental precision will try to replicate the work. None will have obvious success, or they will instead confirm non-miraculous explanations for the early observations.
  • The discoverers may come up with novel reasons, based on new unknown physics, why failed replications that describe their possible sources of error may indicate the miracle is close to being observed (if only those errors were overcome). Being sure of their initial discovery, they will redouble efforts to explore the space of similar experiments to confirm replication.
    • Discoverers may continue patenting designs (with concomitant silence about undisclosed discoveries), and fundraising for further efforts.
    • They will update their methods to address specific arguments against their approach, making even purer samples and more delicate instruments, and looking for smaller and smaller effects that could be claimed as statistically significant.
  • Confident replicators will start to derive motivation from the possible implications of success, devoting more and more time to thought experiments about the personal and societal implications, and leaning on those large future benefits to justify large current investments in experiments.
    • They may start to cite the positive facets of others' inconclusive results, or informal meta-studies, or optimistic projections, in explaining their confidence about the next planned experiment.
  • Through citogenesis, even a long sequence of disconfirmations can be glossed as "the latest breakthrough, which could revolutionize society, that needs replication and further confirmation, is being subjected to its most careful test yet" without independent confirmation of method, underlying theory, or observation of the expected core results; and also without the discoverers even developing an unambiguous demonstration they can show to other experts in their field.

Feel free to add to the above. – SJ +

SJ's comment looks like it would be a valuable addition to the Reproducibility article, with a link from this one. Frank MacCrory (talk) 02:43, 3 August 2023 (UTC)

I reckon it's "reproducibility style on Wikipedia" more than the encyclopedia article itself. Perhaps we need an MOS:REPRODUCIBILITY as part of the same style guide page as MOS:MIRACLE. – SJ +