Talk:Experimenter's bias

Latest comment: 8 years ago by Tim bates in topic Suggested merger

Untitled

What type of experimenter's bias is this?

I've recently added a section on spiritual healing including a study published in 2001. This used experienced healers in the treatment (healing) group and "simulated healers" in the control group. Is there a type of bias caused by using "simulated healers" instead of experienced healers in the control group? What I'm looking for is the difference between assuming that the simulated healers cannot heal and knowing that they cannot heal.

Thanks,

Adrian-from-london (talk) 23:59, 21 September 2010 (UTC)

I'm concerned about the entire article: it needs a clear description of what experimenter's bias is, what it means, and what the article will cover. Specifically, the introduction needs to do more to set up the structure of the article. I'll address what I perceive as issues and offer suggestions for others to examine. As mentioned, the introduction is unclear and does not explain the topic as well as a reader needs. The topic should be defined and clarified. The introduction should also discuss why experimenter bias is a problem and cite at least one reputable, reliable source verifying that it is indeed a significant problem. It should explain which research fields have undue or unacceptable levels of experimenter bias (e.g., medical research, social/political/attitudinal surveys, face-to-face psychological studies). If it is a significant problem, the introduction must say why: does the bias of the experimenter result in poor medical treatment, or in false information entering the literature of the field? This needs to be clear.

The introduction of the article (and possibly other sections) should also discuss the following:

1) the consequences of experimenter bias (such as false and misleading results entering the field or the public domain),

2) causes and reasons for experimenter bias (untrained or poorly trained researchers, researchers with a conflict of interest, individual failures/flaws/errors/mistakes, etc.),

3) the methods used to control, compensate for, and reduce experimenter bias, and whether they have been successful. For example, one method is to ensure that the researcher/experimenter has a period of supervised training, such as a master's degree or Ph.D. program, in order to acquire the skills and abilities needed to conduct valid, reliable research. A certified training program that develops research assistants for the primary researcher's programs would also help. Some fields conduct research that is supervised by a Ph.D. or M.D. but carried out by well-trained researchers with less formal education. Research that is carefully designed, controlled, and conducted will most likely be submitted for publication; once submitted, it receives a formal, rigorous review by a panel of recognized peers/experts in the same field. Most of the bias should be discovered before the work is accepted for publication.

4) the consumers, readers, and users of research findings are not members of the general public; they are other researchers and scientists. The research is certainly not written to be understood by the general public, and the public does not have the training and expertise to read and understand it; no one expects them to. The findings must therefore be released in clear, concise ways, with much effort made to communicate the limitations of the findings.

In addition, I do not recognize some of the terminology used to describe the research stages or processes. For example, the phrase "reading-up on the field" is not used in my field; it's called a "literature review". Perhaps the introduction needs to establish the domain range. Also, I want to ask whether "experimenter bias" is the appropriate term to use. Does experimenter bias imply "within the person"? If so, the range of the article is restricted to what is within the person/experimenter, i.e., bias such as preconceived expectations. Some of the topics seem to lie outside the notion of "within the experimenter", such as sample bias, statistical analyses, etc. What do others think?

Thanks Prof2long (talk) 10:01, 22 January 2011 (UTC)

Previous article

Here is the entire article as it looked before. I removed it but decided to keep it on the discussion page because it is probably not of any use to the average reader, though it might be to someone with expert knowledge. --Spannerjam 15:09, 7 September 2013 (UTC)

In experimental science, experimenter's bias, also known as research bias, is subjective bias towards a result expected by the human experimenter. David Sackett,[1] in a review of biases in clinical studies, states that biases can occur in any one of seven stages of research:

  1. in reading-up on the field,
  2. in specifying and selecting the study sample,
  3. in executing the experimental manoeuvre (or exposure),
  4. in measuring exposures and outcomes,
  5. in analyzing the data,
  6. in interpreting the analysis, and
  7. in publishing the results.

The ultimate source of this bias is the inability of a human being to be fully objective. It occurs more often in the sociological and medical sciences, where double-blind techniques are often employed to combat it. But experimenter's bias can also be found in some physical sciences, for instance where the experimenter rounds off measurements.

Classification of experimenter's biases

Modern electronic or computerized data-acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter's bias was not well recognized until the 1950s and 1960s, and then primarily in medical experiments and studies. Sackett (1979) catalogued 56 biases that can arise in sampling and measurement in clinical research, across the first six of the stages of research listed above. These are as follows:

  1. In reading-up the field
    1. the biases of rhetoric
    2. the all's well literature bias
    3. one-sided reference bias
    4. positive results bias
    5. hot stuff bias
  2. In specifying and selecting the study sample
    1. popularity bias
    2. centripetal bias
    3. referral filter bias
    4. diagnostic access bias
    5. diagnostic suspicion bias
    6. unmasking (detection signal) bias
    7. mimicry bias
    8. previous opinion bias
    9. wrong sample size bias
    10. admission rate (Berkson) bias
    11. prevalence-incidence (Neyman) bias
    12. diagnostic vogue bias
    13. diagnostic purity bias
    14. procedure selection bias
    15. missing clinical data bias
    16. non-contemporaneous control bias
    17. starting time bias
    18. unacceptable disease bias
    19. migrator bias
    20. membership bias
    21. non-respondent bias
    22. volunteer bias
  3. In executing the experimental manoeuvre (or exposure)
    1. contamination bias
    2. withdrawal bias
    3. compliance bias
    4. therapeutic personality bias
    5. bogus control bias
  4. In measuring exposures and outcomes
    1. insensitive measure bias
    2. underlying cause bias (rumination bias)
    3. end-digit preference bias
    4. apprehension bias
    5. unacceptability bias
    6. obsequiousness bias
    7. expectation bias
    8. substitution game
    9. family information bias
    10. exposure suspicion bias
    11. recall bias
    12. attention bias
    13. instrument bias
  5. In analyzing the data
    1. post-hoc significance bias
    2. data dredging bias (looking for the pony)
    3. scale degradation bias
    4. tidying-up bias
    5. repeated peeks bias
  6. In interpreting the analysis
    1. mistaken identity bias
    2. cognitive dissonance bias
    3. magnitude bias
    4. significance bias
    5. correlation bias
    6. under-exhaustion bias

The effects of bias on experiments in the physical sciences have not always been fully recognized.

Statistical background

In principle, if a measurement has a resolution of σ, then averaging N independent measurements gives an average with a resolution of σ/√N (this follows from the central limit theorem of statistics). This is an important experimental technique used to reduce the impact of randomness on an experiment's outcome. It requires that the measurements be statistically independent, and there are several reasons why they may not be. If independence is not satisfied, then the average may not actually be a better statistic but may merely reflect the correlations among the individual measurements and their non-independent nature.

The most common cause of non-independence is systematic errors (errors affecting all measurements equally, causing the different measurements to be highly correlated, so the average is no better than any single measurement). Experimenter bias is another potential cause of non-independence.
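The √N improvement, and how a shared systematic error destroys it, can be illustrated with a small simulation (a sketch, not from the article; the noise levels and trial counts are assumptions chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n_meas, n_trials = 1.0, 100, 2000

# Independent measurements: the scatter of the average shrinks like sigma/sqrt(N).
indep = rng.normal(0.0, sigma, size=(n_trials, n_meas))
indep_avg_std = indep.mean(axis=1).std()

# Add a shared systematic offset to every measurement within a trial:
# the averages stay as scattered as the systematic error itself.
systematic = rng.normal(0.0, sigma, size=(n_trials, 1))
corr_avg_std = (indep + systematic).mean(axis=1).std()

print(f"independent: {indep_avg_std:.3f} (theory: {sigma / np.sqrt(n_meas):.3f})")
print(f"systematic:  {corr_avg_std:.3f} (no sqrt(N) gain over {sigma:.1f})")
```

With 100 measurements per trial, the independent case averages down to roughly a tenth of the single-measurement scatter, while the correlated case does not improve at all, matching the point made above.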

Biological and medical sciences

The complexity of living systems, and the ethical impossibility of performing fully controlled experiments on humans and certain animal species, make experimental bias in these fields a rich and difficult-to-control problem. Scientific knowledge of the phenomenon under study, together with the systematic elimination of probable causes of bias by detecting confounding factors, is the only way to isolate true cause-and-effect relationships. Epidemiology is also the field in which experimenter bias has been studied more thoroughly than in other sciences.

A number of studies of spiritual healing illustrate how the design of a study can introduce experimenter bias into the results. A comparison of two such studies shows that subtle design differences can affect the results: the key difference lay in the intention assigned to the control condition, a negative ("should not heal") intention in one study versus a neutral intention in the other.

A 1995 paper[2] by Hodges & Scofield on spiritual healing used the growth rate of cress seeds as the measured outcome in order to eliminate a placebo response or participant bias. The study reported positive results, as the result for each sample was consistent with the healer's intention that healing should or should not occur. However, the healer involved in the experiment was a personal acquaintance of the study's authors, raising the distinct possibility of experimenter bias. A randomized clinical trial,[3] published in 2001, investigated the efficacy of spiritual healing (both at a distance and face-to-face) in the treatment of chronic pain in 120 patients. Healers were observed by "simulated healers", who then mimicked the healers' movements on a control group while silently counting backwards in fives, a neutral intention rather than a "should not heal" intention. The study found a decrease in pain in all patient groups but "no statistically significant differences between healing and control groups ... it was concluded that a specific effect of face-to-face or distant healing on chronic pain could not be demonstrated."

Physical sciences

If the signal being measured is actually smaller than the rounding error and the data are over-averaged, a positive result can be found in the data where none exists (i.e., a more precise experimental apparatus would conclusively show no such signal). Suppose an experiment is searching for a sidereal variation of some measurement, the measurements are rounded off by a human who knows the sidereal time of each one, and hundreds of measurements are averaged to extract a "signal" smaller than the apparatus' actual resolution. This "signal" can then come from the non-random round-off, not from the apparatus itself. In such cases a single-blind experimental protocol is required: if the human observer does not know the sidereal time of the measurements, then even though the round-off is non-random it cannot introduce a spurious sidereal variation.
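A toy simulation makes the mechanism concrete (a sketch with assumed numbers, deliberately using an exaggerated rounding rule; the real effect would be subtler):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
phase = rng.uniform(0.0, 2 * np.pi, n)       # sidereal phase of each measurement
true_value = 10.0                            # constant: no real sidereal variation
raw = true_value + rng.normal(0.0, 0.3, n)   # instrument noise below the rounding step

# Biased observer: knowing the phase, rounds up near the expected "peak"
# and down elsewhere (a caricature of unconscious digit preference).
biased = np.where(np.sin(phase) > 0, np.ceil(raw), np.floor(raw))

# Blind observer: the rounding rule cannot depend on the (unknown) phase.
blind = np.round(raw)

def sidereal_amplitude(values, ph):
    """Least-squares amplitude of a sin(phase) component in the data."""
    v = values - values.mean()
    return 2.0 * np.mean(v * np.sin(ph))

print(sidereal_amplitude(biased, phase))  # spurious "signal" of ~0.64 from round-off alone
print(sidereal_amplitude(blind, phase))   # consistent with zero
```

The quantity being measured is perfectly constant, yet phase-aware rounding manufactures a sidereal variation comparable to the rounding step; blinding the observer to the phase removes it, exactly as the paragraph above argues.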

Social sciences

The experimenter may introduce cognitive bias into a study in several ways. First, in what is called the observer-expectancy effect, the experimenter may subtly communicate their expectations for the outcome of the study to the participants, causing them to alter their behavior to conform to those expectations. After the data are collected, bias may be introduced during data interpretation and analysis. For example, in deciding which variables to control in analysis, social scientists often face a trade-off between omitted-variable bias and post-treatment bias.[4]
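The omitted-variable versus post-treatment trade-off can be sketched with a toy simulation (all coefficients here are invented for illustration; this is not from the cited paper). Controlling for a variable that the treatment itself affects hides part of the treatment's total effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Toy causal chain: treatment T raises mediator M, and both raise outcome Y.
T = rng.integers(0, 2, n).astype(float)
M = 2.0 * T + rng.normal(0.0, 1.0, n)             # post-treatment variable
Y = 1.0 * T + 1.5 * M + rng.normal(0.0, 1.0, n)   # total effect of T: 1 + 1.5*2 = 4

def ols_coef_on_T(*covariates):
    """OLS coefficient on T, optionally controlling for extra covariates."""
    X = np.column_stack([np.ones(n), T, *covariates])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1]

print(ols_coef_on_T())   # ~4.0: unadjusted regression recovers the total effect
print(ols_coef_on_T(M))  # ~1.0: controlling for the mediator masks the indirect path
```

Here the mediator M is genuinely post-treatment, so "controlling" for it underestimates the treatment's total effect by a factor of four, while in other settings omitting a true confounder would bias the estimate in the opposite way, which is precisely the trade-off described above.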

Forensic sciences

Observer effects are rooted in the universal human tendency to interpret data in a manner consistent with one’s expectations.[5] This tendency is particularly likely to distort the results of a scientific test when the underlying data are ambiguous and the scientist is exposed to domain-irrelevant information that engages emotions or desires.[6] Despite impressions to the contrary, forensic DNA analysts often must resolve ambiguities, particularly when interpreting difficult evidence samples such as those that contain mixtures of DNA from two or more individuals, degraded or inhibited DNA, or limited quantities of DNA template. The full potential of forensic DNA testing can only be realized if observer effects are minimized.[7]

See also

References

  1. ^ Sackett, D. L. (1979). "Bias in analytic research". Journal of Chronic Diseases. 32 (1–2): 51–63. doi:10.1016/0021-9681(79)90012-2. PMID 447779.
  2. ^ Hodges, RD and Scofield, AM (1995). "Is spiritual healing a valid and effective therapy?". Journal of the Royal Society of Medicine. 88 (4): 203–207. PMC 1295164. PMID 7745566.
  3. ^ Abbot, NC, Harkness, EF, Stevinson, C, Marshall, FP, Conn, DA and Ernst, E. (2001). "Spiritual healing as a therapy for chronic pain: a randomized, clinical trial". Pain. 91 (1–2): 79–89. doi:10.1016/S0304-3959(00)00421-8. PMID 11240080.
  4. ^ King, Gary. "Post-Treatment Bias in Big Social Science Questions", accessed February 7, 2011.
  5. ^ Rosenthal, R. (1966). Experimenter Effects in Behavioral Research. NY: Appleton-Century-Crofts.
  6. ^ Risinger, D. M.; Saks, M. J.; Thompson, W. C.; Rosenthal, R. (2002). "The Daubert/Kumho Implications of Observer Effects in Forensic Science: Hidden Problems of Expectation and Suggestion". Calif. L. Rev. 90 (1): 1–56. doi:10.2307/3481305. JSTOR 3481305.
  7. ^ D. Krane, S. Ford, J. Gilder, K. Inman, A. Jamieson, R. Koppl, I. Kornfield, D. Risinger, N. Rudin, M. Taylor, W.C. Thompson (2008). "Sequential unmasking: A means of minimizing observer effects in forensic DNA interpretation". Journal of Forensic Sciences. 53 (4): 1006–1007. doi:10.1111/j.1556-4029.2008.00787.x. PMID 18638252.

Suggested merger

Experimenter's bias and the Observer-expectancy effect seem to be the same or similar. Therefore I suggest they be covered in the same article. That way confusion can be avoided for Wikipedia's readers. --Spannerjam 19:20, 10 October 2013 (UTC)

Seconded. Bryanrutherford0 (talk) 16:56, 29 October 2013 (UTC)
Observer-expectancy effect is a subset of observer bias, but is potentially an article in itself. However, while it is so small, it could redirect to the appropriate heading in this page. Tim bates (talk) 19:31, 20 July 2015 (UTC)

Edit

I made a children's explanation of the seven steps.