Pseudoscience

From para.wiki

Pseudoscience consists of statements, beliefs, or practices that are claimed to be both scientific and factual but are incompatible with the scientific method.[1] Pseudoscience is often characterized by contradictory, exaggerated or unfalsifiable claims; reliance on confirmation bias rather than rigorous attempts at refutation; lack of openness to evaluation by other experts; absence of systematic practices when developing hypotheses; and continued adherence long after the pseudoscientific hypotheses have been experimentally discredited. The term pseudoscience is considered pejorative,[2] because it suggests that something is being presented as science inaccurately or even deceptively. Those described as practicing or advocating pseudoscience often dispute the characterization.[3] Being given the label does not by itself refute a claim, but it does suggest either misinformation or dishonesty.

Pseudoscience can be harmful. For example, pseudoscientific anti-vaccine activism and promotion of homeopathic remedies as alternative disease treatments can result in people forgoing important medical treatment with demonstrable health benefits.[4]

Etymology

The word pseudoscience is derived from the Greek root pseudo meaning false[5][6] and the English word science, from the Latin word scientia, meaning "knowledge". Although the term has been in use since at least the late 18th century (e.g., in 1796 by James Pettit Andrews in reference to alchemy[7][8]), the concept of pseudoscience as distinct from real or proper science seems to have become more widespread during the mid-19th century. Among the earliest uses of "pseudo-science" was in an 1844 article in the Northern Journal of Medicine, issue 387:

That opposite kind of innovation which pronounces what has been recognized as a branch of science, to have been a pseudo-science, composed merely of so-called facts, connected together by misapprehensions under the disguise of principles.

An earlier use of the term was in 1843 by the French physiologist François Magendie, who referred to phrenology as "a pseudo-science of the present day".[9][10] During the 20th century, the word was used pejoratively to describe explanations of phenomena which were claimed to be scientific, but which were not in fact supported by reliable experimental evidence.

Dismissing the separate issue of intentional fraud—such as the Fox sisters’ “rappings” in the 1850s (Abbott, 2012)—the pejorative label pseudoscience distinguishes the scientific ‘us’, at one extreme, from the pseudo-scientific ‘them’, at the other, and asserts that ‘our’ beliefs, practices, theories, etc., by contrast with those of ‘the others’, are scientific. There are four criteria:
     (a) the ‘pseudoscientific’ group asserts that its beliefs, practices, theories, etc., are ‘scientific’;
     (b) the ‘pseudoscientific’ group claims that its allegedly established facts are justified true beliefs;
     (c) the ‘pseudoscientific’ group asserts that its ‘established facts’ have been justified by genuine, rigorous, scientific method; and
     (d) this assertion is false or deceptive: “it is not simply that subsequent evidence overturns established conclusions, but rather that the conclusions were never warranted in the first place” (Blum, 1978, p.12 [Yeates' emphasis]; also, see Moll, 1902, pp.44-47).[11]

At times, however, the word was used in a more formal, technical manner in response to a perceived threat to individual and institutional security in a social and cultural setting.[12]

Relationship to science

Pseudoscience is differentiated from science because – although it claims to be science – pseudoscience does not adhere to accepted scientific standards, such as the scientific method, falsifiability of claims, and Mertonian norms.

Scientific method

Main article: Scientific method.
The scientific method is a continuous cycle of hypothesis, prediction, testing and questioning.
A typical 19th-century phrenology chart: During the 1820s, phrenologists claimed the mind was located in areas of the brain, and were attacked for doubting that mind came from the nonmaterial soul. Their idea of reading "bumps" in the skull to predict personality traits was later discredited.[13] Phrenology was first termed a pseudoscience in 1843 and continues to be considered so.[9]

A number of basic principles are accepted by scientists as standards for determining whether a body of knowledge, method, or practice is scientific. Experimental results should be reproducible and verified by other researchers.[14] These principles are intended to ensure experiments can be reproduced measurably given the same conditions, allowing further investigation to determine whether a hypothesis or theory related to given phenomena is valid and reliable. Standards require the scientific method to be applied throughout, and bias to be controlled for or eliminated through randomization, fair sampling procedures, blinding of studies, and other methods. All gathered data, including the experimental or environmental conditions, are expected to be documented for scrutiny and made available for peer review, allowing further experiments or studies to be conducted to confirm or falsify results. Statistical quantification of significance, confidence, and error[15] is also an important tool for the scientific method.

Falsifiability

Main article: Falsifiability.

During the mid-20th century, the philosopher Karl Popper emphasized the criterion of falsifiability to distinguish science from nonscience.[16] Statements, hypotheses, or theories have falsifiability or refutability if there is the inherent possibility that they can be proven false; that is, if it is possible to conceive of an observation or an argument which negates them. Popper used astrology and psychoanalysis as examples of pseudoscience and Einstein's theory of relativity as an example of science. He subdivided nonscience into philosophical, mathematical, mythological, religious and metaphysical formulations on one hand, and pseudoscientific formulations on the other.[17]

Another example illustrating the need for a claim to be falsifiable appears in Carl Sagan's The Demon-Haunted World, where he discusses an invisible dragon that he keeps in his garage. The point is made that there is no physical test to refute the claim of the dragon's presence: whatever test one thinks can be devised, there is a reason why it does not apply to the invisible dragon, so one can never prove that the initial claim is wrong. Sagan concludes: "Now, what's the difference between an invisible, incorporeal, floating dragon who spits heatless fire and no dragon at all?" He states that "your inability to invalidate my hypothesis is not at all the same thing as proving it true",[18] once again explaining that even if such a claim were true, it would be outside the realm of scientific inquiry.

Mertonian norms

Main article: Mertonian norms.

In 1942, Robert K. Merton identified a set of five "norms" that he characterized as what makes a real science. If any of the norms were violated, Merton considered the enterprise to be nonscience. These norms are not broadly accepted by the scientific community. They were:

  • Originality: The tests and research done must present something new to the scientific community.
  • Detachment: Scientists' reasons for practicing science should simply be the expansion of knowledge; they should not have personal reasons to expect certain results.
  • Universality: No person should be able to obtain the information of a test more easily than another person. Social class, religion, ethnicity, or any other personal factors should not affect someone's ability to receive or perform a type of science.
  • Skepticism: Scientific facts must not be based on faith. One should always question every case and argument and constantly check for errors or invalid claims.
  • Public accessibility: Any scientific knowledge one obtains should be made available to everyone. The results of any research should be published and shared with the scientific community.[19]

Refusal to acknowledge problems

In 1978, Paul Thagard proposed that pseudoscience is primarily distinguishable from science when it is less progressive than alternative theories over a long period of time, and its proponents fail to acknowledge or address problems with the theory.[20] In 1983, Mario Bunge suggested the categories of "belief fields" and "research fields" to help distinguish between pseudoscience and science, where the former is primarily personal and subjective and the latter involves a certain systematic method.[21] The 2018 book The Skeptics' Guide to the Universe by Steven Novella et al. lists hostility to criticism as one of the major features of pseudoscience.[22]

Criticism of the term

Philosophers of science such as Paul Feyerabend argued that a distinction between science and nonscience is neither possible nor desirable.[23] Among the issues that can make the distinction difficult are variable rates of evolution among the theories and methods of science in response to new data.

Larry Laudan has suggested pseudoscience has no scientific meaning and is mostly used to describe our emotions: "If we would stand up and be counted on the side of reason, we ought to drop terms like 'pseudo-science' and 'unscientific' from our vocabulary; they are just hollow phrases which do only emotive work for us".[24] Likewise, Richard McNally states, "The term 'pseudoscience' has become little more than an inflammatory buzzword for quickly dismissing one's opponents in media sound-bites" and "When therapeutic entrepreneurs make claims on behalf of their interventions, we should not waste our time trying to determine whether their interventions qualify as pseudoscientific. Rather, we should ask them: How do you know that your intervention works? What is your evidence?"[25]

Alternative definition

For philosophers Silvio Funtowicz and Jerome R. Ravetz, "pseudo-science may be defined as one where the uncertainty of its inputs must be suppressed, lest they render its outputs totally indeterminate". The definition, in the book Uncertainty and Quality in Science for Policy (p. 54),[26] alludes to the loss of craft skills in handling quantitative information, and to the bad practice of achieving precision in prediction (inference) only at the expense of ignoring uncertainty in the input used to formulate the prediction. This use of the term is common among practitioners of post-normal science. Understood in this way, pseudoscience can be fought using good practices to assess uncertainty in quantitative information, such as NUSAP and – in the case of mathematical modelling – sensitivity auditing.

The astrological signs of the zodiac

Indicators of possible pseudoscience

Homeopathic preparation Rhus toxicodendron, derived from poison ivy.

A topic, practice, or body of knowledge might reasonably be termed pseudoscientific when it is presented as consistent with the norms of scientific research, but it demonstrably fails to meet these norms.[1][27]

Use of vague, exaggerated or untestable claims

  • Assertion of scientific claims that are vague rather than precise, and that lack specific measurements.[28]
  • Assertion of a claim with little or no explanatory power.[29]
  • Failure to make use of operational definitions (i.e., publicly accessible definitions of the variables, terms, or objects of interest so that persons other than the definer can measure or test them independently) (See also: Reproducibility).
  • Failure to make reasonable use of the principle of parsimony, i.e., failing to seek an explanation that requires the fewest possible additional assumptions when multiple viable explanations are possible (see: Occam's razor).[30]
  • Use of obscurantist language, and use of apparently technical jargon in an effort to give claims the superficial trappings of science.
  • Lack of boundary conditions: Most well-supported scientific theories possess well-articulated limitations under which the predicted phenomena do and do not apply.[31]
  • Lack of effective controls, such as placebo and double-blind, in experimental design.
  • Lack of understanding of basic and established principles of physics and engineering.[32]

Over-reliance on confirmation rather than refutation

  • Assertions that do not allow the logical possibility that they can be shown to be false by observation or physical experiment (see also: Falsifiability).[16][33]
  • Assertion of claims that a theory predicts something that it has not been shown to predict.[34] Scientific claims that do not confer any predictive power are considered at best "conjectures", or at worst "pseudoscience" (e.g., ignoratio elenchi).[35]
  • Assertion that claims which have not been proven false must therefore be true, and vice versa (see: Argument from ignorance).[36]
  • Over-reliance on testimonial, anecdotal evidence, or personal experience: This evidence may be useful for the context of discovery (i.e., hypothesis generation), but should not be used in the context of justification (e.g., statistical hypothesis testing).[37]
  • Presentation of data that seems to support claims while suppressing or refusing to consider data that conflict with those claims.[38] This is an example of selection bias, a distortion of evidence or data that arises from the way the data are collected. It is sometimes referred to as the selection effect.
  • Promoting excessive or untested claims that have been previously published elsewhere to the status of fact; an accumulation of such uncritical secondary reports, which do not otherwise contribute their own empirical investigation, is called the Woozle effect.[39]
  • Reversed burden of proof: science places the burden of proof on those making a claim, not on the critic. "Pseudoscientific" arguments may neglect this principle and demand that skeptics demonstrate beyond a reasonable doubt that a claim (e.g., an assertion regarding the efficacy of a novel therapeutic technique) is false. It is essentially impossible to prove a universal negative, so this tactic incorrectly places the burden of proof on the skeptic rather than on the claimant.[40]
  • Appeals to holism as opposed to reductionism: proponents of pseudoscientific claims, especially in organic medicine, alternative medicine, naturopathy and mental health, often resort to the "mantra of holism" to dismiss negative findings.[41]

Lack of openness to testing by other experts

  • Evasion of peer review before publicizing results (termed "science by press conference"):[40][42] Some proponents of ideas that contradict accepted scientific theories avoid subjecting their ideas to peer review, sometimes on the grounds that peer review is biased towards established paradigms, and sometimes on the grounds that assertions cannot be evaluated adequately using standard scientific methods. By remaining insulated from the peer review process, these proponents forgo the opportunity of corrective feedback from informed colleagues.[41]
  • Some agencies, institutions, and publications that fund scientific research require authors to share data so others can evaluate a paper independently. Failure to provide adequate information for other researchers to reproduce the claims contributes to a lack of openness.[43]
  • Appealing to the need for secrecy or proprietary knowledge when an independent review of data or methodology is requested.[43]
  • Substantive debate on the evidence by knowledgeable proponents of all viewpoints is not encouraged.[44]

Absence of progress

  • Failure to progress towards additional evidence of its claims.[33] Terence Hines has identified astrology as a subject that has changed very little in the past two millennia.[31][20]
  • Lack of self-correction: scientific research programmes make mistakes, but they tend to reduce these errors over time.[45] By contrast, ideas may be regarded as pseudoscientific because they have remained unaltered despite contradictory evidence. Scientists Confront Velikovsky (1976, Cornell University Press) also delves into these features in some detail, as does the work of Thomas Kuhn, e.g., The Structure of Scientific Revolutions (1962), which also discusses some of the items on the list of characteristics of pseudoscience.
  • Statistical significance of supporting experimental results does not improve over time and is usually close to the cutoff for statistical significance. Normally, experimental techniques improve or the experiments are repeated, and this gives ever stronger evidence. If statistical significance does not improve, this typically shows that the experiments have simply been repeated until a success occurs due to chance variations.
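The last point can be illustrated with a small simulation. The sketch below is hypothetical (not drawn from the article or any study it cites): it tests a true null effect over and over and reports only the first "significant" run, mimicking the repeat-until-success pattern described above.

```python
import math
import random

def one_sample_p(sample, mu0=0.0):
    """Two-sided p-value for a z-test of the sample mean, assuming known sd = 1."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n)
    # Normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))

def run_until_significant(alpha=0.05, n=30, max_tries=200, rng=random):
    """Repeat a null experiment (true effect = 0) until p < alpha; report that p."""
    for _ in range(max_tries):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        p = one_sample_p(sample)
        if p < alpha:
            return p
    return None  # gave up; rare, since each try "succeeds" with probability alpha

if __name__ == "__main__":
    random.seed(1)
    reported = [p for p in (run_until_significant() for _ in range(100)) if p is not None]
    # Every reported "finding" clears the cutoff, but most sit only just below it.
    near_cutoff = sum(1 for p in reported if 0.01 < p < 0.05) / len(reported)
    print(f"{len(reported)} 'significant' results; {near_cutoff:.0%} have 0.01 < p < 0.05")
```

Since a p-value under the null hypothesis is uniformly distributed, conditioning on p < 0.05 leaves it uniform on [0, 0.05], so roughly 80% of these "successes" land between 0.01 and 0.05. A genuine, growing body of evidence would instead drive p-values steadily downward as techniques improve and samples accumulate.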

Personalization of issues

Use of misleading language

  • Creating scientific-sounding terms to persuade nonexperts to believe statements that may be false or meaningless: For example, a long-standing hoax refers to water by the rarely used formal name "dihydrogen monoxide" and describes it as the main constituent in most poisonous solutions to show how easily the general public can be misled.
  • Using established terms in idiosyncratic ways, thereby demonstrating unfamiliarity with mainstream work in the discipline.

References

  1. Cover JA, Curd M, eds. (1998), Philosophy of Science: The Central Issues, pp. 1–82.
  2. Frietsch, Ute (7 April 2015). "The boundaries of science / pseudoscience". European History Online (EGO). Archived from the original on 15 April 2017. Retrieved 15 April 2017.
  3. Hansson, Sven Ove (2008), "Science and Pseudoscience", Stanford Encyclopedia of Philosophy, Metaphysics Research Lab, Stanford University, Section 2: The "science" of pseudoscience.
  4. Vyse, Stuart (10 July 2019). "What Should Become of a Monument to Pseudoscience?". Skeptical Inquirer. Center for Inquiry. Retrieved 1 December 2019.
  5. "pseudo", The Free Dictionary, Farlex, Inc., 2015
  6. "Online Etymology Dictionary". Douglas Harper. 2015.
  7. Template:OED
  8. Template:Harvp
  9. Magendie F (1843). An Elementary Treatise on Human Physiology. John Revere (5th ed.). New York: Harper. p. 150.
  10. Lamont, Peter (2013). Extraordinary Beliefs: A Historical Approach to a Psychological Problem. Cambridge University Press. p. 58. ISBN 978-1107019331. When the eminent French physiologist, François Magendie, first coined the term ‘pseudo-science’ in 1843, he was referring to phrenology.
  11. Yeates (2018), p.42.
  12. Lua error in Module:Citation/CS1/Identifiers at line 47: attempt to index field 'wikibase' (a nil value).
  13. Bowler J (2003). Evolution: The History of an Idea (3rd ed.). University of California Press. p. 128. ISBN 978-0-520-23693-6.
  14. e.g. Template:Harvp
  15. Template:Harvp, especially Chapter 6, "Probability", and Chapter 7, "Inductive Logic and Statistics"
  16. Popper, Karl (1959). The Logic of Scientific Discovery. Routledge. ISBN 978-0-415-27844-7. The German version is currently in print by Mohr Siebeck (ISBN 3-16-148410-X).
  17. Template:Harvp
  18. Template:Harvp
  19. Casti, John L. (1990). Paradigms lost: tackling the unanswered mysteries of modern science (1st ed.). New York: Avon Books. pp. 51–52. ISBN 978-0-380-71165-9.
  20. Thagard (1978), pp. 223 ff.
  21. Template:Harvp
  22. Novella, Steven (2018). The Skeptics' Guide to the Universe: How to Know What's Really Real in a World Increasingly Full of Fake. Grand Central Publishing. p. 165.
  23. Feyerabend, Paul (1975). "Table of contents and final chapter". Against Method: Outline of an Anarchistic Theory of Knowledge. ISBN 978-0-86091-646-8.
  24. Laudan L (1996). "The demise of the demarcation problem". In Ruse M. But Is It Science?: The Philosophical Question in the Creation/Evolution Controversy. pp. 337–350.
  25. McNally RJ (2003). "Is the pseudoscience concept useful for clinical psychology?". The Scientific Review of Mental Health Practice. 2 (2). Archived from the original on 30 April 2010.
  26. Funtowicz S, Ravetz J (1990). Uncertainty and Quality in Science for Policy. Dordrecht: Kluwer Academic Publishers.
  27. Bunge 1983b.
  28. e.g. Template:Harvp (Probability, "Common Blunders").
  29. Popper, Karl (1963). Conjectures and Refutations: The Growth of Scientific Knowledge.
  30. Template:Harvp, "Parsimony and Efficiency"
  31. Hines, Terence (1988). Pseudoscience and the Paranormal: A Critical Examination of the Evidence. Buffalo, NY: Prometheus Books. ISBN 978-0-87975-419-8.
  32. Donald E. Simanek. "What is science? What is pseudoscience?". Archived from the original on 25 April 2009.
  33. Lakatos I (1970). "Falsification and the Methodology of Scientific Research Programmes". In Lakatos I, Musgrave A. Criticism and the Growth of Knowledge. pp. 91–195.
  34. e.g. Template:Harvp (Deductive Logic, "Fallacies"), and at 211 ff (Probability, "Common Blunders")
  35. Macmillan Encyclopedia of Philosophy Vol. 3, "Fallacies" 174 ff, esp. section on "Ignoratio elenchi"
  36. Macmillan Encyclopedia of Philosophy Vol 3, "Fallacies" 174 ff esp. 177–178
  37. Template:Harvp
  38. Thagard (1978), pp. 227–228.
  39. Eileen Gambrill (1 May 2012). Critical Thinking in Clinical Practice: Improving the Quality of Judgments and Decisions (3rd ed.). John Wiley & Sons. p. 109. ISBN 978-0-470-90438-1.
  40. Lilienfeld SO (2004). Science and Pseudoscience in Clinical Psychology. Guilford Press. ISBN 1-59385-070-0.
  41. Template:Harvp
  42. Gitanjali B (2001). "Peer review – process, perspectives and the path ahead" (PDF). Journal of Postgraduate Medicine. 47 (3): 210–14. PMID 11832629. Archived from the original (PDF) on 23 June 2006.
  43. Template:Harvp
  44. Template:Harvp
  45. Template:Harvp
