THOMAS KUHN ON REVOLUTION AND PAUL FEYERABEND ON ANARCHY

BOOK VI - Page 9

Summary

In summary, semantical analysis reveals that duality need not be expressed in classical terms by Bohr’s complementarity principle.  The semantics of the descriptive terms used for observation are not simple, holistic, or unanalyzable, and prior to testing the semantics of these terms cannot imply an alternative description to the one set forth by the quantum theory, if testing is to have the contingency that gives it its function as an empirical decision procedure in the practice of basic science.  Feyerabend was closer to the mark with the first of his two approaches to realism in microphysics set forth in his “Complementarity” (1958), and he might have retained universalism, i.e., universal quantification, in quantum theory had he ignored the reductionist program, developed a metatheory of semantical description, and then appropriately modified his Thesis I.  With the modification described above, the application of Feyerabend’s Thesis I to the quantum theory need not imply deductivism, and he need not have opted for historical relativism and rejected universalism in the sense of universal quantification.

The quantum theory with its quantum postulate, its duality thesis, and its indeterminacy relations has no need for Newtonian semantics before, during, or after any empirical test.  It is a universal theory with a univocal descriptive vocabulary, and it is not semantically unique in empirical science due to any internal incommensurability.  Had Feyerabend considered Heisenberg’s realistic philosophy of the quantum theory, he would probably not have been driven to advocate his incommensurability and historical relativist theses in order to implement a realistic agenda for microphysics.  Then instead of speaking of the Galileo-Einstein tradition, he could have referenced the Galileo-Einstein-Heisenberg tradition, including Heisenberg’s pluralism.

Incommensurability between theories

Consider further Feyerabend’s incommensurability thesis, which is central to his historical relativism.  Rejecting the naturalistic theory of the semantics of language, including the language of observational description, enables dispensing altogether with classical concepts in quantum theory, and thereby with incommensurability within the quantum theory.  But Feyerabend sees incommensurability in Bohr’s complementarity thesis only as a special case, a case that is intrinsic to a single theory due to the use of classical concepts.

Feyerabend also treats incommensurability as a relation between successive theories, and he maintained the existence of incommensurability even before he adopted Bohr’s interpretation of quantum theory.  In his earlier statements of the thesis he says that two theories are incommensurable if they can have no common meaning, because there exists no general concept whose extension includes instances described by both theories.  The two theories therefore cannot describe the same subject matter and are therefore incommensurable.

In Against Method he also referenced Whorf’s thesis of linguistic relativity to explain incommensurability in terms of covert resistances in the grammar of language.  There he maintains that these covert resistances in the grammar of an accepted theory not only lead scientists to oppose the truth of a new theory, but also lead them to oppose the presumption that the new theory is an alternative to the older one.  He considers both the quantum theory and the relativity theory to be incommensurable in relation to their predecessor, Newtonian mechanics.  However, he offers no evidence for his highly implausible historical thesis that the advocates of Newtonian physics had failed to recognize, at the time these new theories were proposed, that either quantum theory or relativity theory is an alternative to Newtonian physics.

Incommensurability as inexpressibility

Feyerabend furthermore maintains that since incommensurability is due to covert classifications and involves major conceptual changes, it is hardly ever possible to give an explicit definition of it.  He says that the phenomenon must be shown, and that one must be led up to it by being confronted with a variety of instances, so that one can judge for oneself.  Feyerabend’s concept of incommensurability suffers from the same obscurantism as Kuhn’s concept of paradigm.  Readers of Feyerabend must rely on his identification of which transitional episodes in the history of science are to be taken as involving incommensurability and which are not, just as Kuhn’s readers must rely on the latter’s identification of which transitional episodes are transitions to a new and incommensurable paradigm and which are merely further articulations of the same paradigm, as Shapere had complained.  Although the two philosophers do not hold exactly the same views on the nature of incommensurability, and while they disagree about Kuhn’s thesis of normal science, neither developed a metatheory of semantical description that would enable clear and unambiguous individuation of theories and thus characterization of semantical continuity and discontinuity through scientific change.  Feyerabend’s recourse to the Wittgensteinian-like view that incommensurability cannot be defined but can only be shown may reasonably be regarded as evasive in the absence of such a semantical metatheory.

Semantics of the eclipse experiment

The semantics of the Newtonian and relativity theories that Feyerabend says are incommensurable may be examined by considering their synthetic statements analytically for purposes of semantical analysis.  By way of example consider one of the more famous empirical tests of Einstein’s general theory of relativity, the test that had a formative influence on Popper.  Two British astronomers undertook this test, known as the “eclipse experiment”: Sir Arthur Eddington of Cambridge University and Sir Frank Dyson of the Royal Greenwich Observatory.  The test consisted of measuring the gravitationally produced bending of starlight visible during an eclipse of the sun that occurred on May 29, 1919, and then comparing measurements of the visible stars’ positions with the different predictions made by Einstein’s general theory of relativity and by Newton’s celestial mechanics.  The test design included the use of telescopes and photographic equipment for recording the telescopic images of the stars.  First, reference photographs were made during ordinary night darkness of the stars that would be visible in the proximity of the eclipsed sun.  These photographs were used for comparison with photographs of the same stars made during the eclipse.  The reference photographs were made with the telescope at Oxford University several months prior to the eclipse, when these stars were visible at night in England.

Then Eddington and his team journeyed to the island of Principe off the coast of West Africa, in order to be in the path of the total solar eclipse.  During the darkness produced by the eclipse they photographed the stars that were visible in the proximity of the sun’s disk.  They then had two sets of photographs: an earlier set displaying images of the stars unaffected by the gravitational effects of the sun, and a later set displaying images of the stars near the edge of the disk of the eclipsed sun and therefore produced by light rays affected by the sun’s gravitational influence.  The star images farthest from the sun in the eclipse photographs are deflected only negligibly.  And since different telescopes were used for making the two sets of photographs, reference to these effectively undeflected star images was used to determine an overall magnification correction for the different telescopes.  Correction furthermore had to be made for distorting refraction due to atmospheric turbulence and heat gradients, because the atmospheric distortions are large enough to be comparable to the effect being measured.  But they are also random from photograph to photograph, so the correction can be made by averaging over the many photographs.  Such are the essentials of the design of the Eddington eclipse experiment.
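The averaging step can be illustrated with a toy simulation.  The Python sketch below uses assumed numbers (a true deflection of 1.75 arc seconds and plate-to-plate atmospheric jitter of comparable size) merely to show why random distortions cancel when many photographs are averaged; it is not a reconstruction of Eddington’s actual data reduction.

```python
import random
import statistics

# Toy numbers (assumed, not Eddington's data): a true deflection of 1.75 arc sec
# and plate-to-plate atmospheric jitter of comparable size.
random.seed(0)
TRUE_DEFLECTION = 1.75     # arc seconds
ATMOSPHERIC_SIGMA = 1.0    # arc seconds, random from photograph to photograph

def one_photograph():
    """One plate: the true signal plus a random atmospheric displacement."""
    return TRUE_DEFLECTION + random.gauss(0.0, ATMOSPHERIC_SIGMA)

for n_plates in (1, 4, 16, 64):
    plates = [one_photograph() for _ in range(n_plates)]
    print(f"{n_plates:3d} plates: mean deflection = {statistics.mean(plates):5.2f} arc sec")
```

Because the simulated atmospheric displacements are random and independent from plate to plate, their average tends toward zero, and the mean measurement converges on the underlying deflection as more photographs are included.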

The test outcome is as follows: the amount of deflection calculated with the general theory of relativity is 1.75 arc seconds, and Eddington’s findings showed a deflection of 1.60 ± 0.31 arc seconds.  The measured value thus agrees with the relativistic prediction within the margin of measurement error, while it differs from the Newtonian prediction by considerably more than that margin, so that Einstein’s general theory is not falsified and the Newtonian celestial mechanics can no longer be considered valid.  Later more accurate experiments have reduced the error of measurement, thereby further validating the relativity hypothesis.  In this experiment the test-design statements include description of the optical and photographic equipment and of their functioning, of the conditions in which they were used, and of the photographs of the measured phenomenon made with these measurement instruments.  These statements have universal import, since they describe the repeatable experiment, and they are presumed to be true characterizations of the experimental setup.  The theory statements are also universal, and each theory, Einsteinian relativistic physics and Newtonian classical physics, shares descriptive variables with the same set of test-design statements.  Since the test-design statements may be viewed as analytic statements, any descriptive variable occurring both in a test-design statement and in both theories has univocal semantics with respect to the semantic values contributed by the test-design statements.  This test-design semantics is shared by both theories, and it makes the theories semantically commensurable.
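The arithmetic of the comparison can be sketched as follows.  The general-relativistic deflection for a light ray grazing the sun is 4GM/(c²R), and the corresponding Newtonian value, computed as half the relativistic one, is about 0.87 arc seconds.  The Python sketch below, using standard values for the solar mass and radius (figures assumed here, not taken from the text), recovers the 1.75 arc-second prediction cited above and shows that Eddington’s reported 1.60 ± 0.31 arc seconds lies within one standard error of the relativistic prediction but more than two standard errors from the Newtonian one.

```python
import math

# Standard physical constants and solar parameters (assumed values, not from the text).
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
R_SUN = 6.957e8        # solar radius, m
C = 2.998e8            # speed of light, m/s
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

# Deflection of starlight grazing the solar limb.
theta_gr = 4 * G * M_SUN / (C**2 * R_SUN) * RAD_TO_ARCSEC   # general relativity
theta_newton = theta_gr / 2                                  # Newtonian (half) value

measured, sigma = 1.60, 0.31   # Eddington's reported deflection and error, arc seconds

print(f"GR prediction:        {theta_gr:.2f} arc sec")                               # ~1.75
print(f"Newtonian prediction: {theta_newton:.2f} arc sec")                           # ~0.87
print(f"Deviation from GR:     {abs(measured - theta_gr) / sigma:.1f} sigma")        # ~0.5
print(f"Deviation from Newton: {abs(measured - theta_newton) / sigma:.1f} sigma")    # ~2.3
```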

Feyerabend maintains that theories are incommensurable, because there is no concept that is general enough to include both the Euclidean concept of space occurring in Newton’s theory and the Riemannian concept occurring in Einstein’s theory.  In fact the common part of the meanings of the descriptive terms shared by the two theories and the test-design statements is not a common meaning due to a more general geometrical concept.  There is a common meaning because the test-design statements are silent about the claims made by either theory, even as both theories claim to reference the same instances that the test-design statements definitively describe.  Before the test this silence is due to the vagueness in the common part of the meaning of the terms shared by the theory statements and defined by the test-design statements.  In the case of the test design for Eddington’s eclipse experiment, it may be said that before the test the meanings contributed by the test-design statements are not properly called either Newtonian or Einsteinian.  For purposes of describing the experimental setup, these terms have the status of Heisenberg’s “everyday” concepts, which are silent about the relation between parallel lines at distances very much greater than those in the apparatus.

After the test is executed, the nonfalsification of the relativistic theory and the falsification of the Newtonian theory are known outcomes of the test.  This acceptance of the relativity theory is a pragmatic determination giving it the semantically defining status of analytic statements, and the statements of the theory, now a law, supply part of the semantics for each descriptive term common to the theory and the test-design statements.  This semantical contribution by the newly accepted theory to each of these common descriptive variables may be said to resolve some of the vagueness in the whole meaning complex associated with each of these common terms.  Thus the common terms no longer have Heisenberg’s “everyday” status, but have Einsteinian semantics.  No Newtonian semantics is involved.

The semantics supplied to these terms by the test-design statements is still vague, because all meanings are always vague, although it is less vague than before the test outcome is known.  However, were the test-design statements subsequently derived logically from the nonfalsified relativity theory, then these common terms would receive still more Einsteinian semantic values and additional structure from the accepted relativity theory.  The laws constituting the universal test-design statements would have been made a logically derived extension of the nonfalsified relativity theory.  In this case the formerly “everyday” concepts receive further resolution of their vagueness as descriptive terms in the test-design statements.  Thus regardless of whether or not the test-design statements describing the experimental setup can be logically derived from the relativity theory, no resolution of the “everyday” concepts to Newtonian concepts is involved before, during, or after the test, except perhaps for the convinced advocates of the Newtonian theory before they accept that theory’s falsification.  But after the test outcome falsifying the Newtonian theory, even the most convinced advocates of the Newtonian theory must accept the semantically controlling rôle of the test-design statements, or simply reject the test design.

Newtonian confusion

Nonetheless some physicists incorrectly refer to the concepts in the test-design statements for testing relativity theory as Newtonian concepts even after the nonfalsifying test outcome.  This error occurs because any relativistic effects in the test equipment are too small to be detected or measured, and therefore do not jeopardize the conclusiveness of the test.  For example, two different telescopes were used in the Eddington eclipse experiment to produce the sets of photographs, one used before the eclipse and another during the eclipse.  Since the resulting photographs had to be compared, a correction had to be made for differences in magnification.  But no correction was attempted, even by those who believed in the relativity theory, for the deflection of starlight inside each telescope due to the telescope’s own gravitational effect, because such relativistic effects are not empirically detectable.  This empirical underdetermination does not imply that the test-design statements ever affirmed the Newtonian theory.  For the test to have any contingency the test-design statements must be silent about the tested theory and any alternative to it.
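An order-of-magnitude estimate makes the point.  Applying the same grazing-deflection formula 4Gm/(c²r) to a telescope, with an assumed mass of roughly a tonne and a light path passing about a metre from that mass (both figures hypothetical, chosen only to set the scale), gives a deflection of about 10⁻¹⁸ arc seconds, vastly below Eddington’s 0.31 arc-second measurement error.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

# Hypothetical figures chosen only to set the scale: roughly a tonne of telescope
# mass with the light path passing about a metre from it.
m_telescope = 1.0e3     # kg (assumed)
r_path = 1.0            # m  (assumed)
eddington_error = 0.31  # arc seconds, the reported measurement error

theta = 4 * G * m_telescope / (C**2 * r_path) * RAD_TO_ARCSEC
print(f"Deflection by the telescope's own mass: {theta:.1e} arc sec")                 # ~6e-19
print(f"Ratio to the 0.31 arc-sec measurement error: {theta / eddington_error:.1e}")  # ~2e-18
```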

Cultural relativism

In addition to Bohr’s complementarity thesis and his own incommensurability thesis, Feyerabend is also led to his radical historicism by his thesis that, whether in philosophy of science or in any social science, cultural views and values, including the criteria and research practices of empirical science, are inseparable from historical conditions.  In its radical variant historicism precludes the validity of universals altogether, saying that particular historical circumstances cannot supply identical initial conditions for universally quantified theories describing recurrent aspects of human social behavior.  The objection to historicism is firstly that concepts are inherently universal (or, as Popper says, all terms are disposition terms), and secondly that the hypothetical character of universally quantified empirical statements does not as such invalidate them.

Truth is relative to what is said, because it is a property of statements; statements about reality are more or less true and false, while reality just exists.  The scientific revolutions of the twentieth century led philosophers and specifically pragmatists to affirm relativized semantics, and therefore to affirm that meaning and belief are mutually conditioning.  Reality imposes a constraint – the empirical constraint – on this mutual conditioning in language that enables falsification.  In empirical science the locus of the falsification is by prior decision assigned to a proposed universally quantified hypothesis, i.e., a theory, because it is conditioned upon previously selected universal test-design statements.  Outside the limits of empirical underdetermination – measurement error and conceptual vagueness – truth conditioning imposed on universal statements linking initial conditions and test outcomes is not negotiable once test-design statements are formulated and accepted.  But falsifying experiences anomalous to our universal beliefs may force revisions of those universal empirical beliefs and therefore of their semantics.

Critique of Popper’s falsificationism

The evolution of thinking from Conant’s recognition of prejudice in science to Feyerabend’s counterinduction thesis has brought to light an important limitation in Popper’s falsificationist thesis of scientific criticism.  In this respect Feyerabend’s philosophy of science represents a development beyond Popper, even after discounting Feyerabend’s historicism.  Popper had correctly rejected the positivists’ naturalistic philosophy of the semantics of language, and maintained that every statement in science can be revised.  But the paradigmatic status he accorded to Eddington’s 1919 eclipse experiment as a crucial experiment had deflected Popper from exploring the implications of the artifactual semantics thesis, because he identified all semantical analysis with essentialism.  He saw that the decidability of a crucial experiment depends on the scientist “sticking to his problem”.  But he further maintained that the scientist should never redefine his problem by reconsidering any experiment’s test design after the test outcome has been a falsification of the proposed theory.  Such revisions in Popper’s view have no contributing function in the development of science, and are objectionable as ad hoc content-decreasing stratagems, i.e., merely evasions.

But the prejudiced or tenacious response of a scientist to an apparently falsifying test outcome may have a contributing function in the development of science, as Feyerabend illustrates in his examination of Galileo’s arguments for the Copernican cosmology.  Using the apparently falsified theory as a “detecting device”, i.e., letting his prejudicial belief in the heliocentric theory control the semantics of observational description, enabled Galileo to reconceptualize the sense stimuli and thus to reinterpret observations previously described with the equally prejudiced alternative semantics built into the Aristotelian cosmology.  This was also the strategy used by Heisenberg, when he reinterpreted the observational description of the electron tracks in the Wilson cloud chamber experiment with the semantics of his quantum theory, pursuant to Einstein’s anticipation of Feyerabend’s Thesis I, i.e., that the theory decides what the scientist can observe.

As it happens, the cloud chamber experiment was not designed to decide between Newtonian and quantum mechanics.  The water droplets suggesting discontinuity in the condensation tracks are very large in comparison to the electron, and the produced effect easily admits of either interpretation.  But Heisenberg’s reconceptualization of the sense stimuli led him to develop his indeterminacy relations.  In the eclipse experiment in 1919 the counterinduction strategy could also have been used by tenacious Newtonians who chose to reject Eddington’s findings.  Conceivably the artifactual status of the semantics of language permits dissenting scientists to view the falsifying test outcome as a refutation of one or several of Eddington’s test-design statements rather than as a refutation of the Newtonian theory.  Or more precisely, what some scientists view as definitive test-design statements, others may decide to view as falsified theory.

Another historic example of counterinduction, of using an apparently falsified theory as a detecting device, is the discovery of the planet Neptune.  In 1821, when Uranus happened to pass Neptune in its orbit – an alignment that had not occurred since 1649 and was not to occur again until 1993 – Alexis Bouvard developed calculations predicting future positions of the planet Uranus using Newton’s celestial mechanics.  But observations of Uranus showed significant deviations from the predicted positions. 

A first possible response would have been to dismiss the deviations as measurement errors and preserve belief in Newton’s celestial mechanics. But astronomical measurements are repeatable, and the deviations were large enough that they were not dismissed as observational errors. They were recognized to be a new problem.

A second possible response would have been to give Newton’s celestial mechanics the hypothetical status of a theory, to view Newton’s law of gravitation as falsified by the anomalous observations of Uranus, and then attempt to revise Newtonian celestial mechanics.  But by then confidence in Newtonian celestial mechanics was very high, and no alternative to Newton’s physics had been proposed. Therefore there was great reluctance to reject Newtonian physics.

A third possible response, which was the one historically taken, was to preserve belief in the Newtonian celestial mechanics, propose a new auxiliary hypothesis of a gravitationally disturbing phenomenon, and then reinterpret the observations by supplementing the description of the deviations with that auxiliary hypothesis.  Disturbing phenomena can “contaminate” even supposedly controlled laboratory experiments.  The auxiliary hypothesis changed the semantics of the test-design description with respect to what was observed.  In 1845 John Couch Adams in England and Urbain Le Verrier in France, each independently using the apparently falsified Newtonian physics as a detecting device, calculated the positions of the postulated disturbing planet in order to guide observations aimed at detecting it.  In September 1846, using Le Verrier’s calculations, Johann Galle observed the postulated planet with the telescope at the Berlin Observatory.

Theory is language proposed for testing, and test design is language presumed for testing.  But here the status of the discourses was reversed.  In this third response the Newtonian gravitation law was not deemed a tested and falsified theory, but rather was presumed to be true and used for a new test design.  The new test-design language was actually given the relatively more hypothetical status of theory by supplementing it with the auxiliary hypothesis of the postulated planet characterizing the observed deviations in the positions of Uranus.  The nonfalsifying test outcome of this new hypothesis was Galle’s observational detection of the postulated planet, which Le Verrier named Neptune.

But counterinduction is after all just a discovery strategy, and Le Verrier’s counterinduction effort failed to explain an anomalous shift in the point where Mercury’s orbit comes closest to the sun, a deviation known as its perihelion precession.  In 1859 he postulated another gravitationally disturbing planet, which he named Vulcan, and predicted its orbital positions.  But unlike Le Verrier and most physicists at the time, Einstein had given Newton’s celestial mechanics the hypothetical status of theory language, and he viewed Newton’s law of gravitation as falsified by the anomalous perihelion precession.  He had initially attempted a revision of Newtonian celestial mechanics by generalizing on his special theory of relativity.  This first attempt is known as his Entwurf version, which he developed in 1913 in collaboration with his mathematician friend Marcel Grossmann.  But working in collaboration with his friend Michele Besso he found that the Entwurf version had clearly failed to account accurately for Mercury’s orbital deviations; it yielded only 18 seconds of arc each century instead of the actual 43 seconds.
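For reference, the general-relativistic formula for the perihelion advance per orbit is 6πGM/(c²a(1−e²)), where a is the semi-major axis and e the eccentricity of the orbit.  The short Python sketch below, using standard values for Mercury’s orbital elements (figures assumed here, not taken from the text), recovers the roughly 43 arc seconds per century that the Entwurf version could not reproduce.

```python
import math

# Standard constants and Mercury's orbital elements (assumed values, not from the text).
G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
C = 2.998e8             # speed of light, m/s
A = 5.791e10            # Mercury's semi-major axis, m
E = 0.2056              # Mercury's orbital eccentricity
PERIOD_DAYS = 87.97     # Mercury's orbital period, days
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

# General-relativistic perihelion advance per orbit: 6*pi*G*M / (c^2 * a * (1 - e^2))
advance_per_orbit = 6 * math.pi * G * M_SUN / (C**2 * A * (1 - E**2))
orbits_per_century = 36525.0 / PERIOD_DAYS
advance_per_century = advance_per_orbit * RAD_TO_ARCSEC * orbits_per_century

print(f"GR perihelion advance: {advance_per_century:.0f} arc sec per century")   # ~43
```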

In 1915 he finally abandoned the Entwurf version with its intuitive physical ideas carried over from Newton’s theory, and under prodding from the mathematician David Hilbert he turned to mathematics exclusively to produce his general theory of relativity.  In November 1915 his completed general theory correctly predicted the deviations in Mercury’s orbit, and he received a congratulatory letter from Hilbert on “conquering” the perihelion motion of Mercury.  After years of delay due to World War I his general theory was vindicated by Arthur Eddington’s famous eclipse test of 1919.  As for Le Verrier’s postulated planet Vulcan, some astronomers reported that they had observed a transit of a planet across the sun’s disk, but these claims were found to be spurious when larger telescopes were used, and Vulcan has never been observed.

Le Verrier’s response to the deviant orbital observations of Uranus was the opposite of Einstein’s response to the deviant orbital observations of Mercury.  Le Verrier reversed the roles of theory and test-design language by preserving his belief in Newton’s physics and using it to revise the test-design language with his postulate of a disturbing planet.  Einstein viewed Newton’s celestial mechanics to be hypothetical, because he believed that the theory statements were more likely to be productively revised than the test-design statements, and he took the deviant orbital observations of Mercury to be falsifying, thus indicating that revision was needed.  Empirical tests are conclusive decision procedures only for scientists who agree on which language is proposed theory and which is presumed test design, and who furthermore accept both the test design and the test-execution outcomes produced with the accepted test design.

Semantical consequences

Feyerabend recognizes that there are semantical consequences to counterinduction.  In “Trivializing Knowledge”, a paper critical of Popper, Feyerabend states that the “contents” of theories and experiments are constituted by the refutation performed and accepted by the scientific community, rather than functioning as the basis on which falsifiability can be decided, as Popper maintains.  He considers the stock example of a theory such as “Every swan is white”, and states that while a black swan falsifies the theory, the refutation depends on the reasons for the anomalous swan’s black color.  Earlier in his “Popper’s Objective Knowledge” he gives the same example, and says that the decision about the significance of the anomalously black swan depends on having a theory of color production in animals.

But his discussion of this stock example pertains more to the factors that motivate a scientific community to decide between test-design and theory statements than to a description of the semantics resulting from that decision once it is made.  Feyerabend has no metatheory of semantical description for characterizing the “contents” of theories and experiments.  In this respect Feyerabend’s philosophy suffers the same deficiency as Popper’s.

Achievements

The conflicts between Popper and Feyerabend were struggles between giants in the philosophy of science profession.  Having started in the theatre before turning to philosophy, Feyerabend chose a theatrical writing style that offended the droll scholars of the profession who tended to treat him dismissively. Judging by the typical fare to be found even today in the philosophy journals with their lingering residual positivism, he stands above the academic crowd by an order of magnitude.  Feyerabend was an outstanding twentieth-century philosopher of science, who advanced the frontier of the discipline, as it was turning from an encrusted positivism to the new contemporary pragmatism.

 

