BOOK VIII

     This book examines the late twentieth-century specialty called “computational philosophy of science”, which consists of computerized strategies encoded in the software designs of the automated discovery systems developed by Herbert Simon, Paul Thagard, Pat Langley, Thomas Hickey, John Sonquist, Robert Litterman, Jan Zytkow, Gary Bradshaw and others. 

Nobel laureate Herbert Simon is the principal figure considered in this BOOK.  Much of this material is presented in reverse chronological order, and the exposition therefore starts with the work of the philosopher of science Paul Thagard, who follows Simon’s cognitive-psychology agenda for his computational philosophy of science investigations.  Thagard’s philosophy of science is rich enough for exposition in terms of the four functional topics in philosophy of science.  But before considering Thagard’s treatment of the four functional topics, consider firstly his psychologistic views on the nature of computational philosophy of science and on the semantics of conceptual change in scientific revolutions.

Thagard’s Psychologistic Computational Philosophy of Science

Thagard has been a Professor of Philosophy at the University of Waterloo since 1992, where he is also Adjunct Professor of Psychology, Adjunct Professor of Computer Science, Director of his Computational Epistemology Laboratory, and Director of the Cognitive Science Program. He had previously been an associate professor of philosophy at the University of Michigan, Detroit, where he was associated with their Cognitive Sciences Program, and also a Senior Research Cognitive Scientist at Princeton University.  He is a graduate of the University of Saskatchewan, Cambridge, and Toronto (Ph.D. in philosophy, 1977), and of the University of Michigan (M.S. in computer science, 1985).

Computational philosophy of science has become the new frontier in philosophy of science in recent years, and it promises to become essential to and definitive of twenty-first century philosophy of science.  Many philosophers are now jumping on the bandwagon by writing about the computational approach in philosophy of science, but only the exceptional authors who have actually designed, written and exhibited such computer systems for philosophy of science are considered in this book.  Paul Thagard, Pat Langley and Herbert Simon are among the few philosophers of science who have the requisite technical skills to make such contributions, and they have demonstrated such skills by actually writing systems.  Thagard’s work is also selected because by the closing decades of the twentieth century he had been one of the movement’s most prolific authors and most inventive academic philosophers of science.  Thagard follows the artificial intelligence (AI) approach and the psychological interpretation of AI systems initially proposed by Simon, who is one of the founding fathers of artificial intelligence.  In his Computational Philosophy of Science (1988) Thagard explicitly proposes a concept of philosophy of science that views the subject as cognitive psychology.  This contrasts with the established linguistic-analysis tradition that achieved ascendancy in twentieth-century academic philosophy of science, and that Hickey prefers for computational philosophy of science.

The analysis of language has often been characterized by a nominalist view, also called “extensionalism” or “referential theory of meaning.”  The nominalist view proposes a two-level semantics, which recognizes only the linguistic symbol, such as the word or sentence, and the objects or entities that the symbols reference.  Two-level semantics recognizes no third level consisting of the idea, concept, “intension” (as opposed to extension), proposition, or any other mental reality mediating between linguistic signs and nonlinguistic referenced objects.

Two-level semantics is the view typically held by the positivist philosophers, who rejected all mentalism in psychology and who preferred behaviorism.  Thagard explicitly rejects the behavioristic approach in psychology and prefers cognitive psychology, which recognizes mediating mental realities.  Two-level semantics is the view that is also characteristic of philosophers who accept the Russellian predicate calculus in symbolic logic, which has the notational convention that expresses existence claims by the logical quantifiers.  It is therefore in effect a nominalist Orwellian newspeak, in which predicate terms are semantically vacuous unless they are placed in the range of quantifiers, such that they reference entities.  If the predicates are quantified, the referenced entities are then called either “mental entities” or “abstract entities.”  Due to this consequent hypostatizing of concepts the positivist philosopher Nelson Goodman divides philosophers into nominalists and “platonists”, and identifies himself as a nominalist.  Logical positivists adopted the Russellian symbolic logic, although some like Rudolf Carnap and Alonzo Church recognized a three-level semantics with meanings associated with predicates without hypostatizing by quantifying predicates.

While rejecting behaviorism for cognitive psychology, Thagard does not reject the Russellian symbolic logic, and he refers to concepts as “mental entities”.  Conceivably his turn away from linguistic analysis and toward psychologism has been motivated by his recognition of the mentalistic semantical level.  Like Simon, Thagard seeks to investigate concepts by developing computer systems that he construes as analogues for the mental states, and then to hypothesize about the human cognitive processes of scientists on the basis of the computer system designs and procedures.  He refers to this new discipline as “computational philosophy of science”, which he defines as the attempt to understand the structure and growth of scientific knowledge in terms of computational and psychological structures.  He thus aims to offer new accounts both of the nature of theories and explanations and of the processes involved in their development.  And he distinguishes computational philosophy of science from general cognitive psychology by the former’s normative perspective.

In his Mind: Introduction to Cognitive Science (1996), intended as an undergraduate textbook, he states that the central hypothesis of cognitive science is that thinking can best be understood in terms both of representational structures in the mind and of computational procedures that operate on those structures. He labels this central hypothesis with the acronym “CRUM”, by which he means “Computational Representational Understanding of Mind.”  He says that this hypothesis assumes that the mind has mental representations analogous to data structures and computational procedures analogous to algorithms, such that computer programs using algorithms applied to the data structures can model the mind’s processes.

His How Scientists Explain Disease (1999) reveals some evolution in his thinking, although this book reports no new computer-system contribution to computational philosophy of science.  In this book he examines the historical development of the bacteriological explanation for peptic ulcers.  He explores how collaboration, communication, consensus, and funding are important for research, and he uses the investigation to propose an integration of psychological and sociological perspectives for a better understanding of scientific rationality.  Thus, interestingly, and unlike e.g. the neoclassical economists, he states that principles of rationality are not derived a priori, but should be developed in interaction with increasing understanding of both social and human cognitive processes.

Thagard’s computational philosophy of science addresses the functional topics: the aim of science, discovery, criticism and explanation.  He has created several computer systems for computational philosophy of science.  As of this writing none of them have produced mathematically expressed theories, and all of them have been applied to the reconstruction of past episodes in the history of science.  None have been applied to the contemporary state of any science, either to propose any new scientific theory or to resolve any current scientific theory-choice issue.

Thagard on Conceptual Change, Scientific Revolutions, and System PI

Thagard’s semantical views are set forth in the opening chapters of his Conceptual Revolutions (1992).  He says that previous work on scientific discovery, such as Scientific Discovery: Computational Explorations of the Creative Processes by Pat Langley, Herbert A. Simon, Gary L. Bradshaw, and Jan M. Zytkow in 1987, has neglected conceptual change.  (This important book is discussed below in the sections reporting on the views and systems developed by Langley, Simon, Zytkow, Bradshaw and colleagues.)  Pat Langley is presently Professor of Computer Science at the University of Auckland, New Zealand, Director of the Institute for the Study of Learning and Expertise, and, as Professor of Computing and Informatics, Head of the Computing Learning Laboratory at Arizona State University.  Bradshaw at the time of this writing is a member of the psychology department at Mississippi State University.  Zytkow (1944-2001) received a Ph.D. in philosophy of science from the University of Warsaw in 1979, and from 1996 was chairman of the computer science department at Wichita State University, where he founded the Machine Discovery Laboratory.  In his later years Zytkow focused on mechanized knowledge discovery by data mining with very large data sets.

Thagard proposes both a general semantical thesis about conceptual change in science and also a thesis specifically about theoretical terms.  He maintains that (1) kind-hierarchies and part-hierarchies give structure to conceptual systems, (2) relations of explanatory coherence give structure to propositional systems, and (3) scientific revolutions involve structural transformations in conceptual and propositional systems.  His philosophy of scientific criticism is his thesis of explanatory coherence, which is described separately below.  Consider firstly his general semantical thesis.

Thagard opposes his psychologistic account of conceptual change to the view that the development of scientific knowledge can be fully understood in terms of belief revision, the prevailing view in pragmatist analytic philosophy since Willard Van Orman Quine.  Thagard says concepts are mental representations that are learned, and that they are open, i.e., not defined in terms of necessary and sufficient conditions.  He maintains that a cognitive-psychology account of concepts and their organization in hierarchies shows how a theory of conceptual change can involve much more than belief revision.  He notes in passing that hierarchies are important in WORDNET, an electronic lexical reference system.  Thagard states that an understanding of conceptual revolutions requires seeing how concepts can fit together into conceptual systems and seeing what is involved in the revolutionary replacement of such systems.  He says conceptual systems consist of concepts organized into kind-hierarchies and part-hierarchies linked to one another by rules.  The idea of kind-hierarchies is not new; the third-century logician Porphyry proposed the tree-hierarchical arrangement since called the Porphyrian tree.  In his Semiotics and the Philosophy of Language (1984) the philosopher Umberto Eco calls the Porphyrian tree a “disguised encyclopedia”.  Linguists also recognize taxonomic hierarchies.

Thagard maintains that a conceptual system can be analyzed as a computational network of nodes with each node corresponding to a concept, and each connecting line in the network corresponding to a link between concepts.  The most dramatic changes involve the addition of new concepts and especially new rule-links and kind-links, where the new concepts and links replace old ones in the network.  Thagard calls the two most radical types of conceptual change “branch jumping” and “tree switching”, and says that neither can be accounted for by belief revision.  Branch jumping is a reorganization of hierarchies by shifting a concept from one branch of a hierarchical tree to another, and it is exemplified by the Copernican revolution in astronomy, where the earth was reclassified as a kind of planet instead of an object sui generis.  Tree switching is a more radical change, and consists of reorganization by changing the organizing principle of a hierarchical tree.  It is exemplified by Darwin’s reclassification of humans as animals while changing the organizing principle of biological classification to a historical one.  He also says that adopting a new conceptual system is more “holistic” than piecemeal belief revision.  Historically the term “holistic” meant unanalyzable, but clearly Thagard is not opposed to analysis; perhaps “systematic” might be a better term than “holistic”.
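
What “branch jumping” amounts to computationally may be clarified by a minimal sketch, assuming a simple child-to-parent representation of a kind-hierarchy; the representation and the concept names below are illustrative only, not Thagard’s code.

```python
# Illustrative sketch: a kind-hierarchy as a child -> parent mapping,
# with "branch jumping" as reassignment of a concept's kind-link.
kind_of = {
    "mercury": "planet",
    "mars": "planet",
    "planet": "celestial-object",
    "sun": "celestial-object",
    "earth": "sui-generis-object",   # pre-Copernican classification
}

def ancestors(concept, hierarchy):
    """Walk the kind-links upward from a concept toward the root."""
    chain = []
    while concept in hierarchy:
        concept = hierarchy[concept]
        chain.append(concept)
    return chain

def branch_jump(concept, new_kind, hierarchy):
    """Reclassify a concept under a different branch of the tree."""
    hierarchy[concept] = new_kind

branch_jump("earth", "planet", kind_of)   # the Copernican reclassification
assert ancestors("earth", kind_of) == ["planet", "celestial-object"]
```

Note that the single reassignment changes every kind-inference involving the shifted concept, which is why Thagard regards the change as more than piecemeal belief revision.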

It is difficult to imagine either “branch jumping” or “tree switching” without belief revision.  In his Computational Philosophy of Science Thagard references Quine’s metaphorical statements in his article “Two Dogmas of Empiricism” in From a Logical Point of View that science is a web of belief, a connected fabric of sentences that faces the tribunal of sense experience collectively, all susceptible to revision.  He agrees with Quine, but adds that Quine does not go far enough.  Thagard advocates a more procedural viewpoint and the abandonment of the fabric-of-sentences metaphor in favor of more complex cognitive structures and operations.  He concludes that Quine’s “web of belief” does not consist of beliefs, but rather consists of rules, concepts, and problem solutions, and the procedures for using them.

In Conceptual Revolutions Thagard maintains that semantical continuity is maintained through the conceptual change by the survival of links to other concepts, and he explicitly rejects Kuhn’s thesis that scientific revolutions are world-view changes.  He says that old and new theories have links to concepts not contained in the affected theories.  He cites by way of example that while Priestley and Lavoisier had very different conceptual systems describing combustion, there was an enormous amount of information on which they agreed concerning many experimental techniques and findings.  He also says that he agrees with Hanson’s thesis that observations are theory-laden, but he maintains that they are not theory determined.  He says that the key question is whether proponents of successive theories can agree on what counts as data, and that the doctrine that observation is theory-laden might be taken to count against such agreement, but that the doctrine only undermines the positivist thesis that there is a neutral observation language sharable by competing theories.  He states that his own position requires only that the proponents of different theories be able to appreciate each other’s experiments.  This view contrasts slightly with his earlier statement in his Computational Philosophy of Science, where he said that observation is inferential.  There he said that observation might be influenced by theory, but that the inferential processes in observation are not so loose as to allow us to make any observations we want.  And he said that there are few cases of disagreement about scientific observations, because all humans operate with the same sort of stimulus-driven inference mechanisms.

Consider next Thagard’s thesis specific to theoretical terms.  Both Thagard and Simon accept the distinction between theoretical and observation terms, and both use it in some of their computer systems.  In these systems typically the theoretical terms are those developed endogenously by an AI system and the observation terms are inputted exogenously into the system.  But in both their literatures the distinction between theoretical and observation terms has a philosophical significance apart from the roles in their systems.  Thagard says that new theoretical concepts arise by conceptual combination, and that new theoretical hypotheses, i.e., propositions containing theoretical terms, arise by abduction.  Abduction, in which he includes analogy, is a thesis in his philosophy of scientific discovery, which is described separately below.  Thagard’s belief in theoretical terms suggests a residual positivism in his philosophy of science.  But he attempts to distance himself from the positivists’ foundational agenda and their naturalistic philosophy of the semantics of language.  Unlike the positivists he rejects any strict or absolute distinction between theoretical and observable entities, and says that what counts as observable can change with technological advances.  And since Thagard is not a nominalist, he does not have the positivists’ problem with the meaningfulness of theoretical terms.

But he retains the distinction thus modified, because he believes that science has concepts intended to refer to a host of postulated entities and that it has propositions containing these theoretical concepts making such references.  Theoretical propositions have concepts that refer to nonobservable entities, and these propositions cannot be derived by empirical generalization due to the unavailability of any observed instances from which to generalize.  Yet he subscribes to the semantical thesis that all descriptive terms – observational terms as well as theoretical terms – acquire their meanings from their functional rôle in thinking.  Thus instead of a naturalistic semantics, he apparently admits to a kind of relativistic semantics.

However, while Thagard subscribes to a relativistic theory of semantics, he does not recognize the contemporary pragmatist view that a relativistic semantical view implies a relativistic ontology, which in turn implies that all entities are theoretical entities.  Quine calls relativistic ontological determination “ontological relativity”, and says that all entities are “posits” whether microphysical or macrophysical.  From the vantage of the contemporary pragmatist philosophy of language the philosophical distinction between theoretical and observation terms is anachronistic.  Functionally Thagard could retire these linguistic atavisms – “theoretical” and “observational” – if instead he used the terms “endogenous” and “exogenous” respectively to distinguish the descriptive terms developed by a system from those inputted into it.

Collaboratively with Keith J. Holyoak, Thagard developed an artificial-intelligence system called PI (an acronym for “Process of Induction”) that among other capabilities creates theoretical terms by conceptual combination. Hickey says that in the expository language of science all descriptive terms – not just Thagard’s theoretical terms – have associated with them concepts that are combinations of other concepts ultimately consisting of semantic values that are structured by the set of beliefs in which the concepts occur.

Thagard’s system PI is described in “Discovering the Wave Theory of Sound: Inductive Inference in the Context of Problem Solving” in IJCAI Proceedings (1985) and in his Computational Philosophy of Science.  PI is written in the LISP computer programming language.  In a simulation of the discovery of the wave theory of sound, PI created the theoretical concept of sound wave by combining the concepts of sound and wave.  The sound wave is deemed unobservable, although the instances of the perceived effects of water waves and sound waves are observable; indeed, contrary to Thagard, a simple standing sound wave can be observed in an enclosed smoke chamber.  In PI the combination is triggered when two active concepts have instances in common.  PI only retains such combinations when the constituent concepts produce differing expectations, as determined by the rules for them in PI.  In such cases PI reconciles the conflict in the direction of one of the two donor concepts.  In the case of the sound-wave combined concept the conflict is that water waves are observed on a two-dimensional water surface, while sound is perceived in three-dimensional space.  In PI the rule that sound spreads spherically from a source is “stronger” than the rule that waves spread in a single plane, where the “strength” of a rule is a parameter developed by the system.  Thus the combination of the three-dimensional wave is formed.  The meaningfulness of this theoretical term is unproblematic for Thagard, a post-positivist philosopher.
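
The mechanics of PI-style conceptual combination can be sketched as follows.  The sketch is illustrative only, not Thagard and Holyoak’s LISP code: the data layout, the instance names, and the strength values are all assumptions made for exposition.

```python
# Hedged sketch of conceptual combination in the spirit of PI.
# A combination is triggered when two active concepts share instances;
# conflicting rules are reconciled toward the stronger donor concept.

sound = {"name": "sound", "instances": {"thunderclap"},
         "rules": {"propagation": ("spreads-spherically", 0.9)}}
wave = {"name": "wave", "instances": {"thunderclap", "ripple"},
        "rules": {"propagation": ("spreads-in-a-plane", 0.6)}}

def combine(c1, c2):
    if not (c1["instances"] & c2["instances"]):
        return None  # no shared instances: combination not triggered
    rules = {}
    for slot in c1["rules"].keys() | c2["rules"].keys():
        r1, r2 = c1["rules"].get(slot), c2["rules"].get(slot)
        # where the donor concepts conflict, keep the stronger rule
        rules[slot] = max((r for r in (r1, r2) if r), key=lambda r: r[1])
    return {"name": c1["name"] + "-" + c2["name"],
            "instances": c1["instances"] & c2["instances"],
            "rules": rules}

sound_wave = combine(sound, wave)
# the spherical-propagation rule wins, as in PI's sound-wave example
```

The combined concept inherits the shared instances and the stronger propagation rule, yielding the three-dimensional wave described above.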

Thagard on Discovery by Analogy and Systems ACME and ARCS

In Conceptual Revolutions Thagard distinguishes three methods of scientific discovery.  They are (1) data-driven discovery by simple abduction to make empirical generalizations from observations and experimental results, (2) explanation-driven discovery using existential abduction and rule abduction to form theories referencing theoretical entities, and (3) coherence-driven discovery by making new theories due to the need to overcome internal contradictions in existing theories.  To date Thagard has offered no discovery-system design that creates new theories by the coherence-driven method, but he has implemented the other two methods in his systems.

Consider firstly data-driven generalization.  The central activity of artificial-intelligence system PI is problem solving with the goal of creating explanations.  The system represents knowledge consisting of concepts represented by nodes in a network and of propositions represented by rules linking the nodes.  Generalization is the formation of general statements, such as may have the simple form “Every X is Y.”  The creation of such rules by empirical generalization is implemented in PI, which takes into account both the number of instances supporting a generalization, and the background knowledge of the variety of kinds of instances referenced.
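
The two factors just mentioned, number of instances and variety of kinds, can be sketched in a small illustration.  The weighting formula below is an assumption made for exposition, not PI’s actual calculation, and the instance and kind names are hypothetical.

```python
# Illustrative sketch of data-driven generalization: confidence in
# "every X is Y" grows with the number of supporting instances and
# with the variety of kinds from which they are drawn.

def generalization_confidence(instances, known_kinds):
    """instances: list of (name, kind) pairs all observed to be Y."""
    count = len(instances)
    kinds_seen = {kind for _, kind in instances}
    variety = len(kinds_seen) / len(known_kinds)  # fraction of kinds sampled
    # more instances and more varied instances both raise confidence
    return (1 - 1 / (1 + count)) * variety

few_uniform = [("robin-1", "robin"), ("robin-2", "robin")]
many_varied = [("robin-1", "robin"), ("crow-1", "crow"), ("wren-1", "wren")]
kinds = {"robin", "crow", "wren"}

assert generalization_confidence(many_varied, kinds) > \
       generalization_confidence(few_uniform, kinds)
```

A generalization supported by varied kinds of instances thus earns more confidence than one supported by a larger but uniform sample of the same size, which is the background-knowledge effect described above.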

Consider secondly explanation-driven discovery by abduction.  By “abduction” Thagard means inference to a hypothesis that offers a possible explanation of some puzzling phenomenon.  The PI system contains three complex data structures, i.e., data types in LISP property lists, which are called “messages”, “concepts”, and “rules.”  The message type represents particular results of observations and inferences.  The concept type locates a concept in a hierarchical network of kinds and subkinds.  The concepts manage storage for abductive problem solving.  The rules type represents laws in the conditional “if…then” form, and also contains a measure of strength.  The system fires rules that lead from the set of starting conditions to the goal of explanation. 

Four types of abductive inference accomplish this goal: (1) Simple abduction, which produces hypotheses about individual objects.  These hypotheses are laws, i.e., empirical generalizations. (2) Existential abduction, which postulates the existence of formerly unknown objects.  This type results in theoretical terms referencing theoretical entities, which was discussed in the previous section above.  (3) Rule-forming abduction, which produces rules that explain other rules.  These rules are the theories that explain laws.  Since Thagard retains a version of the doctrine of theoretical terms referencing theoretical entities, he advocates the positivists’ traditional three-layered schema of the structure of scientific knowledge consisting of (a) observations expressed in statements of evidence, (b) laws based on generalization from the observations, and (c) theories, which explain laws.
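
The basic abductive step, inference to a hypothesis that would explain a puzzling phenomenon, can be sketched as follows.  The fragment is illustrative, not PI’s LISP rule firing, and the medical predicates (echoing the peptic-ulcer example mentioned earlier) are hypothetical.

```python
# Minimal sketch of simple abduction: given rules of the form
# "if H then E" and a puzzling observation E, propose each H
# that would explain it.

rules = [
    ("has-bacterial-infection", "has-ulcer"),
    ("takes-aspirin-chronically", "has-ulcer"),
    ("has-flu", "has-fever"),
]

def abduce(observation, rules):
    """Return candidate hypotheses whose rule-consequent matches."""
    return [h for h, e in rules if e == observation]

candidates = abduce("has-ulcer", rules)
# both ulcer-explaining hypotheses are proposed; choosing between such
# rivals is the task of explanatory coherence, discussed below
```

Note that abduction here generates hypotheses without certifying them; unlike deduction it is ampliative and fallible, which is why Thagard pairs it with a separate theory of evaluation.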

In Conceptual Revolutions Thagard also mentions a fourth type of abduction, (4) analogical abduction, which uses past cases of hypothesis formation to generate hypotheses similar to existing ones.  But he treats analogy at greater length in his Mental Leaps: Analogy in Creative Thought (1995) co-authored with Keith Holyoak.  In Mental Leaps the authors propose a general theory of analogical thinking, which they illustrate in a variety of applications drawn from a wide spectrum.  Thagard states that analogy is a kind of nondeductive logic, which he calls “analogic.”  Analogic contains two poles, as it were.  They are firstly the “source analogue”, which is the known domain that the investigator already understands in terms of familiar patterns, and secondly the “target analogue”, which is the unfamiliar domain that the investigator is trying to understand.  Analogic then consists in the way the investigator uses analogy to try to understand the targeted domain by seeing it in terms of the source domain.  Analogic involves a “mental leap”, because the two analogues may initially seem unrelated, but the act of making the analogy creates new connections between them. 

Thagard calls his theory of analogy a “multiconstraint theory”, because he identifies three regulating constraints, which are (1) similarity, (2) structure, and (3) purpose.  Firstly the analogy is guided by a direct similarity between the elements involved.  Secondly it is guided by proposed structural parallels between the rôles in the source and target domains.  And thirdly the exploration of the analogy is guided by the investigator’s goals, which provide the purpose for considering the analogy.  Thagard lists four purposes of analogies in science.  They are (1) discovery, (2) development, (3) evaluation, and (4) exposition.  Discovery is the formulation of a new hypothesis.  Development is the theoretical elaboration of the hypothesis.  Evaluation consists of arguments given for its acceptance.  And exposition is the communication of new ideas by comparing them to the old ones.  He notes that some would keep evaluation free of analogy, but he maintains that to do so would contravene the practice of several historic scientists.

Each of the three regulating constraints – similarity, structure, and purpose – is operative in four steps that Thagard distinguishes in the process of analogic: (1) selecting, (2) mapping, (3) evaluating, and (4) learning.  Firstly the investigator selects a source analogy often from memory.  Secondly he maps the source to the target to generate inferences about the target.  Thirdly he evaluates and adapts these inferences to take account of unique aspects of the target.  And finally he learns something more general from the success or failure of the analogy.

Thagard notes two computational approaches for the mechanization of analogic: the “symbolic” approach and the “connectionist” approach.  The symbolic systems represent explicit knowledge, while the connectionist systems can only represent knowledge implicitly as the strengths of weights associated with connected links of neuron-like units in networks.  Thagard says that his multiconstraint theory of analogy is implemented computationally as a kind of hybrid combining symbolic representations of explicit knowledge with connectionist processing.  Thagard and Holyoak have developed two analogic systems: ACME (Analogical Constraint Mapping Engine) and more recently ARCS (Analog Retrieval by Constraint Satisfaction).  In 1987 Thagard and Holyoak developed a procedure whereby a network could be used to perform analogical mapping by simultaneously satisfying the three constraints.  The result was the ACME system, which mechanizes the mapping function.  It creates a network when given the source and target analogues, and then a simple algorithm updates the activation of each unit in parallel, to determine which mapping hypothesis should be accepted.
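
The style of constraint-satisfaction mapping just described can be sketched in miniature, here with the familiar solar-system/atom analogy.  The sketch is an illustration of the general relaxation idea, not Holyoak and Thagard’s implementation; the mapping units, link weights, similarity bias, and update rule are all assumptions.

```python
# Hedged sketch of ACME-style analogical mapping. Units are candidate
# mappings between source and target elements; consistent pairs excite
# each other, and rival mappings of the same element inhibit each other.

units = ["sun=nucleus", "planet=electron", "sun=electron", "planet=nucleus"]
links = {  # symmetric weights: positive = excitatory, negative = inhibitory
    ("sun=nucleus", "planet=electron"): 0.5,
    ("sun=electron", "planet=nucleus"): 0.5,
    ("sun=nucleus", "sun=electron"): -0.6,        # same source element
    ("planet=electron", "planet=nucleus"): -0.6,  # same source element
}

def weight(a, b):
    return links.get((a, b)) or links.get((b, a)) or 0.0

def settle(units, favored, steps=50, rate=0.1):
    """Relax the network; `favored` gets a small similarity bias."""
    act = {u: 0.01 for u in units}
    act[favored] = 0.1
    for _ in range(steps):
        new = {}
        for u in units:   # synchronous update of every unit in parallel
            net = sum(weight(u, v) * act[v] for v in units if v != u)
            new[u] = max(-1.0, min(1.0, act[u] + rate * net))
        act = new
    return act

act = settle(units, favored="sun=nucleus")
# the mutually consistent pair of mappings settles to higher activation
# than its rivals, so "sun maps to nucleus" wins
```

The winning set of mapping hypotheses emerges from the interaction of the constraints rather than from any single criterion, which is the point of calling the theory “multiconstraint”.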

ARCS deals with the more difficult problem of retrieving an interesting and useful source analog from memory in response to a novel target analog, and it must do so without having to consider every potential source analog in the memory.  The capability of matching a given structure to those stored in memory that have semantic overlaps with it is facilitated by information from WORDNET, an electronic thesaurus in which a large part of the English language is encoded.  The output from ARCS is then passed to ACME for mapping.

Thagard on Criticism by “Explanatory Coherence”

Thagard’s theory of explanatory coherence set forth in detail in his Conceptual Revolutions describes procedures and criteria whereby scientists choose to abandon an old theory and its conceptual system, and accept a new one.  He sets forth principles for his system called ECHO that enable the assessment of the global coherence of an explanatory system.  Local coherence is a relation between two propositions.  The term “incohere” means that two propositions do not cohere; i.e., they resist holding together.  The terms “explanatory” and “analogous” are primitive terms in the system, and the following principles define the meaning of “coherence” and “incoherence” in the context of his principles, as paraphrased and summarized below:

Symmetry. If propositions P and Q cohere or incohere, then Q and P cohere or incohere respectively.

Coherence. The global explanatory coherence of a system of propositions depends on the pairwise local coherence of the propositions in the system.

Explanation. If a set of explanatory propositions explain proposition Q, then the explanatory propositions in the set cohere with Q, and each of the explanatory propositions cohere with one another.

Analogy. If P1 explains Q1, P2 explains Q2, and if the P’s are analogous to each other and the Q’s are analogous to each other, then the P’s cohere with each other, and the Q’s cohere with each other.

Data Priority. Propositions describing the results of observation are evidence propositions having independent acceptability.

Contradiction. Mutually contradictory propositions incohere.

Competition. Two propositions incohere if both explain the same evidence proposition and are not themselves explanatorily connected.

Acceptability. The acceptability of a proposition in a system of propositions depends on its coherence with the propositions in the system.  Furthermore the acceptability of a proposition that explains a whole set of evidence propositions is greater than that of a proposition that explains only a subset of the evidence.

In “Explanatory Coherence” in Behavioral and Brain Sciences (1989) and in several later papers Thagard’s theory of explanatory coherence is implemented in a system written in the LISP computer language that applies connectionist algorithms to a network of units. The system name “ECHO” is an acronym for “Explanatory Coherence by Harmony Optimization”.  Although elsewhere Thagard mentioned a coherence-driven discovery method, his ECHO system is a system of theory choice.  Before execution the operator of the system inputs the propositions for the conceptual systems considered by the system, and also inputs instructions identifying which hypothesis propositions explain which other propositions, and which propositions are observation reports and have evidence status.

In ECHO each proposition has associated with it two values: a weight value and an activation value.  A positive activation value represents a degree of acceptance of the hypothesis or evidence statement, and a negative value the degree of rejection.  The weight value represents the explanatory strength of the link between the propositions.  When one of the eight principles of explanatory coherence in the above list says that a proposition coheres with another, an excitatory link is established between the two propositions in the computer network.  And when one of the eight principles says that two propositions incohere, then an inhibitory link is established. 

In summary, in the ECHO system network: (1) A proposition is a unit in the network.  (2) Coherence is an excitatory link between units with activation and weight having a positive value, and incoherence is an inhibitory link with activation and weight having a negative value.  (3) Data priority is an excitatory link from a special evidence unit.  (4) Acceptability of a proposition is activation.  Prior to execution the operator has choices of parameter values that he inputs, which influence the system’s output.  One of these is the “tolerance” of the system for alternative competing theories, which is measured by the absolute value of the ratio of excitatory weights to inhibitory weights.  If the tolerance parameter is low, winning hypotheses will deactivate losers, and only the most coherent will be outputted.
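
How the input declarations might be compiled into such a network can be sketched as follows.  The structure is assumed for exposition, not Thagard’s LISP: the hypothesis and evidence names and the weight values are hypothetical, and only four of the principles (explanation, contradiction, competition, data priority) are represented.

```python
# Illustrative sketch: compiling ECHO-style input declarations into
# excitatory and inhibitory links between proposition units.

explains = [(["H1"], "E1"), (["H1"], "E2"), (["H2"], "E1")]
contradicts = [("H1", "H2")]
evidence = ["E1", "E2"]

EXCIT, INHIB, DATA = 0.04, -0.06, 0.05  # assumed weight parameters
links = {}

def add_link(a, b, w):
    key = frozenset((a, b))
    links[key] = links.get(key, 0.0) + w

# Explanation: explaining hypotheses cohere with the explained
# proposition and pairwise with one another.
for hyps, e in explains:
    for h in hyps:
        add_link(h, e, EXCIT)
    for h1 in hyps:
        for h2 in hyps:
            if h1 < h2:
                add_link(h1, h2, EXCIT)

# Contradiction: mutually contradictory propositions incohere.
for p, q in contradicts:
    add_link(p, q, INHIB)

# Competition: rival explainers of the same evidence incohere.
for hyps_a, e_a in explains:
    for hyps_b, e_b in explains:
        if e_a == e_b and hyps_a != hyps_b:
            for p in hyps_a:
                for q in hyps_b:
                    if p < q:
                        add_link(p, q, INHIB)

# Data priority: each evidence unit gets an excitatory link from a
# special evidence unit having independent acceptability.
for e in evidence:
    add_link("SPECIAL", e, DATA)
```

Here H1 and H2 end up doubly inhibited, once for contradiction and once for competition over E1, so the compiled network already encodes their rivalry before any activation is propagated.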

When ECHO runs, activation spreads from the special evidence unit to the data represented by evidence propositions, and then to the explanatory hypotheses.  The system firstly prefers hypotheses that explain a greater breadth of the evidence than their competitors.  Secondly it prefers those that explain with fewer propositions, i.e., that are simpler; thus the system prefers unified theories to those that explain the evidence with a special ad hoc hypothesis for each evidence statement explained.  Thagard says that by preferring theories that explain more, the system demonstrates the kind of conservatism seen in human scientists when selecting theories.  And he says that like human scientists ECHO rejects Popper’s naïve falsificationism, because ECHO does not give up a promising theory just because it has empirical problems, but rather makes rejection a matter of choosing among competing theories.

Thirdly, in addition to breadth and simplicity the system prefers hypotheses exhibiting analogy to other previously successful explanations.  In his Computational Philosophy of Science he notes that many philosophers of science would argue that analogy is at best relevant to the discovery of theories and has no bearing on their justification.  But he maintains that the historical record, such as Darwin’s defense of natural selection, shows the need to include analogy among the criteria for the best explanation among competing hypotheses.  In summary therefore, other things being equal, activation accrues to units corresponding to hypotheses that firstly explain more evidence, secondly provide simpler explanations, or thirdly are analogous to other explanatory hypotheses.  This is Thagard’s philosophy of scientific criticism.

These three criteria are also operative in his earlier PI system, where breadth is called “consilience.” During execution the ECHO system proceeds through a series of iterations adjusting the weights and activation levels, in order to maximize the coherence of the entire system of propositions. Thagard calls the network “holistic” in the sense that the activation of every unit can potentially affect every other unit linked to it by a path, however lengthy.  He reports that usually not more than one hundred cycles are needed to achieve stable optimization.  The maximized coherence value is calculated as the sum, over all links, of each link’s weight multiplied by the activation values of the two propositions it connects.
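The iterative settling process and the coherence calculation just described can be sketched as follows.  This is an illustrative Python sketch, not Thagard’s LISP code: the update rule, decay value, activation bounds, and the clamped evidence unit are assumptions patterned on standard connectionist updating, and for simplicity the sketch holds weights fixed and adjusts only activations:

```python
# Hypothetical sketch of an ECHO-style settling loop and coherence
# measure; rule and parameter values are illustrative assumptions.

def settle(units, links, decay=0.05, iters=200, tol=1e-4):
    """Iteratively update activations until the network stabilizes."""
    act = dict(units)
    for _ in range(iters):
        new = {}
        for u, a in act.items():
            if u == "EVIDENCE":
                new[u] = 1.0             # special evidence unit stays clamped
                continue
            # Net input: weighted sum of activations of linked units.
            net = sum(w * act[v] for (x, v), w in links.items() if x == u)
            if net > 0:
                a = a * (1 - decay) + net * (1.0 - a)    # push toward +1
            else:
                a = a * (1 - decay) + net * (a + 1.0)    # push toward -1
            new[u] = max(-1.0, min(1.0, a))
        if max(abs(new[u] - act[u]) for u in act) < tol:
            return new                   # stable optimization reached
        act = new
    return act

def coherence(act, links):
    # Sum over links of each weight times the activations of the two
    # linked propositions (each symmetric pair counted once).
    return sum(w * act[u] * act[v] for (u, v), w in links.items() if u < v)
```

Each unit’s activation decays toward zero while net input pushes it toward +1 (acceptance) or -1 (rejection), so hypotheses with stronger excitatory support suppress their inhibitorily linked rivals; the coherence value matches the weight-times-activations sum just described.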

Thagard applied his ECHO system to several revolutionary episodes in the history of science.  He lists (1) Lavoisier’s oxygen theory of combustion, (2) Darwin’s theory of the evolution of species, (3) Copernicus’ heliocentric astronomical theory of the planets, (4) Newton’s theory of gravitation, and (5) Hess’ geological theory of plate tectonics.  In reviewing his historical simulations Thagard reports that the criterion in ECHO having the largest contribution to explanatory coherence in scientific revolutions is explanatory breadth – the preference for the theory that explains more evidence than its competitors – as opposed to the other two criteria of simplicity and analogy.  ECHO seems best suited to evaluating nonmathematically expressed alternative theories, but it can also evaluate mathematical theories.

Thagard on Explanation and the Aim of Science

Thagard’s views on the three levels of explanation were mentioned above, but he has also made some other statements that warrant mention.  In Conceptual Revolutions he distinguishes six different approaches to the topic of scientific explanation in the philosophy of science literature, the first five of which he finds are also discussed in the artificial-intelligence literature.  The six types are: (1) deductive, (2) statistical, (3) schematic – which uses organized patterns, (4) analogical, (5) causal – which he opposes to spurious correlation, and (6) linguistic/pragmatic.  For the last he finds no counterpart in the artificial-intelligence literature.  Thagard says that he views these approaches as different aspects of explanation, and that what is needed is a theory of explanation that integrates all these aspects. He says that in artificial intelligence such integration is called “cognitive architecture”, by which is meant a general specification of the fundamental operations of thinking, and he references Herbert Simon’s “General Problem Solver” agenda.

The topic of the aim of science has special relevance to Thagard’s philosophy, since he defines computational philosophy of science as normative cognitive psychology.  Thagard’s discussions of his theory of inference to the best explanation, implemented in his PI system and set forth in Computational Philosophy of Science, and of its later restatement as the theory of optimized explanatory coherence, implemented in his ECHO system and set forth in Conceptual Revolutions, reveal much of his view of the aim of science.  His statement of the aim of science might be expressed as follows: The aim of science is to develop hypotheses with maximum explanatory coherence, including coherence with statements reporting available empirical findings. He notes that no rule relating concepts in a conceptual system will be true in isolation, but he maintains that the rules taken together as a whole in a conceptual system constituting an optimally coherent theory can provide a set of true descriptions.

In Computational Philosophy of Science Thagard states that his theory of explanatory coherence is compatible with both realist and nonrealist philosophies.  But he maintains that science aims not only to explain and predict phenomena, but furthermore to describe the world as it really is, and he explicitly advocates the philosophical thesis of scientific realism, which he equates to the thesis that science in general leads to truth.  Thagard’s concept of “scientific realism” seems acceptable as far as it goes, but it does not go far enough.  The meaning of “scientific realism” in the contemporary pragmatist philosophy of science is based upon the subordination of ontological claims to empirical criteria in science, a subordination that is due to the recognition and practice of ontological relativity.  Thagard’s acceptance of the distinction between observation and theoretical terms suggests that he does not admit the thesis of ontological relativity.

