
A METHODOLOGY FOR MODELING SCIENTIFIC DISCOVERY
Sakir Kocabas *
uckoca@tritu.bitnet
Department of Artificial Intelligence
Marmara Research Center, PK 21, Gebze, Turkey

Abstract
Computational modeling of scientific discovery has been emerging as an important research field in artificial intelligence. Building theoretical models for scientific development has until recently been the exclusive domain for philosophers of science. With the advances in artificial intelligence and especially in machine learning, opportunities have arisen for researchers in this field to test the learning methods developed in modeling scientific discovery. In the last fifteen years, a number of systems have been developed modeling various discoveries ranging from 17th to 20th century physics and chemistry. However, a methodology for building and evaluating such models has still not been developed. This paper focuses on the elements of historical discovery models, and the methods for their systematic construction and evaluation.

* Also affiliated with the Department of Space Sciences and Technology, ITU, Maslak, Istanbul, Turkey.

1. Introduction
Recent research in the computational study of science has revealed a number of important aspects of science that were overlooked by the conventional study of science. Shrager and Langley (1990) describe the basic differences between the computational and the conventional philosophical approaches as follows: the conventional philosophical tradition focuses on the structure of scientific knowledge and emphasizes the evaluation of laws and theories, while the computational approach focuses on the processes of scientific discovery, including the activities of experimentation, data evaluation, and theory formation.

The distinction can be extended even further: the computational study of science is concerned not only with the issues of hypothesis formation, testing and verification, but also with a series of other issues related to scientific research. Kocabas (1992b) names more than a dozen major research tasks involved in the physical sciences. These range from formulating and selecting research goals, defining the research framework, and gathering and organising related knowledge, through selecting research strategies, methods, tools and techniques, to designing experiments, data collection, hypothesis and theory formation, theory revision, and producing scientific explanations. Any of these research tasks may involve a variety of planning, classification and evaluation problems.

The computational study of science is concerned more with the methodological issues in science than with the logico-philosophical issues which are the main concern of conventional studies. The main purpose of the former is to investigate the processes that lead to discovery in science, and eventually to build a model (or models) of scientific research which could be used as artificial research assistants.

Another discipline, social study of science, deals with the social dimension of science, e.g., with how scientific communities form and interact, how research projects are developed into research programmes, how these programmes evolve or terminate, and how research traditions develop in human societies. History of science, on the other hand, investigates scientific developments through the historical records, and provides a historical perspective to science.

Computational study of science draws ideas, perspectives, methods and data from conventional philosophical, social and historical studies, but it differs from these disciplines in some essential ways: 1) it has a medium, a computational model, for the reconstruction and analysis of a historical discovery; 2) using its models, it can investigate the possible alternative routes to the discovery; 3) it aims at assembling heuristics for developing models of scientific research for currently active research projects in science.

2. Types of Discovery
A methodology for the systematic evaluation of discovery models should first of all be capable of distinguishing between different types of discovery. In other words, it should provide a classification of discovery, so that one can identify a certain type in the history of science in relation to certain other discoveries.

Kocabas (1991c) introduces an implicit classification, which can be reformulated as follows: 1) Logico-Mathematical/Formal Discovery, 2) Theoretical Discovery, and 3) Empirical Discovery. This classification roughly parallels the categorization of knowledge by Kocabas (1992a), and reflects an order of diminishing abstraction.

Logico-Mathematical/Formal Discovery: This type of discovery takes place, as the name suggests, in the abstract domain of logic and mathematics. Formal discovery takes place in a formal domain which involves abstract entities, their classes and properties. It requires logico-mathematical knowledge as background knowledge for inductive and/or deductive inference on domain knowledge. Examples of this type of discovery are the mathematical techniques and formal theories ranging from the invention of the decimal system and algebra to modern mathematics and various axiom systems.

Theoretical Discovery: This type of discovery requires logico-mathematical, formal and theoretical knowledge, and in general results from theoretical analysis and synthesis. Some examples of theoretical discovery from the history of science are: a) the emergence of the special theory of relativity based on the Einstein-Lorentz transformations, b) Maxwell's theory of electromagnetism based on his equations, c) Yukawa's theory of nuclear forces and mesons, and d) Dirac's theory of charge symmetry and antiparticles.

Empirical Discovery: Empirical discovery requires experimental and observational data, as well as logico-mathematical and formal knowledge. Theoretical knowledge was not a prerequisite for the early empirical discoveries in the history of science, but in modern empirical research, such as oxide superconductivity and "cold fusion" experiments, theoretical domain knowledge is necessary. Empirical discovery can be further divided into heuristic and experimental/observational discovery.

Heuristic discoveries take place in attempts to find qualitative and/or quantitative relationships in experimental data. Some examples of such discoveries are: a) GLAUBER's (Langley, et al., 1987) formulation of the acid-alkali theory of 17th century chemistry, b) STAHL's (Zytkow & Simon, 1986) discovery of componential models of compounds in 18th century chemistry, c) the quantitative discoveries of simple physical laws in classical physics (e.g. Kepler's laws, Boyle's law, Ohm's law), and d) the discovery of new quantum properties and their value distributions over elementary particles in particle physics.

Experimental/Observational Discovery: This type of discovery is usually initiated by technological inventions or innovations. Two examples are the discovery of superconductivity by Onnes following his invention of a method to liquefy helium, and the discovery of new particle interactions after the invention of the cloud chamber.

A number of computational systems have been developed in the last 15 years for modeling these different types of discoveries.
Some of the earliest AI systems, such as Logic Theorist, were designed to prove theorems in logic. Among the more recent systems, AM (Lenat, 1979) stands out as a successful example in modeling mathematical discovery. The distinguishing characteristic of logico-mathematical discovery is that, in principle, it does not require experimentation or observation. Nor does it require knowledge of a physical domain. Lenat's (1983) EURISKO, in its applications to naval fleet design, evolution, and three-dimensional circuit design, is a good example of a formal discovery system.

Examples of theoretical discovery models are PI (Thagard & Holyoak, 1985), ECHO (Thagard & Nowak, 1990), and GALILEO (Zytkow, 1990). The first two systems can be better characterized as conceptual discovery systems, and as such are closer to formal discovery systems. GALILEO, on the other hand, is an interesting example of discovery by theoretical analysis. In the history of science there are some rather interesting theoretical discoveries, such as Maxwell's equations and the Einstein-Lorentz transformations. The scarcity of research on modeling theoretical discovery in AI remains striking.

Empirical discovery is an extensively studied area in AI, and a number of computational models have been designed to investigate its various aspects. Empirical discovery systems can be divided into two main classes, qualitative and quantitative systems, although this distinction is sometimes irrelevant. Among the qualitative discovery systems, GLAUBER (Langley, et al., 1987), STAHL (Zytkow & Simon, 1986), STAHLp (Rose & Langley, 1986), BR-3 (Kocabas, 1991a), KEKADA (Kulkarni & Simon, 1988), AbE (O'Rorke, Morris & Schulenburg, 1990), COAST (Rajamoney, 1990), MECHEM (Valdes-Perez, 199?), and PAULI (Valdes-Perez, in press) can be cited.

Among the quantitative discovery systems, BACON (Langley, et al., 1987), FAHRENHEIT (Zytkow, 1987) and IDS (Nordhausen & Langley, 1987) can be cited as prominent examples. BACON was the first successful example of quantitative discovery, and it has also attracted the interest of philosophers of science. The IDS system, on the other hand, integrates quantitative and qualitative methods.
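To make the flavour of such quantitative heuristics concrete, the sketch below (in Python, with invented names such as bacon_style_search) applies the kind of rule BACON is built around: if no existing term is constant over the data, define ratios and products of terms as new terms and test again. It is a minimal illustration of the idea under these assumptions, not a reconstruction of the published system.

    def nearly_constant(values, tol=0.01):
        """True if the values agree within a relative tolerance."""
        lo, hi = min(values), max(values)
        return hi - lo <= tol * max(abs(hi), abs(lo), 1e-12)

    def bacon_style_search(data, terms, max_depth=3):
        """Look for a term, or a product/ratio of terms, that is invariant over the data.

        data  : list of dicts mapping term names to numeric values (one dict per observation)
        terms : names of the directly observed terms
        """
        for _ in range(max_depth):
            for name in terms:
                if nearly_constant([row[name] for row in data]):
                    return name                      # candidate law: this term is constant
            new_terms = []
            for a in terms:                          # define ratio and product terms and recurse
                for b in terms:
                    if a < b:
                        for label, fn in ((a + "/" + b, lambda x, y: x / y),
                                          (a + "*" + b, lambda x, y: x * y)):
                            if label not in data[0]:
                                for row in data:
                                    row[label] = fn(row[a], row[b])
                                new_terms.append(label)
            terms = terms + new_terms
        return None

    # Idealized Boyle's-law data: the product of pressure and volume is constant.
    observations = [{"P": 1.0, "V": 24.0}, {"P": 2.0, "V": 12.0}, {"P": 3.0, "V": 8.0}]
    print(bacon_style_search(observations, ["P", "V"]))   # -> "P*V"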

For a systematic evaluation, then, computational models can be examined within the framework of this classification. In this way, we would know why a logico-mathematical or formal discovery system does not need experimental data, and why a theoretical discovery system needs logico-mathematical, formal and theoretical knowledge for its operations.

3. The Methodology
It should be stated at this stage that no discovery model can reflect every detail of a discovery process, except perhaps when the model itself is used in a real-life discovery. In this perspective, historical discovery models can at best be rational reconstructions of the discovery process. In building such models, it is essential to find out and assemble the knowledge that has played a significant role in the discovery.

3.1 Collecting Historical Records
Collecting information about historical discoveries is not an easy task. One can identify three main sources of historical record for scientific discovery: history of science books, scientific research reports, and the log books kept by scientists during the experiments leading to the discovery. Most of the current discovery systems rely on publications on the history of physics and chemistry dedicated to a certain period. Scientific research papers and reports can be used for reconstructing more recent discoveries (e.g. the ones made in this century). Kocabas (1992a) uses such research reports and articles in science journals for reconstructing the discoveries in oxide superconductivity. Log books are not easy to obtain, being personal property not meant for publication. It is no surprise that, among the discovery models, only Kulkarni & Simon's (1988) KEKADA is based on a log book (Hans Krebs' log book) for its reconstruction of the discovery of the urea cycle.

3.2 Assembling the Historical Records in Standard Formats
Building a complex discovery model may require a good deal of time and effort. The main problem in this task is to assemble the necessary knowledge which may have been used in the discovery. It seems best to develop a standard format to assemble this knowledge in a structured way. This format may include the following slots: Discovery (name, date and responsible scientist(s)), Historical Background, Available Technology, Empirical Knowledge, Theoretical Knowledge, Inputs, Algorithms, Heuristics, Results, Possible Alternative Results, and Effects of the Discovery. Figures 1 and 2 illustrate two examples of this structured representation.

This format provides a knowledge level view of the discovery, and allows the model to be constructed in a systematic way. It also helps to analyze and revise the model as necessary. Additionally, it makes visible the level of detail at which the model can be built for the reconstruction of the discovery.
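As one illustration of how such a format might be encoded for a discovery model, the following sketch (in Python; the DiscoveryRecord class and its field names are simply my rendering of the slot names above, and are an assumption rather than a prescription) holds one discovery per record:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DiscoveryRecord:
        """One historical discovery, assembled in the standard slot format."""
        name: str
        date: str
        scientists: List[str]
        historical_background: str = ""
        available_technology: List[str] = field(default_factory=list)
        empirical_knowledge: List[str] = field(default_factory=list)
        theoretical_knowledge: List[str] = field(default_factory=list)
        inputs: List[str] = field(default_factory=list)
        algorithms: List[str] = field(default_factory=list)
        heuristics: List[str] = field(default_factory=list)
        results: List[str] = field(default_factory=list)
        possible_alternative_results: List[str] = field(default_factory=list)
        effects_of_discovery: List[str] = field(default_factory=list)

    # A fragment of the neutrino example of Figure 1 in this encoding:
    neutrino = DiscoveryRecord(
        name="Neutrino",
        date="1931 (postulated); 1953 (detected)",
        scientists=["W. Pauli", "E. Fermi", "C. Cowan", "F. Reines"],
        inputs=["conservation laws for charge, energy, momentum and spin",
                "observed reaction: n --> p + e"],
        results=["postulate a neutral, spin-1/2 particle emitted in beta-decay"],
    )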

Figure 1. Example of formatted historical data for the discovery of the neutrino in particle physics.

-----------------------------------------------------------------------

Discovery Event
Discovery: Neutrino
Date of Discovery: 1931, W. Pauli; 1934, E. Fermi; 1953, C. Cowan & F. Reines
Source: Ne'eman & Kirsh (1986), pp. 67-69.

Background :

Historical Background/Problems: After the discovery of the neutron in 1932, the picture of the atomic world seemed complete. Four elementary particles were known: the photon, electron, proton and neutron. The nucleus was composed of protons and neutrons, and the behavior of the electrons around the nucleus was well explained by quantum mechanics. But there were unsolved problems, such as the process of beta-decay and the nature of the forces that hold the components of the nucleus together. Beta-decay appeared to contradict the basic conservation laws of physics (conservation of energy and angular momentum).


H-3 --> He-3 + e        (tritium beta-decay)

n --> p + e             (neutron beta-decay)

Calculations showed that in beta-decay, the mass difference between the original and the produced nuclei is equal to the maximal energy value that the electron can have, yet only a small minority of beta-particles (electrons) actually possess this energy. Most beta particles were emitted with less energy than indicated by the actual mass difference, and were not accompanied by a photon which could compensate for the energy difference.

Theoretical Background: The theories which attempted to explain the structure of the atomic nucleus predicted the existence of additional particles, and experimental physicists searched for these particles. Moreover, Dirac's 1928 relativistic wave equation had implied the existence of a particle with quantum properties opposite to those of the electron. (According to Dirac's equation, for every charged particle there is an antiparticle.) In 1931, Pauli proposed that during beta-decay an additional particle is emitted. This particle carried part of the energy liberated in the process. Its mass would be zero or very small, and it would be electrically neutral.

Types of Empirical Knowledge and Technology: Radioactivity, particle detectors, cloud chambers and particle reactions initiated by cosmic rays. (Early in the century, physicists who studied radioactivity observed that electroscopes discharged slowly even when there was no radiation in the vicinity. At first this was attributed to natural radioactivity, but in 1910 V. Hess showed that the radiation grew stronger in the upper atmosphere. This radiation was later called cosmic rays. Later, it was realized that these were fast particles, mostly protons.) Unlike beta-decay, alpha decay was found to conserve energy, momentum and angular momentum (spin).

Discovery Process
Discovery Goals: Investigate beta-decay, and explain why this type of reaction appears to violate the conservation of mass/energy and spin.

Inputs: Conservation laws concerning charge, energy, momentum and spin. A set of valid and observed particle reactions involving the electron, proton, neutron, positron, and gamma rays:


H-2 + gamma --> H-1 + n        (1934, Chadwick & Goldhaber)

p + p --> p + p
n --> p + e
e + /e --> gamma
gamma --> e + /e

The following quantum properties about the particles were known:

Particle mass (MeV) charge spin
-------------------------------------------
gamma 0 0 1
e 0.51 -1 1/2
p 938.26 1 1/2
n 939.55 0 1/2
/e 0.51 1 1/2
-------------------------------------------

Algorithms:

The observed reaction

n --> p + e

violates the mass/energy and spin conservation laws. If we consider
the balance of charge and spin values:

0 = 1 + (-1) electrical charge

1/2 =/= 1/2 + 1/2 spin values

the inequality in the case of spin conservation can be turned into
an equality by introducing a hypothetical particle x, with zero
electrical charge and 1/2 spin:

n --> p + e + x

whose charge and spin balance would be

0 = 1 + (-1) + 0            electrical charge
1/2 = 1/2 + 1/2 - 1/2       spin values (the spins add vectorially).

Outputs: Postulation of a new particle (later called the anti-neutrino, /nu), with zero rest mass, zero electrical charge and spin 1/2.

Secondary Results: The new particle was named the neutrino (nu) by Fermi. Later, the neutrino emitted in beta-decay was accepted as an anti-particle. Hence the subsequent formulation of beta-decay was

n --> p + e + /nu.

By using the principle of symmetry, the possibility of a reverse reaction was also considered:

nu + p --> n + e

Alternative Outputs: -
Theory Development: The discovery of the neutrino completed the knowledge
about beta-decay. The validity of the basic conservation laws of physics
was once again supported by the reactions of elementary particles.
In 1953 Konopinski & Mahmoud postulated the lepton number, and
proposed the conservation of lepton number analogous to the
conservation of charge.

Explanations
Types of Explanations: Theoretical, deductive, abductive.

New Research Problems: What was the rest mass of the neutrino? Did this particle have an antiparticle? How could this be proved? Would the neutrino react with other particles? What would be the results of such reactions?

New Research Directions
The postulation of the neutrino led to the search for this particle, and it took about twenty years to prove its existence in the laboratory in an indirect way:


n --> p + e + /nu
/nu + p --> n + /e
/e + e --> 2 gamma
2 gamma + Cd --> Cd + n gamma

The gamma emissions from the last two reactions were detected, one after the other within milliseconds, by a series of photomultipliers.
-----------------------------------------------------------------------
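The Algorithms slot of the neutrino record above can be read as a small generate-and-test step: check the conservation laws over a reaction and, if the charge balances but the spins cannot, postulate a hypothetical neutral spin-1/2 particle. A minimal sketch of that step, in the spirit of systems such as BR-3 (the function names and the simplified spin bookkeeping are my own assumptions, not the published algorithm):

    # Quantum properties (charge, spin) of the known particles, as tabulated above.
    PROPERTIES = {"gamma": (0, 1.0), "e": (-1, 0.5), "p": (1, 0.5), "n": (0, 0.5), "/e": (1, 0.5)}

    def totals(particles, index):
        """Sum a quantum property (0 = charge, 1 = spin) over a list of particles."""
        return sum(PROPERTIES[name][index] for name in particles)

    def postulate_missing_particle(reaction):
        """If charge balances but the spins differ by a half-integer amount, propose a
        neutral spin-1/2 particle x that restores the balance.  Spins are treated here
        as simple bookkeeping quantities, which is a deliberate simplification."""
        lhs, rhs = reaction
        charge_ok = totals(lhs, 0) == totals(rhs, 0)
        spin_gap = totals(lhs, 1) - totals(rhs, 1)
        if charge_ok and spin_gap % 1 != 0:          # an odd number of spin-1/2 particles
            return {"name": "x", "charge": 0, "spin": 0.5}
        return None

    beta_decay = (["n"], ["p", "e"])
    print(postulate_missing_particle(beta_decay))
    # -> {'name': 'x', 'charge': 0, 'spin': 0.5}, i.e. the (anti)neutrino hypothesis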

Figure 2. Example of formatted data for the discovery of the Y-Ba-Cu-O superconductor in oxide superconductivity.

-----------------------------------------------------------------------

Discovery Event

Discovery: Y-Ba-Cu-O oxide superconductor
Date of Discovery: 16th February, 1987. Paul Chu et al.
Source: Physics Today?

Background :

Historical Background/Problems:

Theoretical Background: Several theories on superconductivity had been developed. One of these theories was the BCS theory which explains the phenomenon in terms of the conservation of angular and translational momentum.

Current theoretical knowledge implied the impossibility of oxide superconductors with higher Tcs than metal or alloy superconductors. (The theories were based on the accumulated experimental knowledge.)

Knowledge about the relationships between heat conductivity and electrical conductivity.

Types of Empirical Knowledge and Technology: Oxide superconductors had been known since 1973 when D. Johnston discovered superconductivity in LiTi2O4 at temperatures up to 13.7K. In 1975, A. Sleight discovered superconductivity in BaPb(1-x)Bi(x)O3 with a Tc up to 13K.

In 1986, La-Ba-Cu-O superconductor with Tc around 35K was discovered by Bednorz and Mueller.

Knowledge about elements in the Periodic Table. Processes for synthesis of double and triple oxide compounds. Element substitutions in such compounds.

Discovery Process

Discovery Goals: Search for oxides with higher Tcs than La-Ba-Cu-O compound.

Inputs: La-Ba-Cu-O superconducting compound, chemical elements, knowledge about the synthesis of double and triple oxides.

Algorithms: Element substitutions in the La-Ba-Cu-O compound. Select an element from the Periodic Table with electronic properties similar to La, and substitute it for La in La-Ba-Cu-O under the relevant experimental conditions.

Outputs: Substitution of Y for La in La-Ba-Cu-O, and the discovery of Y-Ba-Cu-O superconductor.

Secondary Results: Other substitutions may be possible to yield better oxide superconductors.

Alternative Outputs: < to be filled >

Theory Development: The hypothesis that the substances with the highest Tcs are metal alloys was falsified once more.

Explanations

Types of Explanations: The role of crystal structure in oxide superconductivity was discussed. Explanations were based on electron-phonon interactions.
New Research Problems: Could there be other oxide compounds with higher Tcs?
New Research Directions: Search for oxides with higher Tcs. Explain oxide superconductivity.

-----------------------------------------------------------------------
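The Algorithms slot of the Y-Ba-Cu-O record above is likewise a generate-and-test loop over candidate element substitutions. The sketch below makes this explicit; the similarity list and the synthesize_and_measure_tc stub are invented placeholders for the experimental step rather than part of the historical record (only the 93 K value for Y-Ba-Cu-O corresponds to the reported discovery):

    # Elements chemically similar to lanthanum (an illustrative, incomplete list).
    SIMILAR_TO_LA = ["Y", "Sc", "Ce", "Nd", "Gd"]

    def synthesize_and_measure_tc(compound):
        """Stub standing in for the experimental step: synthesize the oxide and
        measure its critical temperature in kelvin.  Only the Y-Ba-Cu-O value is
        the historically reported one; other compounds default to 0 for illustration."""
        return {"Y-Ba-Cu-O": 93.0}.get(compound, 0.0)

    def substitution_search(base="La-Ba-Cu-O", best_tc=35.0):
        """Substitute candidate elements for La and keep any compound with a higher Tc."""
        discoveries = []
        for element in SIMILAR_TO_LA:
            candidate = base.replace("La", element, 1)
            tc = synthesize_and_measure_tc(candidate)
            if tc > best_tc:
                discoveries.append((candidate, tc))
        return discoveries

    print(substitution_search())   # -> [('Y-Ba-Cu-O', 93.0)]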

4. The Discovery Model
Computational models of discovery need to be evaluated in accordance with their type, i.e., as formal, theoretical or empirical models. However, there are some common points of evaluation. These can be listed as follows: research goals; methods of knowledge representation; the size, order and role of initial knowledge; theory revision and search methods; methods of learning and discovery; the generality of the system's methods; and the system's predictive abilities. We can now look at these one by one.

4.1. Research Goals
The research goals of a discovery system vary with its domain of interest and the methods that it employs. Some systems, such as AM (Lenat, 1979), EURISKO (Lenat, 1983), and GLAUBER (Langley, et al., 1987), aim at discovering new concepts, relationships, heuristics or general hypotheses. Some other systems, such as BR-3 (Kocabas, 1991a) and AbE (O'Rorke, Morris & Schulenburg, 1990), start with an impasse and aim at consistency and/or completeness as their main goal, while discovery is a by-product of their activities. Yet others, such as COAST (Rajamoney, 1990) and GENSIM/HYPGENE (Karp, 1990), search for consistent explanations, and GALILEO (Zytkow, 1990) for more expressive laws.

The research methods of a system must be adequate for its research goals. For example, a consistency-oriented system must inevitably have theory revision capabilities, and a completeness-oriented system must have the ability to generate and test new concepts and hypotheses. A few systems, such as KEKADA (Kulkarni & Simon, 1988) and CER (Kocabas, 1989; 1992b), are capable of generating their own research goals by detecting problem states (inconsistencies, incompleteness and anomalies) in their knowledge base. The system description of a computational model must clearly state its goals, or how they are generated.

4.2. Knowledge Representation Methods
Knowledge representation remains an important issue in computational models, as it affects the efficiency of a system's methods of search, learning and discovery. Early models (e.g., GLAUBER, STAHL and BR-3) employ relatively simple representation methods such as list structures and predicate expressions. Recent discovery systems (e.g., AbE, COAST, IDS) employ more structured knowledge representation schemes such as frames and qualitative process schemas, often in combination with predicate logic representation. Each representation scheme has its own advantages and disadvantages in terms of implementation (see, e.g., Kocabas, 1991b for details). Therefore, the choice of knowledge representation schemes, or their integration, is an important issue in the design of computational models. Consequently, the system description of a model must explicitly state the knowledge representation methods that it employs, and how they are integrated.
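To illustrate the contrast, the same piece of chemical knowledge can be written as a flat predicate-style expression or as a frame-like structure; which form serves a discovery system better depends on how its search, learning and revision operators index the knowledge. (The encoding below is illustrative only and is not taken from any of the cited systems.)

    # Predicate-style (list) representation, as in the earlier systems:
    reaction_as_list = ("reacts", ["HCl", "NaOH"], ["NaCl", "H2O"])

    # Frame-style representation, closer to the structured schemes of later systems:
    reaction_as_frame = {
        "type": "reaction",
        "reactants": [{"substance": "HCl", "class": "acid"},
                      {"substance": "NaOH", "class": "alkali"}],
        "products":  [{"substance": "NaCl", "class": "salt"},
                      {"substance": "H2O", "class": "water"}],
        "generalizes_to": "acid + alkali --> salt + water",
    }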


4.3. The Order, Size, and the Role of Initial Knowledge
Initially, discovery systems were divided into two broad groups: data-driven and theory-driven systems. Later, this distinction began to appear superficial, since some systems (e.g., STAHLp and BR-3) start as data-driven models and acquire theory-driven characteristics during their operation. The size of the initial knowledge, and how much of it is utilized by a discovery system, is an important feature in the correct evaluation of that system. Some systems process data incrementally (e.g., STAHLp, BR-3, AbE, COAST), and the order of the data given to the system affects its behavior (see, e.g., Kocabas, 1991a). If the discovery model is an incremental system, its description must evaluate the effects of data order. Data size is important in the evaluation of any discovery model for testing the effectiveness of its search methods.

4.4. Theory Revision and Search Methods
One of the prominent problems that haunt discovery systems with large search spaces is the control of search. Whatever search methods are used, the size of the effectively used initial knowledge base is a significant indicator of the system's dimensions. Models with large search spaces utilize a number of search control methods. These can be as widely varied as logical constraints (as in STAHLp), algebraic constraints (as in BR-3 and GALILEO), general rules (as in EURISKO, BACON, KEKADA and IDS), and domain constraints (as in BR-3, KEKADA, AbE, COAST and GENSIM/HYPGENE). The description of a computational model must include its search methods, and explain why those particular methods were used rather than others.

Theory revision is becoming an indispensable feature of discovery models. This is in line with the understanding that most scientific discoveries result from generating and testing hypotheses. If the discovery system has theory revision capabilities, these must first be described in detail in general terms, and then explained with a particular example. Where, how and why the system's search and theory revision methods fail also needs to be explained. Artificial data can be used to test the effectiveness of a system's theory revision and search methods.
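A theory revision capability of the kind described here can be sketched as a loop that tests the current hypotheses against incoming evidence, retracts whichever hypothesis the evidence contradicts, and records the anomaly as a new research problem. The sketch below is schematic (hypotheses as simple "all members of a class have a property" tuples, with an invented contradiction test), not a reconstruction of any particular system:

    def contradicts(hypothesis, observation):
        """Illustrative test: a hypothesis 'all members of a class have a property'
        is contradicted by a member of that class which lacks the property."""
        _, klass, prop = hypothesis
        return observation["class"] == klass and prop not in observation["properties"]

    def revise(theory, observation):
        """Retract contradicted hypotheses and record each anomaly as a research problem."""
        kept, problems = [], []
        for hypothesis in theory:
            if contradicts(hypothesis, observation):
                problems.append(("explain-anomaly", hypothesis, observation["name"]))
            else:
                kept.append(hypothesis)
        return kept, problems

    theory = [("all", "metal-oxide", "non-superconducting")]
    observation = {"name": "La-Ba-Cu-O", "class": "metal-oxide",
                   "properties": {"superconducting"}}
    print(revise(theory, observation))
    # -> ([], [('explain-anomaly', ('all', 'metal-oxide', 'non-superconducting'), 'La-Ba-Cu-O')])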

4.5. Learning and Discovery Methods
Discovery systems utilize deductive and inductive methods, but so far there is no discovery model that uses analogical reasoning in a non-trivial sense. Logico-mathematical, formal and theoretical discovery systems such as AM, EURISKO, PI, GALILEO and ECHO rely extensively on deductive methods, while BACON employs inductive methods. Systems like STAHL, STAHLp, BR-3, KEKADA and IDS employ both inductive and deductive methods. A system's discovery methods cannot be separated from its search and theory revision methods.

4.6. Generality of Methods
Another important metric in the evaluation of a discovery model has been the generality of its discovery and search methods. Some discovery models, such as EURISKO and BACON, rely on rather general heuristics for their discoveries. Similarly, BR-3 employs algebraic rules to reduce its search space. However, there seems to be a limit to the use of such general heuristics, as systems with larger and more structured domain knowledge must inevitably use domain heuristics to constrain search. Therefore, the size and type of the discovery model must be considered in evaluating the generality of a system's methods.

4.7. Predictive Abilities
Predictive ability can be defined as a system's ability to generate a set of propositions which were undecidable prior to the discovery but are decidable afterwards. Predictive ability is an important feature of theoretical and empirical systems. Does the system's predictive ability improve as it discovers new concepts, hypotheses or relationships? The answer to this question is also an indication of how effectively the system integrates and uses the knowledge that it has discovered. The discoveries of systems like BACON and GALILEO are validly applicable to an indefinite number of physical states; however, by themselves, these systems do not apply their knowledge to physical states. IDS, FAHRENHEIT and BR-3, on the other hand, effectively utilize the knowledge they have discovered in new problem states.
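As a small illustration of this notion: once a BACON-like system has fixed the constant in an invariant such as P*V = k, propositions about unobserved states become decidable. (A hypothetical two-line use of the discovered law; the numbers are invented.)

    PV_CONSTANT = 24.0                 # constant found for P*V from the observed data

    def predict_volume(pressure):
        """Decidable only after the invariant P*V = k has been discovered."""
        return PV_CONSTANT / pressure

    print(predict_volume(4.0))         # -> 6.0, a state never observed during discovery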

5. Conclusion
Computational modeling of scientific discovery has been emerging as an important research area in artificial intelligence, and the number of computational models is steadily increasing. A methodology for the systematic evaluation of these systems is necessary, not only for researchers in this field, but also for interested philosophers and historians of science. First of all, a methodology needs to be developed for building historical discovery models. Secondly, a method of classifying such models is needed for systematic evaluation. Then a set of evaluation criteria needs to be identified, which can include research goals, knowledge representation methods, the role of initial knowledge, theory revision and search methods, learning and discovery methods, generality, and the system's predictive abilities. In this paper we have discussed these issues and provided examples of the methods to be used.

References
Darden, L. (1987). Viewing the history of science as compiled hindsight. The AI Magazine 8, No. 2, (33-42).

Karp, P.D. (1990). Hypothesis formation as design. In: J. Shrager and P. Langley (Eds.) Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo, CA.

Kocabas, S. (1989). Functional Categorization of knowledge: Applications in modeling scientific discovery. PhD Thesis, Department of Electronic and Electrical Engineering, King's College London, University of London.

Kocabas, S. (1991a). Conflict resolution as discovery in particle physics. Machine Learning, 6, 277-309.

Kocabas, S. (1991b). A review of learning. The Knowledge Engineering Review, 6, 3.

Kocabas, S. (1991c). Computational models of scientific discovery. The Knowledge Engineering Review, 6, 259-305.

Kocabas, S. (1992a). Functional categorization of knowledge. AAAI Spring Symposium Series, 25-27 March 1992, Stanford, CA.

Kocabas, S. (1992b). Four levels of learning and representation in modeling scientific discovery. First Turkish Symposium on AI and Neural Networks, 25-26 June, Bilkent, Ankara.

Kulkarni, D. & Simon, H.A. (1988). The processes of scientific discovery. Cognitive Science, 12, 277-309.

Langley, P., Simon, H.A., Bradshaw, G.L., Zytkow, J.M. (1987). Scientific discovery: Computational explorations of the creative processes. Cambridge, MA: The MIT Press.

Lenat, D.B. (1979). On automated scientific theory formation: A case study using the AM program. In J. Hayes, D. Michie and L.I. Mikulich (Eds.) Machine Intellligence 9, (251-283). New York: Halstead.

Lenat, D.B. (1983). EURISKO: A program that learns new heuristics and domain concepts. Artificial Intelligence 21, Nos. 1-2, (61-98).

Nordhausen, B. & Langley, P. (1987). Towards an integrated discovery system. Proceedings of the Tenth International Joint Conference on Artificial Intelligence, 198-200.

O'Rorke, P., Morris, S. & Schulenburg, D. (1990). Theory formation by abstraction. In: J. Shrager and P. Langley (Eds.) Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo, CA.

Rajamoney, S.A. (1990). A computational approach to theory revision. In: J. Shrager and P. Langley (Eds.) Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo, CA.

Thagard, P. & Holyoak, K. (1985). Discovering the wave theory of sound: Inductive inference in the context of problem solving. Proceedings of the Ninth International Joint Conference on Artificial Intelligence, 610-612.

Thagard, P. & Nowak, G. (1990). The conceptual structure of the geological revolution. In: J. Shrager and P. Langley (Eds.) Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo, CA.

Valdes-Perez, R. (199?). MECHEM ......

Valdes-Perez, R. (in press). Discovery of conserved properties in particle physics: A comparison of two models. Machine Learning.

Zytkow, J. (1987). Combining many searches in the FAHRENHEIT discovery system. Proceedings of the Fourth International Workshop on Machine Learning, Los Altos, CA:
Morgan Kaufmann, 281-287.

Zytkow, J. (1990). Deriving laws through analysis of process and equations. In: J. Shrager and P. Langley (Eds.) Computational Models of Scientific Discovery and Theory Formation. Morgan Kaufmann, San Mateo, CA.

Zytkow, J. & Simon, H.A. (1986). A theory of historical discovery: The construction of componential models. Machine Learning, 1, 107-137.


References on Superconductivity
Khurana, A. (1987a). Search and discovery: Superconductivity seen above the boiling point of nitrogen. Physics Today, April, 1987, 17-23.

Khurana, A. (1987b). Search and discovery: Bednorz and Mueller win Nobel Prize for new superconducting materials. Physics Today, December, 1987, 17-19.


References on Physics
Griffiths, D. (1987). Introduction to Elementary Particles. New York: John Wiley & Sons.

Ne'eman, Y. and Kirsh, Y. (1986). The Particle Hunters. Cambridge University Press.

Pais, A. (1986). Inward Bound: Of Matter and Forces in the Physical World. Oxford: Oxford University Press.
