``The Logic of Success'', British Journal for the Philosophy of Science, special millennium issue, 51, 2001, pp. 639-666.
[Ugly MS Word file (BJPS requires it)]
This is the paper to read if you only read one. It portrays computational learning theory as an alternative paradigm for the philosophy of science. Topics covered include underdetermination as complexity, the solution of infinite epistemic regresses of the sort that arise in naturalistic philosophies of science, and a priori, transcendental deductions of the central features of Kuhnian historiography from the logic of convergence. This is the most recent general overview of my position, except that I could only hint at the results I obtained later in ``A Close Shave with Realism''.
The Logic of Reliable Inquiry, Oxford: Oxford University Press, 1996.
This is my most comprehensive presentation of computational learning theory as a nonstandard foundation for the philosophy of science. Click on the title for the analytical contents of the book. The first six papers listed below came out after the book, however, and hence are not covered.
``Learning Theory and Epistemology'', forthcoming in Handbook of Epistemology, I. Niiniluoto, M. Sintonen, and J. Smolenski, eds., Dordrecht: Kluwer, 20
A review of standard learning theory results for epistemologists.
(with O. Schulte and C. Juhl) ``Learning Theory and the Philosophy of Science'', Philosophy of Science, 64, 1997, pp. 245-267.
Position piece superseded by ``The Logic of Success''.
``Reichenbach, Induction, and Discovery'', Erkenntnis, 35, 1991, pp. 123-149.
Develops my own position out of Reichenbach's by relaxing his assumptions, especially the assumption that all induction concerns probability!
``How to Do Things with an Infinite Regress'', completed manuscript.
[PDF file]
A fundamental problem for naturalistic epistemology is that reliability does not seem sufficient for knowledge if one has no reason to believe that one is reliable. This is often taken as an argument for coherentism. I respond in a different way: I invoke a methodological regress by asking another method to check whether your method will actually succeed. If the question arises again, invoke yet another method to check the success of the second, and so forth. Then I solve for the intrinsic worth of infinite methodological regresses. The idea is to find the best single-method performance that an infinite regress of methods could be reduced to, in the sense that the single method receives as inputs only the successive outputs or conjectures of the methods in the regress. I solve several different kinds of regresses in this way, with interesting observations about the viability of K. Popper's response to Duhem's problem.
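To make the reduction concrete, here is a minimal, purely illustrative Python sketch of the setup; the types, the finite truncation of the regress, and the trivial aggregation rule are my own placeholders for exposition, not the paper's formal apparatus.

```python
# Illustrative sketch only: the types, the finite truncation of the regress,
# and the trivial aggregation rule are assumptions, not the paper's formalism.
from typing import Callable, List, Sequence

Datum = int        # a single observation, e.g. 0 or 1
Conjecture = int   # a method's verdict at a stage, e.g. 0 or 1
Method = Callable[[Sequence[Datum]], Conjecture]  # finite data -> conjecture

def collapse(regress: List[Method],
             aggregate: Callable[[List[Conjecture]], Conjecture]) -> Method:
    """Reduce a (here, truncated) regress of methods to a single method.

    Method i+1 in the regress is read as assessing whether method i will
    succeed.  The data are consulted only to run the regress methods; the
    aggregation step sees nothing but their conjectures, which is the
    sense of reduction described above.
    """
    def single_method(data: Sequence[Datum]) -> Conjecture:
        n = len(data)
        # By stage n, the conjectures of the first n regress methods are in.
        stream = [regress[i](data) for i in range(min(n, len(regress)))]
        return aggregate(stream)
    return single_method

def follow_latest(stream: List[Conjecture]) -> Conjecture:
    """Placeholder aggregation rule: echo the most recent conjecture."""
    return stream[-1] if stream else 0
```

The question the paper answers is, roughly, how good an `aggregate` can be guaranteed to exist for various kinds of regresses.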
``Naturalism Logicized'', in After Popper, Kuhn and Feyerabend: Current Issues in Scientific Method, R. Nola and H. Sankey, eds., Dordrecht: Kluwer, 2000, pp. 177-210.
Contains the proofs for ``How to Do Things with an Infinite Regress'' and motivates the problem of solving infinite reliability regresses by referring to Larry Laudan's ``normative naturalism'' program for the philosophy of science, which urges us to check the instrumentality of new scientific methods by using old ones.
``A Close Shave with Realism: Ockham's Razor Derived from Efficient Convergence'', completed manuscript.
[PDF file]
One of my best papers. Based on an idea due to learning theorists R. Freivalds and C. Smith, I isolate a ``pure'' Ockham principle of which minimizing existential commitment, minimizing polynomial degree, finding the most restrictive conservation laws, and optimizing theoretical unity are instances. Then I show that choosing the Ockham hypothesis is necessary for minimizing the number of retractions or errors in the worst case prior to converging to the right answer. I also show that following Ockham's principle is sufficient for error minimization but not sufficient for retraction efficiency. Retraction efficiency is equivalent to the principle that one must retain one's current hypothesis until a ``surprise'' occurs. These results are pertinent to the ``realism debate'' because the Ockham principle must be satisfied (as the realist insists) for efficiency's sake even though the Ockham hypothesis might very well be wrong (the anti-realist's point). The key to the study is a topologically invariant notion of ``surprise complexity'' which characterizes the least worst-case transfinite bound achievable in answering a given empirical question.
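The retraction story can be illustrated with a toy counting problem of my own (not the paper's formal framework): nature presents particles one at a time, the question is how many will ever appear, the Ockham learner conjectures exactly the number seen so far, and a bolder learner anticipates one more until it loses patience.

```python
# Toy illustration (heavily simplified, not the paper's model of efficiency):
# compare retractions of an Ockham counter and an anticipating counter on the
# question "how many particles will ever appear?"

def ockham(observed: int, quiet: int) -> int:
    """Conjecture exactly the number of particles seen so far."""
    return observed

def anticipator(observed: int, quiet: int, patience: int = 3) -> int:
    """Guess one extra particle until 'patience' quiet stages pass."""
    return observed + 1 if quiet < patience else observed

def run(learner, stream):
    """Feed a finite 0/1 stream (1 = particle) and count conjecture changes."""
    conjectures, observed, quiet = [], 0, 0
    for datum in stream:
        if datum:
            observed, quiet = observed + 1, 0
        else:
            quiet += 1
        conjectures.append(learner(observed, quiet))
    retractions = sum(a != b for a, b in zip(conjectures, conjectures[1:]))
    return conjectures[-1], retractions

# A stream with two widely separated particles, then silence.
stream = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
print(run(ockham, stream))       # -> (2, 1): converges to 2 with 1 retraction
print(run(anticipator, stream))  # -> (2, 3): converges to 2 with 3 retractions
```

By spacing out the particles, nature can always force the anticipator into extra retractions, which is roughly the intuition behind the necessity of Ockham's razor for retraction efficiency.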
(with C. Juhl) ``Realism, Convergence, and Additivity'', Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association, D. Hull, M. Forbes, and R. Burian, eds., East Lansing: Philosophy of Science Association, 1994, pp. 181-190.
In this paper, we argue that Bayesian ``measure one'' convergence theorems allow for dogmatic exclusion of possibilities that would prevent one from solving the problem if they weren't ignored, which is biased against the anti-realist's position. The paper is superseded by Chapter 13 of The Logic of Reliable Inquiry (above).
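The loophole can be stated schematically (a standard gloss, assuming a countably additive prior P, data streams e with initial segments e|_n, and truth value \chi_H(e) of H on e):

```latex
% Measure-one convergence: the Bayesian converges to the truth value of H
% on a set of data streams of prior probability one ...
P\bigl(\{\, e : \lim_{n\to\infty} P(H \mid e|_n) = \chi_H(e) \,\}\bigr) = 1
% ... which is compatible with failure on every stream in a set S with
% P(S) = 0, even though S may contain exactly the possibilities the
% anti-realist cares about.
```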
(with C. Juhl and C. Glymour) ``Reliability, Realism, and Relativism'', in Reading Putnam, P. Clark, ed., London: Blackwell, 1994, pp. 98-161.
[Ugly MS Word file]
This paper emerged from my participation in the conference at St. Andrews celebrating Hilary Putnam's Gifford lectures. Putnam invented computational learning theory in a critique of Carnap's confirmation theory. In this paper, I provide some criticism of the morals Putnam drew from his result and apply similar techniques to show that internal realist truth is incomplete. I had tended to view Putnam's internal realism as an outgrowth of his learning theoretic ideas (which also show up in his technical work in mathematical logic), but at the Gifford conference he surprised me by saying that he viewed the three ideas as entirely separate. A major regret is that I submitted the paper late, so Putnam did not have a chance to respond to it in the volume.
(with C. Glymour) ``Inductive Inference from Theory Laden Data'', Journal of Philosophical Logic, 21, 1992, pp. 391-444.
This was an attempt to respond to Kuhnian worries from a learning theoretic perspective. A more mature perspective on the material is presented in chapter 16 of The Logic of Reliable Inquiry. I now think that the interpretation of Kuhn in ``The Logic of Success'' is both mathematically more tractable and a better fit to Kuhn's relatively tame views. Working on this paper is what convinced me to turn to topology as the fundamental framework for understanding the problem of induction, but the perspective does not yet appear in the paper.
``Iterated Belief Revision, Reliability, and Inductive Amnesia'', Erkenntnis, 50, 1998, pp. 11-58.
[PDF file without figures (Miktex problem)]
This is one of my best papers. I took the most recent proposals for iterated belief revision that have come out of the philosophical and artificial intelligence communities (e.g., W. Spohn, J. Pearl, C. Boutilier) and asked what none of their proponents has asked: do they help or hinder the search for truth? Using generalized versions of N. Goodman's ``grue'' predicate, I compare the learning powers of the proposed methods. It turns out that some of the methods are subject to ``inductive amnesia'', meaning that they can either predict the future or remember the past but not both! The resulting analysis implies surprisingly strong short-run recommendations concerning the proposed methods, providing a useful side-constraint on belief-revision proposals. [The figures are missing online because Miktex doesn't support the figure package I was using in Oztex.]
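For concreteness, Goodman's predicate generalized by indexing the color switch to a stage n can be glossed roughly as follows (a paraphrase; the paper's own family of predicates may differ in detail):

```latex
% An illustrative indexed version of Goodman's "grue" (a paraphrase, not
% necessarily the exact family used in the paper):
\mathrm{grue}_n(x) \;\equiv\;
  \bigl(x \text{ is examined before stage } n \,\wedge\, x \text{ is green}\bigr)
  \;\vee\;
  \bigl(x \text{ is examined at stage } n \text{ or later} \,\wedge\, x \text{ is blue}\bigr)
```

Deciding among all the grue_n hypotheses (together with plain ``green'') requires both retaining a record of what has already been observed and projecting what is to come, which is roughly where the predict-the-future versus remember-the-past trade-off gets its grip.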
``The Learning Power of Iterated Belief Revision'', in Proceedings of the Seventh TARK Conference, Itzhak Gilboa, ed., 1998, pp. 111-125.
A crisp precis of the preceding results, with a cute example from aerodynamics.
(with O. Schulte and V. Hendricks) ``Reliable Belief Revision'', in Logic and Scientific Methods, M. L. Dalla Chiara, et al., eds., Dordrecht: Kluwer, 1997.
My first investigation of belief revision theory. Some nice observations and distinctions, but no negative results. Still, it laid the necessary groundwork for hooking learning theory up to belief revision theory, without which the preceding papers wouldn't have been possible.
(with O. Schulte) ``Church's Thesis and Hume's Problem'', in Logic and Scientific Methods, M. L. Dalla Chiara, et al., eds., Dordrecht: Kluwer, 1997, pp. 383-398.
[PDF file]
Argues that uncomputability is just a species of the problem of induction, so that it should be taken seriously from the ground up in a unified theory of computable inquiry.
(with O. Schulte) ``The Computable Testability of Theories with Uncomputable Predictions'', Erkenntnis, 43, 1995, pp. 29-66.
This paper, which was reviewed in The Journal of Symbolic Logic, 61(3), p. 1049, solves for how deductively complex a theory can be if a computer is to determine its truth in a given sense. Conversely, we solve for how hard it can be to determine the truth of a theory of a given deductive complexity. The most surprising result is that a computer can refute a hypothesis with certainty (a la Popper) even though the predictions of the theory are infinitely uncomputable (i.e., not even hyperarithmetically definable). A corollary is that even though a Turing machine can refute the hypothesis with certainty, a computable Bayesian cannot even gradually converge to the truth value of the hypothesis. The results are analogous to, but not the same as, the basis theorems of recursion theory.
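For orientation, the two success criteria contrasted here can be stated roughly as follows (standard learning-theoretic formulations; the paper's official definitions may differ in bookkeeping details). Here M is a method, e ranges over data streams, e|_n is the first n data, and \chi_H(e) is the truth value of H on e.

```latex
% Refutation with certainty: M halts with verdict 0 exactly on the data
% streams that falsify H.
M \text{ refutes } H \text{ with certainty} \iff
  \forall e\;\bigl[\, \chi_H(e) = 0 \iff \exists n\; M(e|_n) = 0 \text{ and } M \text{ halts} \,\bigr]
% Gradual convergence: M's outputs merely approach the truth value in the limit.
M \text{ gradually converges to the truth value of } H \iff
  \forall e\;\; \lim_{n\to\infty} M(e|_n) = \chi_H(e)
```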
I observe that the computability of human behavior is an empirical rather than a metaphysical or mathematical question. The main result is that we can verify (in the limit) that we are computers if and only if we are not computers, which I call an ``empirical paradox''. The argument is reviewed in The Logic of Reliable Inquiry. Penrose responded in his sequel but didn't seem to get the point, in my opinion.
``Effective Epistemology, Psychology, and Artificial Intelligence'', in Acting and Reflecting, Wilfried Sieg, ed., New York: D. Reidel, 1990, pp. 115-126.
My first duty as an assistant professor (certainly not my idea) was a public debate on the merits of artificial intelligence with my late colleague and Nobel laureate Prof. Herb Simon. By the end of the ``debate'' I had learned never to underestimate Herb Simon! This is the write-up of the talk, published with Simon's pointed response. My thesis, which I still maintain, was that the best AI can be viewed as using procedures to explicate vaguely specified problems rather than as formulating efficient solutions to the problems those procedures solve.
(with C. Glymour) ``Thoroughly Modern Meno'', in Inference, Explanation, and Other Frustrations, John Earman, ed., University of California Press, 1992, pp. 3-23.
Plato as the first computational learning theorist.