My dissertation developed scientifically informed accounts of computation and representation in cognitive science. It extended those accounts to argue for a radical externalism about cognitive scientific theories and a shift in our approach to the philosophy of mind, and weighed in on current methodological debates in cognitive science. Most importantly, it defended a methodologically nominalist approach to the philosophy of cognitive science: one that investigates scientific explanation by setting aside any properties that scientific concepts might refer to, and focusing instead on the concepts themselves and their role in cognitive science’s explanatory economy — what they help scientists to do, and how.
How Computation Explains [arXiv]
I discuss the monumental shift in our understanding of the brain triggered by the project of computational cognitive science: the use of tools, concepts, and strategies from the computer sciences to investigate the brain. Philosophers have typically understood this project, and the computational explanations it provides, to assume that the brain is a computer, in a sense to be specified by the metaphysics of computation. That metaphysics, by revealing what exactly we attribute to the brain when we say it computes, is supposed to show how and why computational explanations work, and in doing so to provide a philosophical foundation for them. In contrast, I give an account of computational explanation that focuses on the resources computational explanations bring to bear on the study of the brain. I argue that computational explanations help cognitive scientists build perspicuous models that capture precisely the kinds of causal structures they seek, and that no metaphysics of computation is required to understand how they do this.
What is a Theory of Neural Representation for? [arXiv]
This paper explores the way representational notions figure into cognitive science, with a focus on neuroscience. Philosophers have a way of skipping over that question and going straight to another: what is neural representation? The way representational notions figure into cognitive science is not forgotten — the phrase “neural representation” usually means “representation as cognitive science understands that notion.” But eliding this phrase allows philosophers to focus more squarely on an account of neural representation itself. I argue that the wrong part of the question has been elided. Our ultimate questions, as philosophers of cognitive science, are about the function and epistemology of cognitive scientific explanations — in this case, explanations using representational notions. To answer those questions it is essential to understand the role the notion of representation plays in cognitive science — what it enables scientists to do or explain, and how — but not necessarily important to understand the nature of a property, NEURAL REPRESENTATION, that notion might pick out. I describe this approach, argue that it is a scientifically sensitive form of realism that philosophy of neuroscience can benefit from, and use it to give an account of representational explanation. Specifically, I propose that representational notions help us construct and understand models of the brain’s causal structure, and that we can see how they do this by examining their role in scientific cognition, i.e., without debating the nature of any property they might refer to.
What really lives in the swamp? Kinds and the illustration of scientific reasoning [arXiv]
It’s not clear what philosophers of science can learn from thought experiments. Consider Swampman: a physical duplicate of Donald Davidson that arises by chance after lightning strikes a swamp. Swampman is a popular counterexample to teleosemantics: he appears to have representation, but no selection history. So, apparently, it’s a mistake to define representation in selectional terms. Teleosemanticists respond that Swampman can’t tell us anything about representation because he’s simply not real, or even realistic: representation is a scientific kind, and if we take scientific kinds seriously, we can’t say that just because some imagined creature looks like typical representational systems, or could be explained in representational terms, it is a representational system. So Swampman isn’t a counterexample to the teleosemantic account of representational systems, because he isn’t an example of a representational system in the first place. I endorse this response to the Swampman counterexample, and especially its motivation: to take the scientific role of representational concepts seriously. But this motivation supports another way of understanding Swampman, according to which he is an illustration of scientific explanation, rather than an example of a representational system. I draw out the logic of this kind of illustration, compare it to some experimental paradigms in science, and argue that it provides a better way of understanding Swampman and other thought experiments in philosophy of science.
Computational Externalism [Draft available on request]
I argue that the brain does not have its computational structure intrinsically, but only in conjunction with its environment. I support this view (externalism) with a case study in the neuroscience and evolutionary biology of color vision, showing that which aspects of the brain's causal structure rise to the level of computation — which features of its causal structure count as part of its functional structure, or "wiring diagram" — depend on its environment. I connect the long-standing philosophical debate over externalism to issues in contemporary cognitive science, and draw some conclusions for pressing debates in neuroscience and psychology.
These are papers in early stages of development, but I can send drafts if you're interested.
Naturalism and the Philosophy of Mind
Philosophers of mind tend to accept three claims. (1) Philosophy of mind should draw support from cognitive science. (2) Philosophy of mind should deliver a metaphysics of mind: a definition of the mind, or an account of what it is to be minded. (3) The most promising approaches in philosophy of mind are computational and representational. I argue that these claims are consistent only on a naïve view of cognitive science and the explanations it provides — specifically, an understanding of those explanations as metaphysically loaded. Starting from a more nuanced understanding of cognitive science, I bring out the inconsistency of the three claims and discuss how we can move forward by dropping one of them.
Computational, Representational, and Functional Explanation: A Case Study in the Antikythera Mechanism
The concepts of computation, representation, and function have central explanatory roles in many different sciences. Cognitive science, in particular, explains the brain as a sort of computer, whose parts have functions, with one of those functions being to represent environmental variables. But these forms of explanation, and especially their epistemic role and status, are not fully understood. In philosophy, this problem is often approached either through toy examples of computation, representation, and function (and the associated explanations), or through real case studies of these forms of scientific explanation. Toy examples can often be too simplistic to provide any understanding of the real scientific explanations at issue. And it can be hard to distil general lessons from real case studies, which are highly complex and tend to support multiple interpretations. A useful and fascinating middle ground can be found in the Antikythera mechanism — an ancient Greek astronomical device that is explained in computational, representational, and functional terms, and for which the history and development of those explanations are well known. I use the Antikythera mechanism to draw out the features of computational, representational, and functional explanation, and argue for a particular epistemic role and status for each form of explanation.
Representations in AI: a fresh start for philosophy?
Philosophers can help make progress in explainable AI (XAI) by clarifying the concepts we use to understand AI behavior, like representation, computation, and function. But current approaches mostly port over existing views of those concepts from philosophy of mind and language, along with the goals, methods, and assumptions we take for granted when we give accounts of scientific concepts. I argue that the recent AI boom and the focus on XAI offer a rare opportunity for philosophers to revisit our basic commitments, and especially those goals, methods, and assumptions. And I try to take advantage of that opportunity, revealing some alternative goals, methods, and assumptions, and arguing that these alternatives are more appropriate to the project of XAI.
Mysterianism, or: the cosmic horror of the self
Mysterians argue that the nature of consciousness is fundamentally unknowable. I argue that the sense of "unknowability" they invoke is precisely the kind that characterizes the objects of cosmic or Lovecraftian horror. I then use this comparison to raise some questions about the relationship between theories of consciousness and the aesthetics of consciousness, and to bring out some further puzzles to do with those aesthetics.
Gamification and Domain Transfer
I discuss the use of gamification in pedagogy, highlighting a lack of consensus on best practices and some difficulties we face trying to construct those best practices using empirical research. I then show that gamification is an example of domain transfer, and derive a tentative set of best practices based on a broader understanding of domain transfer in science, business, and other fields.
Another negative program in experimental philosophy
Experimental philosophy (X-phi) has famously pursued a "negative program," arguing against the use of individual intuitions (about rightness and wrongness, kind membership, and so on) in philosophy. Better, X-phi has said, to use empirical measures of a more general population's intuitions, judgments, or categorizations as a basis for philosophical thought. I argue that there is a second pervasive tendency in philosophy that experimental philosophy can correct, which concerns not the source of our insight, but the kind of insight we seek. Philosophers, including experimental philosophers, often target scientific concepts, investigating the meaning or reference of those concepts as a way of understanding how they figure into scientific explanation. This can be informative, but as long as we're borrowing empirical methods, why not use the methods that cognitive psychology uses to study explanation? Cognitive psychologists studying explanation do not, usually, query subjects about which things belong in a category, like gene or function. They prompt subjects with different kinds of explanation, like ones using the concept gene or function, and examine those explanations’ cognitive effects more broadly (Lombrozo, 2009; Lombrozo et al., 2007; Lombrozo & Carey, 2006; Lombrozo & Gwynne, 2014). In short, they treat explanation like any other cognitive phenomenon, and study the role of concepts in explanations not by asking what those concepts refer to, but by observing how they contribute to that cognitive phenomenon. I show that this is a promising approach for experimental philosophy, and that it undermines another problematic a priori method in traditional philosophy.