I work mainly in philosophy of science, blending methods from philosophy and psychology to study scientific reasoning. Currently I spend most of my time thinking about cross-disciplinary theorizing between philosophy, neuroscience, and AI, and how the methods of all three fields help make complex systems, like brains and neural networks, intelligible. In all this work I take a pragmatic approach, and one of my main goals is to elaborate and defend that approach by thinking about methodology in philosophy of science and the psychology of scientific explanation.
Papers
(* = first/co-first authors)
Forthcoming
- Richmond, A. What really lives in the swamp? Kinds and the illustration of scientific reasoning. Philosophy of Science.
- Baker, B.,* Lange, R.,* Richmond, A.,* Kriegeskorte, N., Cao, R., Pitkow, X., Schwartz, O., & Achille, A. Use and usability: Three levels of neural representation. Neurons, Behavior, Data, and Theory.
- Richmond, A. (2025). What is a theory of neural representation for? Synthese, 205(14).
- Richmond, A. (2025). How computation explains. Mind & Language, 40(1).
- Richmond, A.,* Bowen, J. G., Kayssi, L. F., Küçük, K., Ravikumar, V., Şahin, Y., & Anderson, M. L. (2024). Imposing vs finding unity. Cognitive Neuroscience.
- Richmond, A. (2023). Commentary: Investigating the concept of representation in the neural and psychological sciences. Frontiers in Psychology, 14.
- Richmond, A. Pragmatist philosophy of cognitive science (under commission, Philosophy Compass)
- Richmond, A. Computational externalism (under review)
- Richmond, A. Experimental philosophy of science: beyond taxonomy (in preparation)
- Richmond, A. Representation in context: neuroscience and explainable AI (in preparation)
Drafts
Representation in explainable AI
Philosophers have extended traditional theories of representation to explainable AI (XAI), but they have overlooked both the unique goals of XAI research and the context-sensitivity of concept use. XAI needs its notion of representation to track pragmatically selected correlations between a model and another domain, but traditional theories of representation are devised for purposes like grounding determinate, objective content. Theories tailored to the philosophical context tend not to fit XAI’s purposes. And while my own view of representation fits XAI better, it is the pragmatist approach generally that shines in cross-disciplinary work. The main pitfall in cross-disciplinary work is to neglect the different purposes that concepts and theories serve in different contexts. By stressing those purposes (what do concepts help scientists do, and how?) pragmatists can tailor their theory to different contexts, while traditional approaches must build a new one from scratch.
Concept clarification as scientific methodology
Scientists devote high-profile articles and special issues to clarifying concepts like representation and computation. But they rarely discuss methodology: what do we aim to achieve by clarifying our concepts, and which methods serve those goals? I describe methods from psychology and philosophy, show how they serve different purposes (e.g., explaining why one uses a concept oneself vs. coordinating on a shared target of investigation), and argue for greater methodological diversity on those grounds.
Naturalism and Philosophy of Mind
Philosophers of mind tend to accept three claims. (1) Philosophy of mind should draw support from cognitive science. (2) Philosophy of mind should deliver a metaphysics of mind: a definition of the mind, or an account of what it is to be minded. (3) The most promising approach in philosophy of mind is computational and representational. I argue that these claims are only consistent on a naïve view of cognitive science and the explanations it provides — specifically, an understanding of those explanations as metaphysically loaded. Starting from a more nuanced understanding of cognitive science, I bring out the inconsistency of the three claims and discuss how we can move forward by dropping one of them.
Computational, Representational, and Functional Explanation: A Case Study in the Antikythera Mechanism
The concepts of computation, representation, and function have central explanatory roles in many different sciences. Cognitive science in particular explains the brain as a sort of computer, whose parts have functions, with one of those functions being to represent environmental variables. But these forms of explanation are not fully understood. In philosophy, this problem is often approached through toy examples of computations, representations, and functions, or through real case studies from cognitive science. But toy examples are often too simplistic to illuminate scientific explanation, and it can be hard to distil general lessons from real and complicated case studies. A useful middle ground can be found in the Antikythera mechanism — an ancient Greek astronomical device that is explained in computational, representational, and functional terms. I use the Antikythera mechanism to draw out the features of computational, representational, and functional explanation, and argue for a particular epistemic role and status for each.
Mysterianism, or the cosmic horror of the self
Mysterians argue that the nature of consciousness is fundamentally unknowable. I argue that this sense of "unknowability" is precisely the same one that characterizes the objects of cosmic or Lovecraftian horror. I use this comparison to bring out some puzzles about the aesthetics of consciousness, especially concerning its relationship to philosophical and scientific theories of consciousness.
Gamification and Domain Transfer
I discuss the use of gamification in pedagogy, highlighting a lack of consensus on best practices and some difficulties we face trying to construct those best practices using empirical research. I then show that gamification is an example of domain transfer, and derive a tentative set of best practices based on a broader understanding of domain transfer in science, business, and other domains.