Johan Heemskerk
Affiliation: University of Warwick
Category: Philosophy
Keywords: Philosophy of Science, Mechanistic explanation, Teleosemantics
Date: Tuesday 2nd of September
Time: 17:30
Location: Room 154
View the full session: Functional Analysis & Mechanisms
Philosophers interested in the problem of content (i.e. how to specify the relevant relation between a representation and its content) have, in recent years, made an “explanatory turn” (Schulte, 2023). This turn involves critically evaluating the content attributions made in scientific explanations to assess whether they embody an implicit, philosophically significant theory of content.
Many philosophers who pursue this methodology subscribe to a theoretical framework known as informational teleosemantics (e.g. Martinez, 2013; Artiga, 2020; Neander, 2017; Shea, 2018). Informational teleosemantics is the view that the content of a representation is given by (i) an informational link between the representation and an item in the environment, and (ii) the function of the system containing the representation.
In this paper, I claim that there is an implicit theory of content to be found, specifically, in cognitive neuroscience. The implicit theory mostly aligns with informational teleosemantics. However, while many theorists spell out (i) using a basic correlational measure of information, the implicit theory assumes a much stronger measure. Contemporary scientific methodologies for discovering representational content implicitly assume that the content of a representation is the item with which it shares maximal mutual information. This assumption is reflected, I argue, in methods such as spike-triggered averaging (STA), conditional mutual information (CMI) analysis, and dimensionality reduction, and it is approximated in classical studies. I call the implicit theory ‘maxMI’:
maxMI: Ex is the content of R iff R shares mutual information with each member of a set of items E1, …, En, of which Ex is a member, and the mutual information between R and Ex is maximal relative to the rest of E1, …, En.
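The selection rule maxMI states can be sketched as a toy computation over discrete samples: given recordings of a representation R alongside several candidate environmental items, content is assigned to the candidate sharing maximal mutual information with R. The data and variable names below are hypothetical illustrations, not drawn from the studies discussed.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X;Y) in bits, from paired samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# Hypothetical binary samples: R is a representation's activity;
# E1-E3 are candidate environmental items recorded on the same trials.
R  = [1, 1, 0, 0, 1, 0, 1, 0]
E1 = [1, 0, 1, 0, 1, 0, 1, 0]   # partially tracks R
E2 = [1, 1, 0, 0, 1, 0, 1, 0]   # tracks R perfectly
E3 = [0, 1, 1, 0, 0, 1, 1, 0]   # statistically independent of R

candidates = {"E1": E1, "E2": E2, "E3": E3}
# maxMI: content is the candidate with maximal mutual information with R.
content = max(candidates, key=lambda e: mutual_information(R, candidates[e]))
print(content)  # → E2
```

Here E2 carries 1 bit of mutual information with R (it duplicates R exactly), E1 carries roughly 0.19 bits, and E3 carries none, so maxMI assigns E2 as the content of R.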
In addition, maxMI invokes functions. However, while many philosophers spell out (ii) in terms of etiological functions (such as the notion defined by Larry Wright), maxMI relies on a non-etiological notion of function (such as the notion defined by Robert Cummins). Cummins functions are required to ensure that the information the system encodes from the environment is ‘available’ to the system itself.
The assumption of maxMI in cognitive neuroscience is philosophically significant because it shows how content can contribute to mechanistic explanations in cognitive neuroscience. This is a contested topic: not all philosophers think that content plays such a role (e.g. Frances Egan), taking it instead to be a heuristic gloss on mathematical or causal processes. I aim to show precisely how maxMI provides explanatorily salient contents by isolating information that is available to the system itself. By tracing information flows from the external environment through the cognitive system using information-theoretic measures, neuroscientists such as Philippe Schyns are able to discover the precise element of the stimulus decoded by downstream areas. This, I argue, is required for content to be mechanistically salient for the operation of the system.
I end by considering areas for future investigation. Is content indeterminacy a problem for maxMI, and if so, can maxMI address it? Does content isolated by maxMI fulfil the adequacy conditions placed on content by philosophers?