Andreas Pommer
Affiliation: University of Copenhagen
Category: Philosophy
Keywords: Dynamics, Computation, Representation, Vehicle, Analogue
Date: Thursday 4th of September
Time: 18:00
Location: Gen. Henryk Dąbrowski Hall (006)
View the full session: Format & Vehicle
Cognitive neuroscience seeks to explain cognitive capacities by providing computational models which purport to characterise the representations and algorithms employed in exercising the capacity. The philosophical literature has thoroughly investigated the symbolic approach to computational explanations, but systems neuroscience is currently being transformed by the advancement of techniques to monitor the activity of up to thousands of neurons (Urai et al., 2022), accompanied by an explanatory framework termed neural population dynamics (NPD). It has been suggested that this framework holds promise to explain cognition (Barack & Krakauer, 2021), but it has received little philosophical consideration.
In this talk, I aim to remedy this lack of philosophical analysis of NPD. My analysis interacts with long-standing debates about distributed representations in connectionist networks (Ramsey, 1997; Shea, 2007) and with discussions of how representational format constrains possible computations (Maley, 2024; Mollo & Vernazzani, 2024). Ultimately, I argue that NPD employs analogue representations which represent information geometrically, thus enabling transformations for what I term geometric computation.
First, I argue that NPD provides analogue representational/algorithmic explanations. NPD adheres to the population doctrine, the view that representations are distributed across a population of neurons rather than carried by individual neurons (Saxena & Cunningham, 2019; Yuste, 2015). These latent representations in the population are extracted through dimensionality reduction techniques, which reveal low-dimensional patterns in the neural activity called manifolds (Cunningham & Yu, 2014). Drawing on discussions of distributed representations in connectionist networks (O'Brien & Opie, 2006; Shea, 2007), I argue that these manifolds are proper representational vehicles which abstract away from implementational details.
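The extraction of a manifold from population activity can be illustrated with a minimal sketch. This is a hypothetical toy example, not drawn from any cited study: a simulated population of 50 neurons is driven by a 2D latent variable, and principal component analysis (computed here via SVD) reveals that the high-dimensional recording collapses onto a low-dimensional manifold.

```python
import numpy as np

# Hypothetical sketch: 50 simulated neurons whose firing is driven by a
# 2D latent variable tracing a ring. Dimensionality reduction (PCA via SVD)
# recovers the low-dimensional structure -- the "manifold" -- from the
# high-dimensional population recording.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)            # time points
latent = np.stack([np.cos(t), np.sin(t)])      # 2D ring-shaped latent trajectory
mixing = rng.normal(size=(50, 2))              # each neuron mixes the latents
activity = mixing @ latent + 0.05 * rng.normal(size=(50, 200))  # 50 neurons x 200 times

# PCA: centre the activity and take the singular value decomposition.
centered = activity - activity.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
variance_explained = (S ** 2) / np.sum(S ** 2)

# Nearly all variance lies in the first two components: the 50-dimensional
# recording collapses onto a 2D manifold.
print(float(variance_explained[:2].sum()) > 0.95)
```

The point of the sketch is only that the manifold is a structure recovered from, but not identical to, the activity of the individual neurons.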
In arguing that manifolds are representational vehicles, I show that they map onto the structure of the external world in virtue of their geometric properties, endowing them with an analogue format (Lee et al., 2023). For instance, head-direction cells innately form a 1D manifold attractor ring mirroring the head-direction variable, and in accordance with the analogue format, the transformations of the population activity on the manifold mirror the relations between head directions (Chaudhuri et al., 2019; cf. Maley, 2011). Specifically, the transformations of the population activity are governed by the underlying dynamical landscape (Shenoy & Kao, 2021; Vyas et al., 2020), and as the population activity forms its trajectory on the manifold, it tokens the contents of the vehicle. Hereby, NPD provides representational/algorithmic explanations.
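The mirroring claim for the head-direction ring can be made concrete with a small sketch (an illustrative construction, not a model from the cited work): each head direction maps to a point on a unit ring in state space, and angular relations between head directions are preserved as arc distances along the ring.

```python
import numpy as np

# Hypothetical sketch of the analogue claim for head-direction cells: each
# head direction corresponds to a point on a 1D ring in population state
# space, and relations between directions (angular differences) are mirrored
# by distances along the ring.
def ring_state(theta):
    """Population state for head direction theta, on a unit ring."""
    return np.array([np.cos(theta), np.sin(theta)])

def arc_distance(a, b):
    """Distance along the unit ring between two population states."""
    cos_angle = np.clip(np.dot(a, b), -1.0, 1.0)
    return np.arccos(cos_angle)

# A 30-degree turn moves the state 30 degrees along the ring, regardless of
# the starting direction: the vehicle's geometry mirrors the represented
# variable.
turn = np.pi / 6
d1 = arc_distance(ring_state(0.0), ring_state(turn))
d2 = arc_distance(ring_state(1.0), ring_state(1.0 + turn))
print(bool(np.isclose(d1, turn) and np.isclose(d2, turn)))
```

This structural correspondence between transformations of the vehicle and relations among the represented values is what the analogue format amounts to in this case.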
Second, I argue that the analogue format of NPD provides a geometric type of computational explanation, distinct from traditional symbolic computational explanations. The representational format constrains the available information and consequently the available computations (Maley, 2023; Mollo & Vernazzani, 2024), and, having established an analogue format, it is possible to analyse the available information processes. This contrasts with traditional approaches that establish structural representations purely on the basis of a mirroring of mathematical relations (Shagrir, 2012). Traditional computational explanations are symbolic in the sense that they concern discrete representational vehicles upon which discrete transformations are performed. In contrast, NPD uses an analogue representational format in which the external variables are organised (represented) on the manifold and transformed in a continuous fashion as the population activity evolves.
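The discrete/continuous contrast can be illustrated schematically (the rules below are invented for illustration, not taken from the literature): a symbolic update replaces one discrete token with another, whereas an analogue update moves the population state continuously under a dynamical rule, here a simple attractor pulling the state toward a fixed point.

```python
import numpy as np

# Hypothetical contrast between the two styles of computation.
def symbolic_step(symbol):
    # Symbolic: a discrete transformation over discrete vehicles.
    table = {"LEFT": "RIGHT", "RIGHT": "LEFT"}
    return table[symbol]

def analogue_step(state, target, dt=0.01):
    # Analogue: the state flows continuously along a vector field
    # (an illustrative "dynamical landscape" with one attractor).
    return state + dt * (target - state)

state = np.array([1.0, 0.0])
target = np.array([0.0, 1.0])
for _ in range(500):
    state = analogue_step(state, target)

# The symbolic vehicle jumps between tokens; the analogue vehicle traces a
# continuous trajectory that converges on the attractor.
print(symbolic_step("LEFT"), bool(np.linalg.norm(state - target) < 0.05))
```

The analogue trajectory passes through intermediate states that are themselves contentful, whereas the symbolic update has no such intermediates.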
The resulting type of computation in NPD is geometric, and I characterise three of its distinctive features. First, the manifold mirrors the dimensionality of the task variables, e.g., a binary behavioural choice is represented by a 1D manifold (Mante et al., 2013). Second, the population activity is a neural state vector, e.g., the similarity of states encodes the strength of an associative memory (Grewe et al., 2017). Third, the dimensions of the manifold code separate variables, e.g., the same population of neurons may process variables independently if the variables are represented orthogonally in the state space (Okazawa et al., 2021). Through such examples, I argue that NPD represents information geometrically, and therefore provides a geometric type of computational explanation distinct from traditional symbolic computational explanation.
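Two of these features can be given a minimal sketch (a toy construction with invented axes, not data from the cited studies): similarity between neural state vectors is a graded quantity, and variables coded along orthogonal directions can be read out independently from one and the same population state.

```python
import numpy as np

# Hypothetical sketch of two geometric features of NPD.

# (i) Similarity of neural state vectors is graded: cosine similarity between
# population states, e.g. mirroring the graded strength of an association.
def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = np.array([1.0, 0.2, 0.0])
s2 = np.array([0.9, 0.3, 0.1])   # nearby state: high similarity
s3 = np.array([-0.5, 1.0, 0.0])  # distant state: low similarity

# (ii) Orthogonal coding: variable x along axis u, variable y along axis v,
# with u perpendicular to v. One population state carries both variables, and
# projecting onto u recovers x untouched by y, and vice versa.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
x, y = 0.7, -1.3
state = x * u + y * v

print(cosine(s1, s2) > cosine(s1, s3),
      bool(np.isclose(state @ u, x) and np.isclose(state @ v, y)))
```

On this picture, the computationally relevant properties are distances, angles, and subspaces of the state space rather than the identities of discrete symbols.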