Can AI help us understand numerical cognition and arithmetical knowledge?

Authors

Markus Pantsar

Affiliation: RWTH Aachen

Category: Philosophy

Keywords: numerical cognition, arithmetical knowledge, epistemology, artificial intelligence, cognitive modelling

Schedule & Location

Date: Friday 5th of September

Time: 16:00

Location: Gen. Henryk Dąbrowski Hall (006)

View the full session: AI

Abstract

In the past decade, many philosophers have tried to explain the nature of arithmetical knowledge with the help of empirical results concerning early numerical abilities (often called either quantical or proto-arithmetical abilities) (Clarke & Beck, 2021; Menary, 2015; Pantsar, 2024). The foundation of such approaches is the belief that understanding the cognitive processes involved in key stages of ontogenetic development, such as number concept acquisition (Beck, 2017), can help us understand how arithmetical knowledge has developed, and hence potentially also how it should be understood philosophically. This kind of empirically informed philosophy of arithmetic has been closely connected to progress in the cognitive sciences, whether in behavioural studies or in neuroscientific research (Dehaene, 2011; Knops, 2020).

Recently, this approach has been complemented by interesting studies from artificial intelligence (AI) research. These studies use artificial neural networks to emulate human numerical capacities, starting from the proto-arithmetical abilities of subitizing and estimating. An important pioneering experiment in this direction was reported by Stoianov and Zorzi (2012), who presented a deep artificial neural network with two-dimensional images containing dots of varying size and number, a standard method for studying pre-symbolic numerical abilities in humans (Dehaene, 2011; Xu & Spelke, 2000). The learning was unsupervised, so the network was not trained to focus on any specific aspect of the input. Stoianov and Zorzi found that the system learned to perform numerosity comparison tasks with behavioural signatures similar to those of the proto-arithmetical abilities of humans and non-human animals. In an interesting further result, the response profiles of the emergent “numerosity detectors” in the network resembled those reported in the lateral intraparietal area of macaque brains (Roitman et al., 2007; Stoianov & Zorzi, 2012). This type of research suggests that we can emulate early human non-symbolic numerical abilities with an AI, giving reason for optimism that AI methods could help us explain proto-arithmetical cognition (McClelland et al., 2016).
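The logic of such experiments can be illustrated with a deliberately minimal sketch (a toy linear autoencoder, not Stoianov and Zorzi's deep belief network; all sizes and rates here are arbitrary assumptions): the system is trained only to reconstruct random dot images, with numerosity never supplied as a label, and is afterwards probed for whether its hidden responses nevertheless track the number of dots.

```python
import numpy as np

rng = np.random.default_rng(0)

def dot_image(n, size=8):
    """Flattened binary image with n randomly placed 'dots' (toy stimulus)."""
    img = np.zeros(size * size)
    img[rng.choice(size * size, size=n, replace=False)] = 1.0
    return img

# Unsupervised phase: a tiny linear autoencoder learns to reconstruct
# dot images; the number of dots is never given as a training signal.
D, H, lr = 64, 4, 0.001          # input dim, hidden dim, learning rate
W = rng.normal(0.0, 0.1, (H, D))
for _ in range(3000):
    x = dot_image(int(rng.integers(1, 17)))
    h = W @ x                            # hidden code
    e = W.T @ h - x                      # reconstruction error
    W -= lr * (np.outer(h, e) + np.outer(W @ e, x))   # gradient step

# Probe: does summed hidden activity track numerosity anyway?
counts = list(range(1, 17))
resp = [np.mean([np.abs(W @ dot_image(n)).sum() for _ in range(30)])
        for n in counts]
r = np.corrcoef(counts, resp)[0, 1]
print(round(r, 2))
```

In this linear toy the monotone relation between activity and numerosity is close to trivial; the substantive finding in the original work was that tuned numerosity detectors with primate-like response profiles emerged in a deep network. The sketch only illustrates the experimental logic: unsupervised training on dot images, followed by probing for numerosity sensitivity.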

Further reasons for optimism have emerged from subsequent research. Testolin and colleagues (2020) report that a neural network could also develop a similar numerical ability after being trained on a dataset of “natural” visual stimuli derived from, among other things, groups of animals. The learning trajectories are highly similar to those reported in longitudinal studies of human proto-arithmetical abilities, and the final competence of the neural network approximated that of human adults (Halberda & Feigenson, 2008; Piazza et al., 2010). Such results provide interesting material for the study of human arithmetical cognition.

This kind of research has also been expanded beyond emulating proto-arithmetical abilities into counting processes. Fang and colleagues (2018) used supervised learning to teach a neural network a counting procedure. In the experiment, two-dimensional blobs were given to the network as input. The guiding idea is that the network “touches” the blobs while connecting the procedure to numeral words, simulating how children learn to count by pointing to objects. The teacher provided the correct counting procedure as the training data, but otherwise the neural networks were generic systems with no pre-trained ability with numerosities. The results show that after mastering the touching procedure, the network counted to six almost perfectly after 2,000 training trials. With more trials, it learned to count further (Fang et al., 2018).
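The touch-and-name routine can be caricatured in a few lines (a purely symbolic stand-in for Fang and colleagues' network, with hypothetical names throughout; the original system learned the routine from pixel-level input over thousands of trials): a teacher supplies correct counting sequences, the learner induces the numeral order, and counting a novel array then amounts to "touching" each blob while advancing one numeral per touch.

```python
# Toy sketch of the touch-and-name counting routine. All names here
# are illustrative, not taken from the paper.
NUMERALS = ["one", "two", "three", "four", "five", "six"]

def teacher_demo(n):
    """Teacher's training signal: the correct count sequence up to n."""
    return NUMERALS[:n]

# Supervised phase: induce which numeral follows which from the demos.
successor = {}
for n in range(2, 7):
    demo = teacher_demo(n)
    for a, b in zip(demo, demo[1:]):
        successor[a] = b

def count_blobs(blobs):
    """'Touch' each blob once, advancing one numeral word per touch."""
    word = None
    for _ in blobs:
        word = NUMERALS[0] if word is None else successor[word]
    return word

print(count_blobs(["blob"] * 4))  # -> four
```

The point of the caricature is only the structure of the task: the teacher provides the procedure, and competence consists in coordinating touches with the memorized numeral sequence.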

Fang and colleagues’ experiment was intended to simulate the way human children learn to count, in which gestures like pointing are advantageous (Alibali & DiRusso, 1999). In another experiment, Di Nuovo and McClelland (2019) extended this approach to include embodied aspects of learning counting procedures. A humanoid robot with functional five-fingered “hands” was trained to use the fingers to represent spoken numerals. The AI received proprioceptive information from the robot hands, intended to emulate tactile and proprioceptive sensory input in humans. Their analysis showed that the proprioceptive information improved accuracy in recognizing spoken numeral words: the AI formed a uniform number line faster than a control system without the robot hands. Similar results for a humanoid robot were reported by Pecyna et al. (2020). These findings have counterparts in the study of human numerical cognition, where finger counting procedures have been shown to help children learn to count (Bender & Beller, 2012).
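Why an extra proprioceptive channel can help is easy to see in a minimal toy comparison (a nearest-prototype classifier on synthetic features, not Di Nuovo and McClelland's recurrent architecture; every feature encoding and parameter below is an assumption made for illustration): a noisy "auditory" numeral feature is classified alone, and then together with a clean finger-configuration vector.

```python
import numpy as np

rng = np.random.default_rng(1)

def audio(n, noise=1.0):
    """Noisy one-hot 'auditory' feature for numeral n (1..5) -- toy encoding."""
    a = np.zeros(5)
    a[n - 1] = 1.0
    return a + rng.normal(0.0, noise, 5)

def fingers(n, weight=3.0):
    """Clean proprioceptive vector: n fingers raised (weighted, toy encoding)."""
    return weight * np.array([1.0] * n + [0.0] * (5 - n))

def classify(x, protos):
    """Nearest-prototype decision: return the numeral with closest prototype."""
    return min(protos, key=lambda k: np.sum((x - protos[k]) ** 2))

protos_audio = {n: np.eye(5)[n - 1] for n in range(1, 6)}
protos_both = {n: np.concatenate([np.eye(5)[n - 1], fingers(n)])
               for n in range(1, 6)}

trials = 200
hits_audio = hits_both = 0
for _ in range(trials):
    n = int(rng.integers(1, 6))
    a = audio(n)
    hits_audio += classify(a, protos_audio) == n
    hits_both += classify(np.concatenate([a, fingers(n)]), protos_both) == n

acc_audio, acc_both = hits_audio / trials, hits_both / trials
print(acc_audio, acc_both)
```

The toy simply shows that a reliable embodied signal can disambiguate a noisy auditory one; the substantive empirical claims about number-line formation belong to the original studies.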

Can such AI research help us understand the human cognitive processes involved in the development of arithmetical cognition, and hence potentially give us a better grasp of the characteristics of arithmetical knowledge (Pantsar, 2023)? In this talk, I take that possibility seriously. As recently argued by van Rooij et al. (2024), there are good reasons to reclaim AI as a theoretical tool for cognitive science, as it was originally conceived. Here I adopt that approach for the particular phenomenon of numerical cognition and pursue its philosophical relevance for epistemological questions concerning arithmetic. I argue that AI research can bring many benefits. For one, we can run AI experiments without being concerned about damaging humans and their cognitive development. One interesting question concerning the development of arithmetic is the influence of numeral words and symbols. However, comparative studies with different symbol systems, in particular, are difficult to conduct, given the importance of arithmetic for children’s educational trajectory. With AI, on the other hand, we can experiment with different symbol systems and their effect on developmental trajectories without similar ethical concerns.

Nevertheless, even with the potential that AI applications have, we need to face the fundamental limitations of artificial neural networks, including the black box problem. While there are ways to obtain information about the functioning of such systems, much of their processing is beyond our explanatory capacities even in principle. The big question, then, is how much information we can get out of AI systems from the perspective of explaining human cognition. I conclude the talk with critical remarks concerning such foundational issues.