Ilmari Hirvonen
Affiliation: University of Helsinki
Category: Philosophy
Keywords: facial analysis AI, pseudoscience, technology, pseudotechnology, social technology, social pseudotechnology, criminal behaviour prediction, personality measurement, emotion recognition
Date: Friday 5th of September
Time: 14:30
Location: Gen. Henryk Dąbrowski Hall (006)
Facial analysis AI employs automated facial and body analysis technologies to classify individuals based on various attributes, including emotions, personality, intelligence, gender, and race. It finds applications in diverse fields such as hiring, policing, and education. It has even been used to predict criminal tendencies, sexual orientation, and political affiliations (Roemmich et al., 2023).
Such facial analysis AI applications have recently come under intense criticism. They have, for example, been accused of being pseudoscientific (Ajunwa, 2021; Roemmich et al., 2023; Scheuerman et al., 2021; Sloane et al., 2022; Stark & Hutson, 2021). Critics argue that the applications lack a solid scientific foundation and that the attributes they classify may be shaped more by social and behavioural factors than by biological ones (Scheuerman et al., 2021). Critics have also warned, with good reason, that applying facial analysis AI for such purposes may have severe societal consequences (Crawford, 2021).
This paper assesses the accuracy of such accusations. In some cases, facial analysis AI does indeed count as pseudoscientific. In other cases, however, it is better examined as a form of pseudotechnology. For this purpose, the concept of pseudotechnology and its relationship to pseudoscience are specified.
Roughly speaking, the relationship between pseudoscience and pseudotechnology is the same as that between science and technology. Whereas science and pseudoscience aim to describe reality, technology and pseudotechnology try to solve practical problems. This broad understanding of technology is typically accepted in technology studies (see, e.g., Brooks, 1980, p. 66). Besides machines, therefore, computer programs, conceptual systems, and various practices can also be technologies or pseudotechnologies. The important thing is that they aim not merely to describe the world but to change it in order to achieve some goal.
This paper highlights some flaws and strengths of previous definitions of pseudotechnology (e.g., Bunge, 1976; Mahner, 2007; Hansson, 2020), and a new definition is presented. Here, pseudotechnology is understood as a means used to achieve some intended purpose where either (a) it is not justified that the means could achieve its purpose better than random chance through its operating principles, or (b) if it can achieve its intended purpose better than random chance, it is not justified that this happens through the features claimed to be crucial for its functioning.
The first part of the definition seeks to cover pseudotechnologies that do not work at all. The second aims to pick out pseudotechnologies that work—at least in some sense—better than chance but whose effectiveness is due to something quite different from what is claimed. Social pseudotechnologies, i.e. pseudotechnologies used to maintain, organise or shape social relations, are often of this kind.
In the previous literature on pseudotechnology, Hansson (2020) has observed that while pseudosciences are numerous, pseudotechnologies are considerably rarer. He suggests this is because pseudotechnologies typically reveal themselves immediately when used, unlike pseudosciences, which are more challenging to expose. It is argued here that Hansson is only half right: although pseudotechnologies are discussed considerably less than pseudosciences, several nevertheless exist.
Hansson is correct that pseudotechnologies falling under engineering and the natural sciences typically reveal themselves by failing to function in the intended manner. However, this is not the case with pseudotechnologies applied to humans, such as psychological and social pseudotechnologies. In such cases, it is often not entirely clear what the technology is precisely intended to do, whether that goal has been achieved, or why people believe it has been achieved. Moreover, some pseudotechnologies might seem to function as intended because they influence how people think or behave. Facial analysis AI, when it crosses into pseudotechnological territory, exemplifies this phenomenon.
It is also argued that pseudotechnologies are not necessarily, or at least not entirely, based on pseudoscience. For example, the technology companies MyInterview and Retorio claim that their artificial intelligence applications can infer the personality traits of job applicants from interview videos. The companies use the widely accepted Big Five theory of personality traits (Roemmich et al., 2023), and there is some evidence that, of the five personality traits, conscientiousness is indeed mildly associated with good job performance (Morgeson et al., 2007; Zell & Lesick, 2021). However, such associations are statistical, whereas recruitment decisions are made at the individual level. Individuals can compensate for the effects of their personality traits (Keltikangas-Järvinen, 2016), and recruitment therefore cannot justifiably be based on such general statistical evidence alone.
Moreover, scientific studies that have attempted to analyse personality traits from videos have shown much more modest results than MyInterview’s and Retorio’s advertisements suggest (Kachur et al., 2020; Cai & Liu, 2022). Journalists who have evaluated Retorio’s claims have found that, for example, wearing glasses in an interview video or standing in front of a bookshelf affected the personality analyses conducted by the AI application (Harlan & Schnuck, 2021). In other words, while there is genuine research behind the applications, namely the Big Five personality theory, the applications are social pseudotechnologies when used in recruitment, as there is no basis for claiming that they are any more valid than dice rolls in making recruitment decisions.
This paper examines three facial analysis AI applications and determines whether they qualify as science and technology or as pseudoscience and pseudotechnology. The applications are (1) predicting criminal behaviour, (2) personality assessment in hiring, and (3) identifying emotional states from video streams and facial photographs (Roemmich et al., 2023; Wu & Zhang, 2016; Xi et al., 2020; Mallon, 2006; Schiffer, 2020).
It is also argued that while there are clear instances of bad science, pseudoscience, and pseudotechnology within facial analysis AI, some researchers and developers exercise sufficient caution in making their claims. Therefore, facial analysis AI applications should not automatically be labelled pseudotechnological or pseudoscientific. Such labelling requires evaluating whether the applications can achieve their intended purposes and, if they can, how this happens.