How does the brain process zero models?

Authors

Sonia Ramotowska, Maria Spychalska, Oliver Bott, Fabian Schlotterbeck and Maria Aloni

Affiliations: University of Amsterdam, Max Planck Institute for Psycholinguistics, Bielefeld University, University of Tübingen, University of Amsterdam

Category: Linguistics

Keywords: quantifiers, EEG, neglect zero, verification, sentence processing

Schedule & Location

Date: Tuesday 2nd of September

Time: 14:30

Location: GSSR Plenary Hall (268)

Session: Quantifiers, Plurals & Numbers

Abstract

Introduction

During language comprehension, listeners readily build a model of what is being said. While some models are easy to construct, others require mental effort. Aloni (2022) proposed that listeners automatically exclude zero models, i.e., models that satisfy a sentence via the empty set. She hypothesized that this neglect-zero tendency is a cognitive bias responsible for several pragmatic inferences, e.g., free choice and distributivity inferences, and the non-empty restrictor and scope inferences for quantifiers illustrated in (1).

(1) Fewer than three dots are blue.
    a. ⇝ There is at least one blue dot. (non-empty scope)
    b. ⇝ There are dots. (non-empty restrictor)
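To make the tension concrete, here is an illustrative sketch (not from the paper; the dot models and function names are ours) of how the truth conditions of (1) can hold in a zero model even though the neglect-zero inference (1-a) fails there:

```python
# Hypothetical sketch: truth conditions of (1) vs. the inferences (1-a)/(1-b),
# evaluated against toy "dot" models represented as lists of colors.

def fewer_than_three_blue(dots):
    """Truth conditions of 'Fewer than three dots are blue.'"""
    return sum(1 for d in dots if d == "blue") < 3

def non_empty_scope(dots):
    """Inference (1-a): there is at least one blue dot."""
    return any(d == "blue" for d in dots)

def non_empty_restrictor(dots):
    """Inference (1-b): there are dots at all."""
    return len(dots) > 0

zero_model = ["red", "red", "green"]  # a model with zero blue dots

# The sentence is true in the zero model (0 < 3)...
assert fewer_than_three_blue(zero_model)
# ...yet the non-empty-scope inference (1-a) is violated there,
# which is exactly the kind of model the neglect-zero bias is
# claimed to exclude.
assert not non_empty_scope(zero_model)
assert non_empty_restrictor(zero_model)
```

The zero model thus verifies the downward-entailing sentence while falsifying its neglect-zero inference, which is the configuration the experiment probes.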

This tendency may conflict with the truth conditions of sentences containing downward-entailing quantifiers. For example, (1) is also true when zero dots are blue; this scenario, however, is at odds with inference (1-a). In behavioral experiments, Bott, Schlotterbeck, and Klein (2019) and Balbach, Schlotterbeck, Wang, and Bott (2022) showed that the verification of downward-entailing quantifiers is particularly difficult when participants encounter zero models in which inferences like (1-a) are violated: participants make more mistakes in these cases, and their truth-value judgments take longer. This observation is in line with their theoretical proposal that upward- and downward-entailing quantifiers are verified by different algorithms. This account predicts the delay in the processing of downward-entailing quantifiers shown in many behavioral (Deschamps, Agmon, Loewenstein, & Grodzinsky, 2015; Agmon, Loewenstein, & Grodzinsky, 2019; Schlotterbeck, Ramotowska, van Maanen, & Szymanik, 2020) and EEG experiments (Urbach & Kutas, 2010; Urbach, DeLong, & Kutas, 2015; Nieuwland, 2016; Augurzky, Schlotterbeck, & Ulrich, 2020), as well as an additional penalty in zero models for downward-entailing quantifiers only.

This EEG study tests the predictions of the neglect-zero (Aloni, 2022) and quantifier-verification (Bott et al., 2019) accounts. We contrasted two numerical quantifiers, fewer than three and more than three, in two types of context: one involving a zero model and one control context. Each context contained one verifier and one falsifier model. We predicted that the presence of the zero model would incur a processing cost for fewer than three; that is, expectations about the verifier model would be modulated by the presence of zero models in the context, depending on the quantifier.

Methods

The experiment was conducted in Dutch. We tested 33 native speakers (3 were removed due to artifacts).
In each trial, participants were presented, phrase by phrase, with a question (e.g., Who has a card with // fewer than three // black hearts?, where // separates phrases), followed by a context picture showing a game scenario, followed by an answer to the question (e.g., The // zebra // has such a card). The participants' task was to judge whether the answer to the question was correct. The experiment used a 2×2×2 design with the factors QUANTIFIER (fewer than three vs. more than three), CONTEXT (with zero model vs. control), and REFERENCE (verifier vs. falsifier), resulting in eight conditions. The context always presented two animals, each holding one card. The cards functioned as models against which the quantified sentence could be verified or falsified (Figure 1). The answer always referred to one of the animals. In total, there were 288 target trials and 144 fillers. In addition to EEG, we measured responses and reaction times, and tracked eye movements during context presentation. The EEG was epoched from the onset of the critical word (the animal name) until 1000 ms after it, with a 200 ms baseline. To test the effect of zero models on the verification of quantifiers, we ran a repeated-measures four-way ANOVA with the factors QUANTIFIER, CONTEXT, REFERENCE, and RoI (anterior vs. posterior) in three time windows: 250-400 ms, 400-550 ms, and 550-700 ms after the onset of the critical word.

Results

Accuracy on target items ranged from 91% (fewer than three, zero-model context, verifier) to 96% (fewer than three, control context, falsifier). Figure 2 shows the grand averages in all conditions. The effect of REFERENCE was significant in all time windows.
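For concreteness, the 2×2×2 design described in the Methods can be enumerated as follows (a sketch; the condition labels are ours, not the authors'):

```python
# Enumerating the QUANTIFIER x CONTEXT x REFERENCE factorial design.
from itertools import product

QUANTIFIER = ["fewer than three", "more than three"]
CONTEXT = ["zero-model", "control"]
REFERENCE = ["verifier", "falsifier"]

conditions = list(product(QUANTIFIER, CONTEXT, REFERENCE))
assert len(conditions) == 8
# 288 target trials spread over 8 conditions gives 36 trials per condition.
assert 288 // len(conditions) == 36
```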
In the earliest time window, ERPs were more positive when the reference was made to falsifier models than to verifier models (F(1,29)=65.22, p<0.001, η²=0.15; Δ=2.63), which could be interpreted either as an increased (early) N400 for falsifier models or as a P300 peak for verifier models. In the second and third time windows, the effect reversed (F(1,29)=24.10, p<0.001, η²=0.05 and F(1,29)=41.39, p<0.001, η²=0.09, respectively), which may be interpreted in terms of truth-related reanalysis/reprocessing. The effect of QUANTIFIER was significant only in the second time window, in interaction with RoI (F(1,29)=4.31, p=0.047, η²=0.0007). Follow-up analyses per RoI showed that the effect of QUANTIFIER was significant in the posterior RoI (F(1,29)=5.27, p=0.029, η²=0.006), but not in the anterior RoI. While no main effect of CONTEXT was observed, in the third time window we found a significant REFERENCE × CONTEXT interaction in the posterior RoI (F(1,29)=4.7, p=0.04, η²=0.01): the difference between falsifier and verifier references was larger in control contexts (Δfalsifier−verifier=2.6) than in contexts with a zero model (Δfalsifier−verifier=1.52); significant interaction contrast: t(29)=2.17, p=0.04.

Discussion

In our experiment, participants verified fewer than three against the zero model more accurately than in previous studies (Bott et al., 2019; Balbach et al., 2022). Moreover, contrary to our predictions, the ERP analysis did not reveal an interaction between QUANTIFIER and CONTEXT in any of the time windows. We found that the CONTEXT with the zero model affected ERPs for both quantifiers (in the last time window). The analysis revealed a QUANTIFIER effect on ERPs, but not the QUANTIFIER × REFERENCE interaction observed previously (Augurzky et al., 2020). We will discuss these results in light of the literature on the processing of negation.