Personal Probabilism and Fragmentation as a Middle Ground Between Logical Omniscience and Ignorance

Authors

Ethan Lai

Affiliation: University of St Andrews

Category: Philosophy

Keywords: Personal probabilism, Logical omniscience, Topic-sensitive fragmentation, Possible worlds, Epistemic rationality, Logical ignorance

Schedule & Location

Date: Thursday 4th of September

Time: 15:00

Location: Room 232

View the full session: Content Determination

Abstract

  1. Introduction

Probabilism assumes that rational agents are logically omniscient, meaning they believe all logical truths and infer all logical consequences. However, this assumption contradicts real-world rationality, as actual agents often exhibit logical ignorance. Personal probabilism, developed by Hacking (1967) and Pettigrew (2020), offers an alternative: instead of considering all logically possible worlds, it focuses on an agent’s personally possible worlds. This allows for some logical ignorance while preserving rational coherence.

However, personal probabilism faces a challenge: it must balance between avoiding logical omniscience and preventing extreme logical ignorance. If too weak, it allows omniscience to return; if too strong, it makes extreme ignorance rationally permissible. To resolve this, I propose that fragmentation, developed by Lewis (1982) and Stalnaker (1984), provides the necessary structure. Fragmentation permits multiple sets of personally possible worlds, preventing extreme ignorance while ensuring logical competence. This paper argues that topic-sensitive fragmentation (Chu, MS) refines this approach, providing a robust account of rationality within personal probabilism.

  2. Probabilism and the Problem of Logical Omniscience

Probabilism holds that beliefs come in degrees that conform to probability theory:

K1: Probabilities are non-negative.

K2: The probability of a logical truth is 1.

K3: The probability of a disjunction of mutually exclusive propositions is the sum of the probabilities of its disjuncts.
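The three constraints above can be stated formally; this is a standard Kolmogorov-style rendering (the notation is mine, not drawn from the talk itself):

```latex
\begin{align}
&\text{(K1)}\quad P(A) \ge 0 \quad \text{for every proposition } A,\\
&\text{(K2)}\quad P(\top) = 1 \quad \text{for every logical truth } \top,\\
&\text{(K3)}\quad P(A \vee B) = P(A) + P(B) \quad \text{whenever } A \text{ and } B \text{ are mutually exclusive}.
\end{align}
```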

However, K2 leads to logical omniscience, requiring agents to believe all logical truths and their consequences. Empirical evidence contradicts this; even professional mathematicians are uncertain about complex theorems. Personal probabilism, as defended by Pettigrew (2020), aims to model logical ignorance without violating probabilistic coherence by replacing the set of logically possible worlds with the agent's personally possible worlds. However, this solution leads to new challenges.

  3. Personal Probabilism: Trade-offs Between Omniscience and Ignorance

Personal probabilism defines a world as personally possible if an agent has not ruled it out through empirical reasoning or logical inference. Learning logical facts occurs through:

Direct learning (e.g., proving a theorem and accepting it with full credence).

Relational learning (e.g., realising a probability rule applies to specific cases).

A dilemma arises:

If all personally possible worlds are included, agents must know exactly which logical truths they have yet to learn (revenge problem).

If knowledge depends on direct awareness, belief and knowledge attribution become overly restrictive (one only knows what one explicitly considers), and extreme logical ignorance becomes rationally permissible.

The Linda problem (Tversky & Kahneman, 1983) illustrates this risk: even statistically trained participants judged ‘Linda is a feminist bank teller’ to be more probable than ‘Linda is a bank teller’, violating the conjunction rule of probability theory. Under personal probabilism, their mistake might be rationally excusable if they were unaware of probability rules at the time, which is counterintuitive.
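The conjunction rule at issue here can be made vivid with a toy possible-worlds model. The world weights below are purely illustrative (nothing in the talk specifies them); the point is structural: whatever weights are chosen, the worlds verifying the conjunction are a subset of the worlds verifying either conjunct, so the conjunction can never be more probable.

```python
# Toy possible-worlds model of the conjunction rule behind the Linda problem.
# Each world records whether Linda is a bank teller and/or a feminist,
# with a hypothetical probability weight (the weights are illustrative only).
worlds = {
    "w1": {"p": 0.05, "bank_teller": True,  "feminist": True},
    "w2": {"p": 0.10, "bank_teller": True,  "feminist": False},
    "w3": {"p": 0.60, "bank_teller": False, "feminist": True},
    "w4": {"p": 0.25, "bank_teller": False, "feminist": False},
}

def prob(pred):
    """Probability of a proposition = total weight of the worlds where it holds."""
    return sum(w["p"] for w in worlds.values() if pred(w))

p_teller = prob(lambda w: w["bank_teller"])
p_feminist_teller = prob(lambda w: w["bank_teller"] and w["feminist"])

# The conjunction's worlds are a subset of the conjunct's worlds,
# so its probability cannot exceed that of 'Linda is a bank teller'.
assert p_feminist_teller <= p_teller
```

Judging the conjunction more probable, as the Linda participants did, amounts to assigning weights that no single probability function over worlds can deliver.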

  4. The Possible Worlds Approach and Coarse-Graining Problems

Standard possible worlds semantics (Lewis, 1996; Stalnaker, 1984) models knowledge and belief in terms of ruled-out possibilities. However, it struggles with necessary truths, implying that rational agents automatically know all logical truths. Attempts to refine this include:

Epistemic scenarios (Chalmers, 2011) to capture what is epistemically, rather than metaphysically, possible.

Impossible worlds (Hintikka, 1975; Rantala, 1982) where contradictions can hold, avoiding omniscience.

However, these solutions fail:

Epistemic scenarios do not solve the issue for a priori necessary truths.

Impossible worlds introduce issues in restricting logical structure, making rational updates unclear.

  5. The Lewis-Stalnaker Fragmentation Response

Fragmentation models belief as stored across multiple fragments, with different fragments activated in different contexts. Lewis’ example:

Nassau Street runs east-west.

The railroad runs north-south.

Nassau Street and the railroad are parallel.

Each pair of beliefs is held within a different fragment, so no single fragment contains the inconsistent triad. Fragmentation also explains why agents fail to draw obvious conclusions (e.g., forgetting that the post office is closed on public holidays despite knowing the relevant premises).

Applying this to personal probabilism:

Instead of a single set of personally possible worlds, multiple sets correspond to different epistemic fragments.

This prevents knowledge collapse while allowing for rational updates.

  6. Fragmentation and Personal Probabilism: Solving the Attribution and Ignorance Problems

Personal probabilism assumes:

Explicit access to a belief is required for rationality.

Personally possible worlds are globally unified rather than fragmented.

This leads to two key problems:

Attribution problem: If belief requires explicit access, knowledge attribution becomes too restrictive (e.g., one does not ‘know’ basic arithmetic unless actively thinking about it).

Ignorance problem: If rationality depends solely on awareness, extreme logical ignorance is permissible (e.g., failing to apply probability rules in the Linda case is excusable).

Fragmentation resolves this by allowing:

Belief and knowledge to be stored across multiple fragments, even if not explicitly considered.

Multiple personally possible worlds, preventing agents from needing global logical closure.

Retrieval mechanisms that explain reasoning errors without excusing extreme ignorance.

  7. Topic-Sensitive Fragmentation: A Middle Ground Between Omniscience and Ignorance

To refine fragmentation, topic-sensitive fragmentation (Chu, MS) suggests that fragments are structured by subject matter. Rational agents:

Form belief fragments based on topic relevance.

Retrieve relevant information when evaluating related propositions.

Avoid logical omniscience by restricting fragments to specific contexts.

For instance, in the Linda problem, participants activated a stereotype-based fragment rather than a probability-based one. This explains their mistake without excusing extreme ignorance: rational agents should retrieve all relevant fragments when evaluating probability.
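The retrieval story just described can be sketched as a toy model. This is my own illustrative rendering, not Chu's formalism: fragments are keyed by topic and store sets of personally possible worlds (here, (bank_teller, feminist) pairs the agent has not ruled out), and evaluating a proposition rationally requires pooling every topic-relevant fragment.

```python
# Toy model of topic-sensitive fragment retrieval (illustrative only).
# Worlds are (bank_teller, feminist) pairs the agent has not ruled out.
fragments = {
    # The stereotype fragment has already ruled out non-feminist worlds.
    "stereotype":  {(True, True), (False, True)},
    # The probability fragment keeps every combination open.
    "probability": {(True, True), (True, False),
                    (False, True), (False, False)},
}

def entertained_worlds(topics):
    """Pool the personally possible worlds of all retrieved fragments."""
    pooled = set()
    for topic in topics:
        pooled |= fragments[topic]
    return pooled

# Retrieving only the stereotype fragment leaves no world in which Linda
# is a bank teller but not a feminist, inviting the conjunction fallacy.
narrow = entertained_worlds(["stereotype"])
# Retrieving every topic-relevant fragment restores those worlds.
broad = entertained_worlds(["stereotype", "probability"])
assert (True, False) not in narrow and (True, False) in broad
```

On this sketch, the Linda participants' error is a retrieval failure (activating `narrow` rather than `broad`), which is criticisable by the normative standard of retrieving all topic-relevant fragments without collapsing into global logical closure.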

Topic-sensitive fragmentation offers a normative standard: rationality requires retrieving topic-relevant information. This aligns with dual-process theories in psychology (Kahneman, 2011), where intuitive reasoning is prone to biases while analytic reasoning corrects them.

  8. Conclusion

Personal probabilism faces a crucial trade-off: preventing logical omniscience without permitting extreme logical ignorance. The fragmentation response provides a structured solution by introducing multiple sets of personally possible worlds, allowing for rational belief attribution and learning. Topic-sensitive fragmentation refines this further, setting rationality standards based on subject matter relevance. This model explains how rational agents manage information, avoid omniscience, and remain logically competent without requiring global coherence.

By adopting topic-sensitive fragmentation, we achieve a middle ground between omniscience and ignorance, making personal probabilism a more viable epistemic framework. Future research should explore computational models of fragment retrieval and cognitive strategies to enhance rational reasoning.