Filling the Gap Between Shannon Information and Semantic Information

Authors

Daniel Piecka

Affiliation: Institute of Philosophy and Sociology, Polish Academy of Sciences

Category: Philosophy

Schedule & Location

Date: Thursday 4th of September

Time: 16:00

Location: Room 232

View the full session: Content Determination

Abstract

There is a deeply entrenched gap in the foundations of the project of naturalizing semantics: a gap between so-called semantic information and Shannon information that has brought more riddles than answers since the birth of the project. The dichotomy has been widely influential in cognitive science since the first applications of information theory to neuroscience (MacKay and McCulloch, 1952) and in epistemology since Dretske (1981). Recently, in the brain and mind sciences, Godfrey-Smith and Sterelny (2016) as well as Piccinini and Scarantino (2011) recognize two separate kinds of information: Shannon information and semantic information. Rathkopf (2020) calls this state of affairs a bifurcation: "At a high level of neural organization, brain information is semantic, but down at the level of single neurons, semantic properties are irrelevant, and the only information to speak of is Shannon information."

The received view states that while Shannon information theory (Shannon, 1948) provides the mathematical tools for quantifying (any kind of) data transmission or storage, semantic information theory deals with the meanings, relevance, and utility or function of that information in specific contexts, for a wide range of agents, from basal organisms to social formations.
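To make the contrast above concrete (this illustration is not part of the abstract): Shannon's measure quantifies how much data a source produces, regardless of what the messages mean. A minimal sketch of the entropy formula H(X) = −Σ p(x) log₂ p(x), using only the Python standard library:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)) in bits, skipping zero-probability outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin yields 1 bit per toss; a biased coin yields less,
# no matter what the outcomes "mean" to any agent.
fair = shannon_entropy([0.5, 0.5])    # 1.0 bit
biased = shannon_entropy([0.9, 0.1])  # about 0.469 bits
```

The point of the received view is precisely that this quantity is blind to meaning: the same two numbers come out whether the outcomes encode danger signals for an organism or arbitrary symbols.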

Many believe the gap to be fundamentally unbridgeable by means of information theory. For instance, Bickhard sees information theory as another "encodingism" that leads to the homunculus problem (Mirski and Bickhard, 2019), Myin (2016) criticizes the "detector" theories of representation, and Dennett (2017) points out the lack of Shannon channels in plain semantic-information-flow setups, as in his Trafalgar Square example. On the other hand, Rathkopf (2020) suggests that the bifurcation is less divisive than we might think, since Shannon information is strongly related to semantic information in practice: measuring rates of information transmission assumes some semantic information function of the neural circuits under investigation. Martínez (2019) uses rate-distortion theory (a part of Shannon information theory) for a "sweet spots" semantics, offering an adaptive argument: representations must obey efficient-coding rules within the limits of the agent.

Similarly to these philosophers, I think a pessimistic diagnosis is premature. Specifically, I argue that mechanisms originally selected for reliable transmission can be co-opted for model-building tasks, such as classification and prediction, which are central to representational systems. Genuine representations, I argue, can be produced as a result of exaptation (Gould and Vrba, 1982) rather than the selection of the Shannon model as the robust task function of biological cognitive systems (Shea, 2018). Finally, I argue that information theory might explain not only the reliable transfer of information but also the structured and compositional representations in cognitive systems. I show an example of a computationally feasible and biologically plausible mechanism that provides a basis for classification tasks akin to those performed by linear regression models and perceptrons, the fundamental classifier models for robust and compositional information processing in agents.
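For readers unfamiliar with the classifier models named above: the abstract does not specify its mechanism, but the perceptron it invokes as a benchmark is easy to state. A minimal sketch of the classic perceptron learning rule (all names and the toy data here are illustrative, not from the talk):

```python
def perceptron_train(data, epochs=20, lr=1.0):
    """Classic perceptron rule on (features, label) pairs, labels in {-1, +1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            # Linear threshold unit: sign(w . x + b).
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge weights toward y
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Linearly separable toy task (logical AND on two features).
data = [([0.0, 0.0], -1), ([1.0, 0.0], -1), ([0.0, 1.0], -1), ([1.0, 1.0], 1)]
w, b = perceptron_train(data)
```

The relevance to the argument is that such a device is a simple, biologically plausible linear classifier: any information-theoretic mechanism claimed to ground classification can be measured against this kind of baseline.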

References

Shannon, C. E. (1948). A Mathematical Theory of Communication [https://ieeexplore.ieee.org/document/6773024]. Bell System Technical Journal, 27 (3), 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x

MacKay, D. M., & McCulloch, W. S. (1952). The limiting information capacity of a neuronal link [http://link.springer.com/10.1007/BF02477711]. The Bulletin of Mathematical Biophysics, 14 (2), 127–135. https://doi.org/10.1007/BF02477711

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: The MIT Press.

Gould, S. J., & Vrba, E. S. (1982). Exaptation—a Missing Term in the Science of Form [https://www.cambridge.org/core/product/identifier/S0094837300004310/type/journal_article]. Paleobiology, 8 (1), 4–15. https://doi.org/10.1017/S0094837300004310

Piccinini, G., & Scarantino, A. (2011). Information processing, computation, and cognition. Journal of Biological Physics, 37 (1), 1–38. https://doi.org/10.1007/s10867-010-9195-3

Godfrey-Smith, P., & Sterelny, K. (2016). Biological information [http://plato.stanford.edu/archives/sum2016/entries/information-biological/]. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (summer 2016 edition) (pp. 1–37). The Metaphysics Research Lab.

Myin, E. (2016). Cognitive science without representations [https://doi.org/10.1080/09515089.2016.1175450]. Philosophical Psychology, 29 (4), 504–518. https://doi.org/10.1080/09515089.2016.1175450

Dennett, D. (2017). From Bacteria to Bach and Back: The Evolution of Minds. W.W. Norton & Company.

Shea, N. (2018). Representation in Cognitive Science. Oxford University Press.

Martínez, M. (2019). Representations Are Rate-Distortion Sweet Spots [https://www.cambridge.org/core/product/identifier/S0031824800015592/type/journal_article]. Philosophy of Science, 86 (5), 1214–1226. https://doi.org/10.1086/705493

Rathkopf, C. (2020). What Kind of Information is Brain Information? [http://link.springer.com/10.1007/s11245-017-9512-6]. Topoi, 39 (1), 95–102. https://doi.org/10.1007/s11245-017-9512-6

Mirski, R., & Bickhard, M. H. (2019). Encodingism is not just a bad metaphor [https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/encodingism-is-not-just-a-bad-metaphor/C81AF06381724E0477E92B3F68B2F]. Behavioral and Brain Sciences.