Null models for comparing information decomposition across complex systems
Alberto Liardi, Fernando E Rosas, Robin L Carhart-Harris, George Blackburne, Daniel Bor and Pedro A M Mediano
PLOS Computational Biology, 2025, vol. 21, issue 11, 1-25
Abstract: A key feature of information theory is its universality: it can be applied to study a broad variety of complex systems. However, many information-theoretic measures can vary substantially even across systems with similar properties, making normalisation techniques essential for meaningful comparisons across datasets. Inspired by the framework of Partial Information Decomposition (PID), here we introduce Null Models for Information Theory (NuMIT), a null model-based non-linear normalisation procedure that improves upon standard entropy-based normalisation approaches and overcomes their limitations. We provide practical implementations of the technique for systems with different statistics, and showcase the method on synthetic models and on human neuroimaging data. Our results demonstrate that NuMIT provides a robust and reliable tool to characterise complex systems of interest, allowing cross-dataset comparisons and providing a meaningful significance test for PID analyses.

Author summary: How do complex systems process information? Perhaps more interestingly, when can we say that two systems process information in the same way? Information-theoretic methods are promising techniques for probing the informational architecture of complex systems. Among these, information decomposition frameworks split the information shared between various components into more elemental quantities, allowing a more intuitive understanding of a system's properties. In neuroscience, these measures are often used to gauge differences between conscious states across health and disease. However, comparing these quantities across datasets is non-trivial, and the simple normalisation techniques commonly employed have not been formally validated. In this work, we argue that such methods can introduce bias and lead to erroneous conclusions, especially when the datasets under examination differ substantially. Our study sheds light on the origins of this issue, as well as its consequences and shortcomings. Moreover, it offers a rigorous procedure for standardising these quantities, enabling more robust cross-dataset comparisons.
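The abstract does not detail the NuMIT procedure itself, but the general idea of null-model-based normalisation can be illustrated with a minimal sketch: compute an information measure on the data, rebuild it many times under a null model that destroys the dependence of interest (here, shuffle surrogates), and report where the observed value falls within that null distribution. The function names, the choice of mutual information as the measure, and the shuffling null are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in mutual information (in bits) between two discrete 1-D arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def null_normalise(x, y, n_null=500, rng=None):
    """Rank the observed measure within a null distribution obtained by
    shuffling y, which destroys the x-y dependence but keeps the marginals.
    Returns a percentile in [0, 1]; values near 1 indicate the observed
    dependence far exceeds what the null model produces."""
    rng = np.random.default_rng(rng)
    observed = mutual_info(x, y)
    null = np.array([mutual_info(x, rng.permutation(y)) for _ in range(n_null)])
    return float(np.mean(null <= observed))
```

Unlike dividing by an entropy term, this maps the raw measure onto a scale defined by the null ensemble, so values from datasets with different statistics become comparable and the percentile doubles as a significance level.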
Date: 2025
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1013629 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 13629&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1013629
DOI: 10.1371/journal.pcbi.1013629