Elastic Information Bottleneck
Yuyan Ni,
Yanyan Lan,
Ao Liu and
Zhiming Ma
Additional contact information
Yuyan Ni: Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
Yanyan Lan: Institute for AI Industry Research, Tsinghua University, Beijing 100084, China
Ao Liu: School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100049, China
Zhiming Ma: Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
Mathematics, 2022, vol. 10, issue 18, 1-26
Abstract:
The information bottleneck is an information-theoretic principle of representation learning that aims to learn a maximally compressed representation preserving as much information about the labels as possible. Under this principle, two different methods have been proposed, i.e., the information bottleneck (IB) and the deterministic information bottleneck (DIB), and they have made significant progress in explaining the representation mechanisms of deep learning algorithms. However, these theoretical and empirical successes are valid only under the assumption that training and test data are drawn from the same distribution, which is clearly not satisfied in many real-world applications. In this paper, we study their generalization abilities in a transfer learning scenario, where the target error can be decomposed into three components, i.e., the source empirical error, the source generalization gap (SG), and the representation discrepancy (RD). Comparing IB and DIB on these terms, we prove that DIB’s SG bound is tighter than IB’s while DIB’s RD is larger than IB’s, so it is difficult to tell which one is better. To balance the trade-off between SG and RD, we propose an elastic information bottleneck (EIB) that interpolates between the IB and DIB regularizers and guarantees a Pareto frontier within the IB framework. Simulations and real-data experiments further show that EIB achieves better domain adaptation results than IB and DIB, which validates the correctness of our theories.
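To fix notation, the following is a minimal sketch of the objectives involved, written in the LaTeX notation standard for this literature; the symbols $\alpha$, $\beta$ and the exact form of the EIB Lagrangian are assumptions inferred from the abstract, not quotations from the paper.
% Sketch only: this interpolation form is assumed from the abstract, not quoted from the paper.
\[
\mathcal{L}_{\mathrm{IB}} \;=\; I(X;T) - \beta\, I(T;Y) \;=\; H(T) - H(T\mid X) - \beta\, I(T;Y),
\]
\[
\mathcal{L}_{\mathrm{DIB}} \;=\; H(T) - \beta\, I(T;Y),
\]
\[
\mathcal{L}_{\mathrm{EIB}} \;=\; H(T) - \alpha\, H(T\mid X) - \beta\, I(T;Y), \qquad \alpha \in [0,1],
\]
so that $\alpha = 1$ recovers IB (using $I(X;T) = H(T) - H(T\mid X)$) and $\alpha = 0$ recovers DIB. Sweeping $\alpha$ then trades DIB’s tighter source generalization gap against IB’s smaller representation discrepancy, which is the Pareto frontier referred to in the abstract.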
Keywords: information bottleneck; transfer learning; generalization bound
JEL-codes: C
Date: 2022
Downloads:
https://www.mdpi.com/2227-7390/10/18/3352/pdf (application/pdf)
https://www.mdpi.com/2227-7390/10/18/3352/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:10:y:2022:i:18:p:3352-:d:915793
Mathematics is currently edited by Ms. Emma He