Fast Deep Belief Propagation: An Efficient Learning-Based Algorithm for Solving Constraint Optimization Problems
Shufeng Kong,
Feifan Chen,
Zijie Wang and
Caihua Liu
Additional contact information
Shufeng Kong: School of Software Engineering, Sun Yat-sen University, Zhuhai 519000, China
Feifan Chen: School of Software Engineering, Sun Yat-sen University, Zhuhai 519000, China
Zijie Wang: School of Software Engineering, Sun Yat-sen University, Zhuhai 519000, China
Caihua Liu: Department of Computer Science, Cornell University, Ithaca, NY 14850, USA
Mathematics, 2025, vol. 13, issue 20, 1-18
Abstract:
Belief Propagation (BP) is a fundamental heuristic for solving Constraint Optimization Problems (COPs), yet its practical applicability is constrained by slow convergence and instability in loopy factor graphs. While Damped BP (DBP) improves convergence by using manually tuned damping factors, its reliance on labor-intensive hyperparameter optimization limits scalability. Deep Attentive BP (DABP) addresses this by automating damping through recurrent neural networks (RNNs), but introduces significant memory overhead and sequential computation bottlenecks. To reduce memory usage and accelerate deep belief propagation, this paper introduces Fast Deep Belief Propagation (FDBP), a deep learning framework that improves COP solving through online self-supervised learning and graphics processing unit (GPU) acceleration. FDBP decouples the learning of damping factors from BP message passing, inferring all parameters for an entire BP iteration in a single step, and leverages mixed precision to further optimize GPU memory usage. This approach substantially improves both the efficiency and scalability of BP optimization. Extensive evaluations on synthetic and real-world benchmarks highlight the superiority of FDBP, especially on large-scale instances where DABP fails due to memory constraints. Moreover, FDBP achieves an average speedup of 2.87× over DABP with the same restart counts. Because BP for COPs is a mathematically grounded, GPU-parallel message-passing framework that bridges applied mathematics, computing, and machine learning, with broad applications across science and engineering, FDBP offers a promising step toward solving these problems more efficiently.
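To make the damped update at the heart of DBP (and of FDBP's learned damping) concrete, the sketch below implements min-sum belief propagation with a fixed scalar damping factor on a tiny pairwise cost graph. Each new message is a convex mix `λ·m_old + (1−λ)·m_new` of the previous message and the freshly computed one, which is the stabilization mechanism the abstract describes; FDBP replaces the hand-tuned λ with learned per-iteration parameters. This toy problem, the function names, and the fixed λ are illustrative assumptions, not code from the paper.

```python
import numpy as np

def damped_min_sum(unary, pairwise, edges, damping=0.5, iters=50):
    """Min-sum BP with a fixed damping factor on a pairwise cost graph.

    unary    : dict var -> 1-D cost array over that variable's domain
    pairwise : dict (u, v) -> 2-D cost array (rows index u, cols index v)
    edges    : list of (u, v) variable pairs
    """
    # Directed messages m[(s, t)][x_t]: cost information sent from s to t.
    msgs = {}
    for u, v in edges:
        msgs[(u, v)] = np.zeros(len(unary[v]))
        msgs[(v, u)] = np.zeros(len(unary[u]))

    for _ in range(iters):
        new = {}
        for (s, t), old in msgs.items():
            # Local belief at s: unary cost plus incoming messages, excluding t's.
            belief = unary[s].copy()
            for (a, b) in msgs:
                if b == s and a != t:
                    belief += msgs[(a, b)]
            cost = pairwise[(s, t)] if (s, t) in pairwise else pairwise[(t, s)].T
            raw = (belief[:, None] + cost).min(axis=0)
            raw -= raw.min()                      # normalize for numerical stability
            # Damped update: convex mix of the old and freshly computed message.
            new[(s, t)] = damping * old + (1 - damping) * raw
        msgs = new

    # Decode: each variable picks the value minimizing its final belief.
    assign = {}
    for v, cost_v in unary.items():
        belief = cost_v.copy()
        for (a, b), m in msgs.items():
            if b == v:
                belief += m
        assign[v] = int(belief.argmin())
    return assign
```

On a two-variable instance with unary costs `{0: [0, 1.5], 1: [1, 0]}` and a pairwise cost table that penalizes unequal values, the damped updates converge to the brute-force optimum `{0: 0, 1: 0}`; on loopy graphs larger damping trades convergence speed for stability, which is exactly the trade-off FDBP automates.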
Keywords: constraint optimization problems; learning to optimize; belief propagation; machine learning; artificial intelligence; optimization algorithms
JEL-codes: C
Date: 2025
Downloads:
https://www.mdpi.com/2227-7390/13/20/3349/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/20/3349/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:20:p:3349-:d:1776214
Mathematics is currently edited by Ms. Emma He