EconPapers    
A New Accelerated Forward–Backward Splitting Algorithm for Monotone Inclusions with Application to Data Classification

Puntita Sae-jia, Eakkpop Panyahan and Suthep Suantai
Additional contact information
Puntita Sae-jia: PhD Degree Program in Mathematics, Department of Mathematics, Faculty of Science, Chiang Mai University, Under the CMU Presidential Scholarship, Chiang Mai 50200, Thailand
Eakkpop Panyahan: Department of Statistics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
Suthep Suantai: Research Center in Optimization and Computational Intelligence for Big Data Prediction, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand

Mathematics, 2025, vol. 13, issue 17, 1-24

Abstract: This paper proposes a new accelerated fixed-point algorithm based on a double-inertial extrapolation technique for solving structured variational inclusion and convex bilevel optimization problems. The underlying framework leverages fixed-point theory and operator splitting methods to address inclusion problems of the form 0 ∈ ( A + B ) ( x ) , where A is a cocoercive operator and B is a maximally monotone operator defined on a real Hilbert space. The algorithm incorporates two inertial terms and a relaxation step via a contractive mapping, resulting in improved convergence properties and numerical stability. Under mild conditions on the step sizes and inertial parameters, we establish strong convergence of the proposed algorithm to a point in the solution set that satisfies a variational inequality with respect to a contractive mapping. Beyond theoretical development, we demonstrate the practical effectiveness of the proposed algorithm by applying it to data classification tasks using Deep Extreme Learning Machines (DELMs). In particular, the training process of Two-Hidden-Layer ELM (TELM) models is reformulated as a convex regularized optimization problem, enabling robust learning without requiring direct matrix inversions. Experimental results on benchmark and real-world medical datasets, including breast cancer and hypertension prediction, confirm the superior performance of our approach in terms of evaluation metrics and convergence. This work unifies and extends existing inertial-type forward–backward schemes, offering a versatile and theoretically grounded optimization tool for both fundamental research and practical applications in machine learning and data science.
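The iteration family the abstract describes (forward step with the cocoercive operator A, backward step via the resolvent of B, two inertial extrapolation terms, and a relaxation step through a contractive mapping) can be sketched as follows. This is a minimal illustrative sketch only: the inertial schedules `theta`, `delta`, the relaxation weight `alpha`, the contraction, and the lasso-type demo problem are assumptions for exposition, not the parameter choices or experiments of the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent of B = ∂(t * ||.||_1), i.e. the backward (proximal) step.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def double_inertial_fb(grad_f, prox, contraction, x0, step, n_iter=500):
    # Hypothetical sketch of a double-inertial forward-backward iteration:
    # extrapolate with two inertial terms, take one forward-backward step,
    # then relax toward a contractive mapping with a vanishing weight.
    x_prev2 = x_prev1 = x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        theta = 0.2            # first inertial parameter (assumed constant)
        delta = 0.05           # second inertial parameter (assumed constant)
        alpha = 1.0 / (n + 1)  # vanishing relaxation weight (assumed schedule)
        # double-inertial extrapolation
        y = x + theta * (x - x_prev1) + delta * (x_prev1 - x_prev2)
        # forward (gradient of the cocoercive part) then backward (resolvent)
        z = prox(y - step * grad_f(y), step)
        # relaxation via the contractive mapping
        x_new = alpha * contraction(z) + (1.0 - alpha) * z
        x_prev2, x_prev1, x = x_prev1, x, x_new
    return x

# Demo inclusion: A = ∇(1/2)||Mx - b||^2 (cocoercive), B = ∂(lam * ||.||_1).
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
lam = 0.1
grad_f = lambda v: M.T @ (M @ v - b)
prox = lambda v, t: soft_threshold(v, lam * t)
contraction = lambda v: 0.5 * v                 # any fixed contraction works here
step = 1.0 / np.linalg.norm(M, 2) ** 2          # step within the cocoercivity bound
x_star = double_inertial_fb(grad_f, prox, contraction, np.zeros(10), step)
```

With the vanishing weight `alpha`, the contraction's influence fades and the iterates settle on a point of the solution set of 0 ∈ (A + B)(x); the strong-convergence guarantee under the paper's actual parameter conditions is established in the article itself.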

Keywords: variational inclusion; bilevel optimization; accelerated algorithm; data classification; Two-Hidden-Layer ELM (TELM)
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/17/2783/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/17/2783/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:17:p:2783-:d:1737208


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-10-04
Handle: RePEc:gam:jmathe:v:13:y:2025:i:17:p:2783-:d:1737208