EconPapers    

A Two-Level Parallel Incremental Tensor Tucker Decomposition Method with Multi-Mode Growth (TPITTD-MG)

Yajian Zhou (), Zongqian Yue and Zhe Chen
Additional contact information
Yajian Zhou: School of Cyberspace Security, Beijing University of Posts and Telecommunications, 1 Nanfeng Rd., Changping District, Beijing 102206, China
Zongqian Yue: School of Cyberspace Security, Beijing University of Posts and Telecommunications, 1 Nanfeng Rd., Changping District, Beijing 102206, China
Zhe Chen: School of Cyberspace Security, Beijing University of Posts and Telecommunications, 1 Nanfeng Rd., Changping District, Beijing 102206, China

Mathematics, 2025, vol. 13, issue 7, 1-28

Abstract: With the rapid growth of streaming data, traditional tensor decomposition methods can hardly cope with massive, high-dimensional data that must be processed in real time. This paper proposes a two-level parallel incremental tensor Tucker decomposition method with multi-mode growth (TPITTD-MG) to address the low parallelism of existing Tucker decomposition methods on large-scale, high-dimensional, dynamically growing data. TPITTD-MG comprises two mechanisms: a parallel sub-tensor partitioning mechanism based on dynamic programming (PSTPA-DP) and a two-level parallel update method for projection matrices and core tensors. The former counts non-zero elements in parallel and partitions sub-tensors using dynamic programming, ensuring more uniform task allocation. The latter performs a first level of parallel updates based on a parallel MTTKRP calculation strategy, followed by a second level in which different projection matrices or core tensors are updated independently according to the classification of sub-tensors. Experimental results show that, at a data scale of tens of millions of elements with a parallelism degree of 4, execution efficiency improves by nearly 400% and the uniformity of partition results by more than 20% compared with existing algorithms; for third-order tensors, execution efficiency improves by nearly 300% compared with a single-level update algorithm.
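The dynamic-programming partitioning step described in the abstract can be illustrated with a minimal sketch. The function name, the input representation (an array of per-slice non-zero counts), and the exact objective (splitting slices into contiguous blocks that minimise the largest block's non-zero count, so that parallel workers receive roughly equal work) are assumptions made for illustration; they are not the paper's actual PSTPA-DP implementation.

```python
# Hedged sketch of DP-based balanced sub-tensor partitioning:
# given non-zero counts per tensor slice, split the slices into p
# contiguous blocks minimising the maximum block sum. This is a
# classic balanced-partition DP, assumed here as a stand-in for
# the paper's PSTPA-DP mechanism.

def balanced_partition(nnz_per_slice, p):
    """Return (cost, blocks): the minimal achievable max block sum and
    the block boundaries [(start, end), ...] realising it."""
    n = len(nnz_per_slice)
    prefix = [0] * (n + 1)
    for i, c in enumerate(nnz_per_slice):
        prefix[i + 1] = prefix[i] + c

    INF = float("inf")
    # dp[k][i]: best achievable max block sum when splitting the
    # first i slices into k contiguous blocks.
    dp = [[INF] * (n + 1) for _ in range(p + 1)]
    cut = [[0] * (n + 1) for _ in range(p + 1)]
    dp[0][0] = 0
    for k in range(1, p + 1):
        for i in range(k, n + 1):
            for j in range(k - 1, i):
                cand = max(dp[k - 1][j], prefix[i] - prefix[j])
                if cand < dp[k][i]:
                    dp[k][i], cut[k][i] = cand, j

    # Recover the block boundaries from the recorded cut points.
    blocks, i = [], n
    for k in range(p, 0, -1):
        blocks.append((cut[k][i], i))
        i = cut[k][i]
    return dp[p][n], blocks[::-1]
```

With counts [9, 1, 1, 9, 1, 1, 9] and 4 workers reduced to 3 blocks, a naive equal-length split would put two heavy slices in one block, while the DP keeps the heaviest block at 11 non-zeros; this is the "more uniform task allocation" property the abstract attributes to PSTPA-DP.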

Keywords: tensor; Tucker decomposition; parallel computing; projection matrix; core tensor
JEL-codes: C
Date: 2025

Downloads: (external link)
https://www.mdpi.com/2227-7390/13/7/1211/pdf (application/pdf)
https://www.mdpi.com/2227-7390/13/7/1211/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:13:y:2025:i:7:p:1211-:d:1629719


Mathematics is currently edited by Ms. Emma He


 
Page updated 2025-04-08
Handle: RePEc:gam:jmathe:v:13:y:2025:i:7:p:1211-:d:1629719