EconPapers    

High-Performance Parallel Support Vector Machine Training

Kristian Woodsend and Jacek Gondzio
Additional contact information
Kristian Woodsend: University of Edinburgh
Jacek Gondzio: University of Edinburgh

A chapter in Parallel Scientific Computing and Optimization, 2009, pp. 83–92, published by Springer

Abstract: Support vector machines are a powerful machine learning technology, but the training process involves a dense quadratic optimization problem and is computationally expensive. We show how the problem can be reformulated to become suitable for high-performance parallel computing. In our algorithm, data is pre-processed in parallel to generate an approximate low-rank Cholesky decomposition. Our optimization solver then exploits the problem’s structure to perform many linear algebra operations in parallel, with relatively low data transfer between processors, resulting in excellent parallel efficiency for very-large-scale problems.
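The low-rank Cholesky pre-processing step the abstract refers to can be illustrated with a greedy pivoted (partial) Cholesky factorization of a kernel matrix. This is a minimal serial sketch for intuition only, not the authors' parallel implementation; the function name and tolerance parameter are illustrative.

```python
import numpy as np

def partial_cholesky(K, rank, tol=1e-10):
    """Greedy pivoted partial Cholesky: approximate K by G @ G.T,
    where G has shape (n, rank).

    K is the dense kernel (Gram) matrix. In a genuinely large-scale
    setting one would evaluate kernel columns on demand rather than
    storing K, and distribute the column updates across processors.
    """
    n = K.shape[0]
    G = np.zeros((n, rank))
    d = np.diag(K).astype(float).copy()      # residual diagonal of K - G G^T
    for j in range(rank):
        p = int(np.argmax(d))                # pivot: largest residual diagonal
        if d[p] <= tol:                      # remaining residual is negligible
            return G[:, :j]
        # New factor column, orthogonalized against the columns found so far
        G[:, j] = (K[:, p] - G @ G[p, :]) / np.sqrt(d[p])
        d -= G[:, j] ** 2                    # update residual diagonal
    return G

# Linear kernel example: K = X X^T has rank at most the feature count,
# so a rank-5 factorization reproduces it essentially exactly.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
K = X @ X.T
G = partial_cholesky(K, rank=5)
err = np.linalg.norm(K - G @ G.T)
```

For nonlinear kernels the decay of the residual diagonal governs how small a rank suffices; the chapter's interior point solver then works with the factor G instead of the full dense kernel matrix.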

Keywords: Support Vector Machine; Interior Point Method; Kernel Matrix; Cholesky Decomposition; Linear Support Vector Machine
Date: 2009

There are no downloads for this item.



Persistent link: https://EconPapers.repec.org/RePEc:spr:spochp:978-0-387-09707-7_7

Ordering information: This item can be ordered from
http://www.springer.com/9780387097077

DOI: 10.1007/978-0-387-09707-7_7


More chapters in Springer Optimization and Its Applications from Springer
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-04-01
Handle: RePEc:spr:spochp:978-0-387-09707-7_7