Gaussian variant of Freivalds’ algorithm for efficient and reliable matrix product verification

Hao Ji, Michael Mascagni and Yaohang Li
Additional contact information
Hao Ji: Department of Computer Science, California State Polytechnic University Pomona, Pomona, CA 91768, USA
Michael Mascagni: Department of Computer Science, Florida State University, Tallahassee, FL 32306-4530; and Applied and Computational Mathematics Division, Information Technology Laboratory, National Institute of Standards & Technology, ITL, Gaithersburg, MD 20899-8910, USA
Yaohang Li: Department of Computer Science, Old Dominion University, Norfolk, VA 23529, USA

Monte Carlo Methods and Applications, 2020, vol. 26, issue 4, 273-284

Abstract: In this article, we consider the general problem of checking the correctness of matrix multiplication. Given three n×n matrices A, B and C, the goal is to verify that A×B=C without carrying out the computationally costly operations of matrix multiplication and comparing the product A×B with C term by term. This is especially important when some or all of these matrices are very large, and when the computing environment is prone to soft errors. Here we extend Freivalds’ algorithm to a Gaussian Variant of Freivalds’ Algorithm (GVFA) by projecting the product A×B as well as C onto a Gaussian random vector and then comparing the resulting vectors. The computational complexity of GVFA is consistent with that of Freivalds’ algorithm, which is O(n²). However, unlike Freivalds’ algorithm, whose probability of a false positive is 2⁻ᵏ, where k is the number of iterations, our theoretical analysis shows that, when A×B≠C, GVFA produces a false positive on a set of inputs of measure zero with exact arithmetic. When we introduce round-off error and floating-point arithmetic into our analysis, we can show that the larger this error, the higher the probability that GVFA avoids false positives. Moreover, by iterating GVFA k times, the probability of a false positive decreases as pᵏ, where p is a very small value depending on the nature of the fault in the result matrix and the arithmetic system’s floating-point precision. Unlike deterministic algorithms, there do not exist any fault patterns that are completely undetectable with GVFA. Thus GVFA can be used to provide efficient fault tolerance in numerical linear algebra, and it can be efficiently implemented on modern computing architectures. In particular, GVFA can be very efficiently implemented on architectures with hardware support for fused multiply-add operations.
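The projection idea described in the abstract can be sketched in a few lines of NumPy: draw a Gaussian random vector x, compute A(Bx) and Cx (each only O(n²) work), and compare the two. This is a minimal illustrative sketch, not the authors' implementation; the function name, the iteration parameter k, and the floating-point tolerance `tol` are assumptions introduced here for demonstration.

```python
import numpy as np

def gvfa(A, B, C, k=1, tol=1e-8, rng=None):
    """Sketch of the Gaussian Variant of Freivalds' Algorithm (GVFA).

    Checks whether A @ B == C by projecting both sides onto a Gaussian
    random vector, costing O(n^2) per iteration instead of the O(n^3)
    needed to recompute the product.  The tolerance handling below is an
    illustrative choice, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    for _ in range(k):
        x = rng.standard_normal(n)   # Gaussian random projection vector
        lhs = A @ (B @ x)            # two matrix-vector products: O(n^2)
        rhs = C @ x                  # one matrix-vector product: O(n^2)
        if not np.allclose(lhs, rhs, atol=tol * np.linalg.norm(x)):
            return False             # mismatch found: A @ B != C
    return True                      # no fault detected in k trials
```

Because x has a continuous Gaussian distribution, a faulty C passes a single projection only on a measure-zero set of vectors in exact arithmetic, which is the property the abstract contrasts with the 2⁻ᵏ bound of classical Freivalds' testing over random ±1 or 0/1 vectors.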

Keywords: Fault-tolerance; algorithmic resilience; Gaussian Variant of Freivalds’ Algorithm; matrix multiplication; Gaussian random vector; failure probability
Date: 2020

Downloads:
https://doi.org/10.1515/mcma-2020-2076 (text/html)
For access to full text, subscription to the journal or payment for the individual article is required.



Persistent link: https://EconPapers.repec.org/RePEc:bpj:mcmeap:v:26:y:2020:i:4:p:273-284:n:6

Ordering information: This journal article can be ordered from
https://www.degruyter.com/journal/key/mcma/html

DOI: 10.1515/mcma-2020-2076


Monte Carlo Methods and Applications is currently edited by Karl K. Sabelfeld


Handle: RePEc:bpj:mcmeap:v:26:y:2020:i:4:p:273-284:n:6