Computing high-degree polynomial gradients in memory

Tinish Bhattacharya, George H. Hutchinson, Giacomo Pedretti, Xia Sheng, Jim Ignowski, Thomas Van Vaerenbergh, Ray Beausoleil, John Paul Strachan and Dmitri B. Strukov
Additional contact information
Tinish Bhattacharya: University of California at Santa Barbara
George H. Hutchinson: University of California at Santa Barbara
Giacomo Pedretti: Hewlett Packard Labs
Xia Sheng: Hewlett Packard Labs
Jim Ignowski: Hewlett Packard Labs
Thomas Van Vaerenbergh: Hewlett Packard Labs
Ray Beausoleil: Hewlett Packard Labs
John Paul Strachan: Forschungszentrum Juelich GmbH
Dmitri B. Strukov: University of California at Santa Barbara

Nature Communications, 2024, vol. 15, issue 1, 1-11

Abstract: Specialized function-gradient computing hardware could greatly improve the performance of state-of-the-art optimization algorithms. Prior work on such hardware, performed in the context of Ising machines and related concepts, is limited to quadratic polynomials and is not scalable to the commonly used higher-order functions. Here, we propose an approach for massively parallel gradient calculations of high-degree polynomials that is conducive to efficient mixed-signal in-memory computing circuit implementations and whose area scales proportionally with the product of the number of variables and the number of terms in the function and, most importantly, is independent of its degree. Two flavors of this approach are proposed. The first is limited to binary-variable polynomials, typical of combinatorial optimization problems, while the second is broader at the cost of a more complex periphery. To validate the former approach, we experimentally demonstrated solving a small-scale third-order Boolean satisfiability problem on integrated metal-oxide memristor crossbar circuits using a competitive heuristic algorithm. Simulation results for larger-scale, more practical problems show orders-of-magnitude improvements in area, speed, and energy efficiency compared to the state of the art. We discuss how our work could enable even higher-performance systems after co-designing algorithms to exploit massively parallel gradient computation.
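The scaling claim in the abstract can be illustrated in software. The Python/NumPy sketch below (an illustrative reconstruction under our own assumptions, not the authors' circuit or code; the function and example polynomial are hypothetical) computes every partial derivative of a binary-variable polynomial term by term. Each (term, variable) pair is visited exactly once, so the work, like the proposed hardware's area, grows with the product of the number of variables and the number of terms, independent of the polynomial's degree.

import numpy as np

def poly_gradient(x, coeffs, terms):
    # f(x) = sum_t coeffs[t] * prod_{i in terms[t]} x[i], with binary x.
    # The partial derivative d f / d x_i sums, over the terms containing i,
    # the coefficient times the product of the term's remaining variables.
    # Each (term, variable) pair is touched once, so the cost scales with
    # (#variables x #terms) and not with the degree.
    grad = np.zeros(len(x), dtype=float)
    for c, term in zip(coeffs, terms):
        for i in term:
            rest = [j for j in term if j != i]
            grad[i] += c * np.prod(x[rest])  # product over an empty set is 1.0
    return grad

# Hypothetical example: f(x) = 2*x0*x1*x2 - 3*x1*x3, a degree-3 polynomial.
x = np.array([1, 1, 0, 1])
print(poly_gradient(x, coeffs=[2.0, -3.0], terms=[[0, 1, 2], [1, 3]]))
# [ 0. -3.  2. -3.]

In the paper's setting the analogous multiply-accumulate passes are performed in parallel inside the memristor crossbar, with the (term, variable) incidence stored as conductances; the sketch only mirrors the arithmetic, not the analog implementation.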

Date: 2024
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (1)

Downloads: (external link)
https://www.nature.com/articles/s41467-024-52488-y Abstract (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-52488-y

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-024-52488-y

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:15:y:2024:i:1:d:10.1038_s41467-024-52488-y