AMED: Automatic Mixed-Precision Quantization for Edge Devices

Moshe Kimhi, Tal Rozen, Avi Mendelson and Chaim Baskin
Additional contact information
Moshe Kimhi: Computer Science Department, Technion IIT, Haifa 3200003, Israel
Tal Rozen: Computer Science Department, Technion IIT, Haifa 3200003, Israel
Avi Mendelson: Computer Science Department, Technion IIT, Haifa 3200003, Israel
Chaim Baskin: Computer Science Department, Technion IIT, Haifa 3200003, Israel

Mathematics, 2024, vol. 12, issue 12, 1-21

Abstract: Quantized neural networks are well known for reducing latency, power consumption, and model size without significant harm to performance, which makes them highly suitable for systems with limited resources and a low power budget. Mixed-precision quantization offers better utilization of customized hardware that supports arithmetic operations at different bitwidths. Existing quantization methods either aim to minimize the compression loss given a desired reduction or optimize a dependent variable for a specified property of the model (such as FLOPs or model size); both lead to inefficient performance when the model is deployed on specific hardware. More importantly, these methods assume that the loss manifold of the quantized model has a global minimum that coincides with the global minimum of its full-precision counterpart. Challenging this assumption, we argue that the optimal minimum changes as the precision changes, and thus it is better to view quantization as a random process. This lays the foundation for a different approach to quantizing neural networks: during training, the model is quantized to different precisions, bit allocation is treated as a Markov Decision Process, and an optimal bitwidth allocation is found using direct signals measured for specified behaviors on the particular hardware architecture. In doing so, we avoid the underlying assumption that the loss behaves the same way for a quantized model. Automatic Mixed-Precision Quantization for Edge Devices (dubbed AMED) demonstrates its superiority over current state-of-the-art schemes in terms of the trade-off between neural network accuracy and hardware efficiency, backed by a comprehensive evaluation.
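
To make the described search concrete, below is a minimal, self-contained NumPy sketch of this kind of hardware-aware bitwidth allocation. It is not the authors' AMED algorithm: the uniform quantizer, the per-bitwidth latency table (a stand-in for the direct signals AMED would measure on the target device), the accuracy proxy, and the greedy random search that stands in for a learned decision policy are all hypothetical choices made here for illustration.

import numpy as np

rng = np.random.default_rng(0)

def quantize_uniform(w, bits):
    # Symmetric uniform quantization of a tensor to the given bitwidth.
    qmax = 2.0 ** (bits - 1) - 1
    scale = max(float(np.abs(w).max()), 1e-8) / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

# Hypothetical per-layer latency (ms) for each bitwidth, standing in for
# direct signals measured on the particular edge device.
LATENCY_MS = {2: 0.20, 4: 0.35, 8: 0.60}
BITS = sorted(LATENCY_MS)

def accuracy_proxy(weights, alloc):
    # Toy stand-in for validation accuracy: negative quantization error.
    return -sum(float(np.mean((w - quantize_uniform(w, b)) ** 2))
                for w, b in zip(weights, alloc))

def reward(weights, alloc, lam=0.05):
    # Trade model quality against total measured latency.
    return accuracy_proxy(weights, alloc) - lam * sum(LATENCY_MS[b] for b in alloc)

def search_allocation(weights, steps=200):
    # Greedy random walk over bit allocations: the state is the current
    # per-layer bitwidth vector, an action re-assigns one layer's bitwidth,
    # and a move is kept only when it improves the reward.
    alloc = [max(BITS)] * len(weights)  # start at the highest precision
    best_r = reward(weights, alloc)
    for _ in range(steps):
        cand = list(alloc)
        cand[rng.integers(len(cand))] = int(rng.choice(BITS))
        r = reward(weights, cand)
        if r > best_r:
            alloc, best_r = cand, r
    return alloc, best_r

layers = [rng.normal(size=(64, 64)) * s for s in (1.0, 0.5, 0.1)]
alloc, r = search_allocation(layers)
print("bitwidths per layer:", alloc, "reward:", round(r, 4))

In the Markov Decision Process view, each re-assignment depends only on the current allocation; the paper's method would replace the fixed latency table and the error-based proxy with signals and accuracy obtained from the actual hardware and validation data during training.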

Keywords: deep learning; quantization; CNN; latency optimization
JEL-codes: C
Date: 2024

Downloads:
https://www.mdpi.com/2227-7390/12/12/1810/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/12/1810/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:12:p:1810-:d:1412600

Handle: RePEc:gam:jmathe:v:12:y:2024:i:12:p:1810-:d:1412600