Non-Zero Grid for Accurate 2-Bit Additive Power-of-Two CNN Quantization

Young Min Kim, Kyunghyun Han, Wai-Kong Lee, Hyung Jin Chang, Seong Oun Hwang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Quantization is an effective technique to reduce the memory and computational complexity of CNNs. Recent advances use additive powers-of-two to perform non-uniform quantization, which resembles a normal distribution and shows better performance than uniform quantization. With powers-of-two quantization, the computational complexity is also greatly reduced because slow multiplication operations are replaced with lightweight shift operations. However, the previously proposed grid formulations have serious problems at 2-bit quantization. In particular, these powers-of-two schemes produce zero values, which generate significant training error and cause low accuracy. In addition, due to improper grid formulation, they fall back to uniform quantization when the quantization level reaches 2 bits. For these reasons, a large CNN such as ResNet-110 may not even train properly under these powers-of-two schemes. To resolve these issues, we propose a new non-zero grid formulation that enables 2-bit non-uniform quantization and allows the CNN to be trained successfully in every attempt, even for a large network. The proposed technique quantizes weights as power-of-two values and projects them close to the mean area through a simple constant product on the exponential part. This allows our quantization scheme to closely resemble non-uniform quantization at 2 bits, enabling successful training at 2-bit quantization, which was not achieved in previous work. The proposed technique achieves 70.57% accuracy on the CIFAR-100 dataset trained with ResNet-110. This result is 6.24% higher than the additive powers-of-two scheme, which achieves only 64.33% accuracy. Besides achieving higher accuracy, our work maintains the same memory and computational efficiency as the original additive powers-of-two scheme.
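Illustrative sketch (not taken from the paper): the NumPy snippet below shows one way a non-zero 2-bit power-of-two grid of the kind described above could look, using 1 sign bit plus 1 exponent bit so that no weight is ever quantized to zero, with a constant factor on the exponent pulling the levels toward the mean of the weight distribution. The names nonzero_pot_quantize_2bit, alpha, and gamma are hypothetical and do not reflect the authors' notation or exact formulation.

    import numpy as np

    def nonzero_pot_quantize_2bit(w, alpha=1.0, gamma=0.5):
        # Hypothetical 2-bit non-zero power-of-two grid: 1 sign bit + 1 exponent bit,
        # i.e. four non-zero levels {±alpha * 2^(gamma * k)} with k in {-1, 0}.
        w = np.asarray(w, dtype=np.float64)
        sign = np.sign(w)
        sign[sign == 0] = 1.0                        # never map a weight to zero
        mag = np.clip(np.abs(w) / alpha, 1e-8, 1.0)  # normalize into the representable range
        k = np.clip(np.round(np.log2(mag)), -1, 0)   # nearest exponent, restricted to 2 levels
        # gamma plays the role of the "constant product on the exponential part":
        # it rescales the exponent so the levels sit closer to the mean of the weights.
        return sign * alpha * 2.0 ** (gamma * k)

    weights = np.random.randn(8) * 0.5
    print(nonzero_pot_quantize_2bit(weights))

With alpha = 1 and gamma = 0.5, the four levels are {±2^-0.5, ±1}; shrinking gamma moves the two magnitudes closer together, giving a non-uniform, zero-free grid, whereas a plain power-of-two grid at 2 bits would either include zero or collapse toward uniform spacing.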
Original language: English
Article number: 10087209
Pages (from-to): 32051-32060
Number of pages: 10
Journal: IEEE Access
Volume: 11
Early online date: 29 Mar 2023
DOIs
Publication status: Published - 3 Apr 2023

Keywords

  • Quantization (signal)
  • Deep learning
  • Convolutional neural networks
  • Gaussian distribution
  • Mathematical models
  • Internet of Things
  • Computational modeling
