
G-2020-23-EIW07

Random bias initialization improves quantized training


Binary neural networks improve the computational efficiency of deep models by a large margin. However, a performance gap remains between full-precision training and binary training. We offer insights into why this accuracy drop exists and call for a better understanding of binary network geometry. We start by analyzing full-precision neural networks with ReLU activations and comparing them with their binarized counterparts. This comparison suggests initializing networks with random biases, a counter-intuitive remedy.
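The sketch below illustrates the general idea of random bias initialization in PyTorch: replacing the usual zero biases with small random values before training. It is a minimal illustration only; the distribution, the scale `std=0.1`, and the helper name `init_random_bias` are assumptions for this example, not the authors' exact scheme.

```python
import torch.nn as nn

def init_random_bias(model, std=0.1):
    # Re-initialize every linear/conv bias with Gaussian noise instead of zeros.
    # The choice of distribution and std is illustrative, not taken from the paper.
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)) and module.bias is not None:
            nn.init.normal_(module.bias, mean=0.0, std=std)

# Example usage: apply before (binarized or full-precision) training.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
init_random_bias(model, std=0.1)
```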

9 pages

Document

G2023-EIW07.pdf (600 KB)