G-2020-23-EIW04

Deep LDA-pruned nets and their robustness

Deep neural networks usually have unnecessarily high complexity and possibly many features of low utility, especially for tasks they were not designed for. In this extended abstract, we present our Deep-LDA-based pruning framework as a solution to such problems. In addition to an accuracy-complexity analysis, we investigate our approach's potential to improve networks' robustness against adversarial attacks (e.g. FGSM and NewtonFool attacks) and noise (e.g. Gaussian, Poisson, Speckle). Experimental results on CIFAR100, Adience, and LFWA illustrate our framework's efficacy. Through pruning, we can derive smaller, yet accurate and more robust, models suitable for particular tasks.
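For context on the robustness evaluation mentioned above, the following is a minimal PyTorch sketch of the standard FGSM perturbation (x_adv = x + ε · sign(∇x loss)); the model, inputs, and epsilon value are placeholders and not taken from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft FGSM adversarial examples for inputs x with labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input in the direction that increases the loss the most
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A pruned model's robustness can then be compared to the original's by measuring accuracy on these perturbed inputs.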

8 pages

Document

G2023-EIW04.pdf (1.8 MB)