Edge Intelligence Workshop
March 2-3, 2020

SCIENTIFIC PROGRAM

The presentation files of the invited speakers and the pdf file of the presented posters are accessible from https://drive.google.com/drive/folders/1kaRXK0hTiKwBNN6JNi-7fcGisl0xhm7j?usp=sharing

Monday 2 March 2020
08:30-09:00 Yanhui Geng, Sébastien Le Digabel Directors' opening
09:00-09:30 Mehdi Rezagholizadeh Edge intelligence challenges in Huawei Noah's Ark Speech and Semantics Lab
09:30-10:00 Bianca Schroeder Machine learning and data mining for better computer systems
10:00-10:30 Coffee Break
10:30-11:00 Alejandro Murua Neural performance under compression
11:00-11:30 Masoud Asgharian On intrinsic dimension and causality
11:30-12:00 Ashish Khisti Information theoretic approaches in distributed machine learning
12:00-13:00 Lunch box and Poster Session 1
Nonvikan Karl-Augustt Alahassa; Alejandro Murua
Shallow Structured Potts Neural Network Regression (S-SPNNR)

Serge Vicente; Alejandro Murua
Statistical learning with the determinantal point process

Damoon Robatian; Masoud Asgharian
Semi⁺-supervised learning under sample selection bias

Dipayan Mitra; Ashish Khisti
Distributed Stochastic Gradient Descent with Quantized Compressive Sensing

Alex Labach; Shahrokh Valaee
Neural Network Sparsification Using Gibbs Measures

Eyyüb Sari; Vahid Partovi Nia
Batch Normalization in Quantized Networks

Mohammad Javad Shafiee; Andrew Hryniowski; Francis Li; Zhong Qiu Lin; Alexander Wong
State of Compact Architecture Search For Deep Neural Networks

Adel Abusitta; Omar Abdul Wahab; Talal Halabi
Deep learning for proactive cooperative malware detection systems

Vasileios Lioutas; Ahmad Rashid; Krtin Kumar
On Compressing The Embedding Matrix Of Language Models For Edge Deployment

Ramchalam Ramakrishnan; Eyyüb Sari; Vahid Partovi Nia
Differentiable Mask for Pruning Convolutional and Recurrent Networks

Bharat Venkitesh; Md. Akmal Haidar; Mehdi Rezagholizadeh
On-Device Neural Text Segmentation for Augmented Reality Translation
13:00-13:30 Shahrokh Valaee A unifying framework for dropout in neural networks
13:30-14:00 Ali Ghodsi Supervised Random Projections with Light
14:30-15:00 Hassan Ashtiani Learning through the Lens of Compression
15:00-15:30 Coffee Break
15:30-16:00 Yingxue Zhang Bayesian graph neural networks and their application in recommendation systems
16:00-16:30 Nandita Vijaykumar Hardware-software co-design for efficiency and programmability for sparse matrix operations
16:30-17:00 Dominique Orban Perspectives in computational optimization
17:00-17:30 Charles Audet Challenges and perspectives in blackbox optimization
17:30-18:00 Sébastien Le Digabel Blackbox optimization with the NOMAD solver
Tuesday 3 March 2020
08:30-09:00 Brett Meyer Probabilistic sequential multi-objective optimization of convolutional neural networks
09:00-09:30 Warren Gross Stochastic computing for machine learning
09:30-10:00 Eyal de Lara System support for smart applications on the edge
10:00-10:30 Coffee Break
10:30-11:00 Pascal Poupart Diachronic embedding for temporal knowledge graph completion
11:00-11:30 Tiago Falk Signal processing for domain-enriched learning for speech applications
11:30-12:00 Yaoliang Yu Multi-objective federated learning: Convergence and robustness
12:00-13:00 Lunch box and Poster Session 2
Ji Xin; Raphael Tang; Jaejun Lee; Yaoliang Yu; Jimmy Lin
Progress and Challenges in Early Exit for BERT

Guojun Zhang; Yaoliang Yu
Convergence of Gradient Methods on Bilinear Zero-Sum Games

Kaiwen Wu; Allen Houze Wang; Yaoliang Yu
Efficient Wasserstein Adversarial Attacks

Ibtihel Amara; James Clark
Uncertainty Transfer with Knowledge Distillation

Ghouthi Boukli Hacene; Vincent Gripon; Matthieu Arzel; Nicolas Farrugia; Yoshua Bengio
Pruning for Efficient Hardware Implementations of Deep Neural Networks

Qing Tian; Tal Arbel; James Clark
Deep LDA-Pruned Nets and their Robustness

Zeou Hu; Yuxin Zhu; Ihab F Ilyas; Yaoliang Yu
Fair Machine Learning through multi-objective optimization

Xinlin Li; Vahid Partovi Nia
Random Bias Initialization Improves Quantized Training

Joao Felipe Santos; Tiago H Falk
Pruning recurrent speech enhancement models via stuck neuron elimination

Mahdi Zolnouri; Xinlin Li; Vahid Partovi Nia
Importance of Data Loading Pipeline in Training Deep Neural Networks
13:00-13:30 Andrea Lodi Neural networks and mixed-integer programming
13:30-14:00 Daniel Aloise A Lagrangean-based score for assessing the quality of pairwise constraints in semi-supervised clustering
14:30-15:00 Mohan Liu On-device: bringing artificial intelligence closer to the consumer
15:00-15:30 Coffee Break
15:30-16:00 Yoshua Bengio Towards low-energy deep learning
16:00-16:30 Vahid Partovi Nia Edge intelligence challenges in Huawei Noah's Ark Computer Vision Lab
16:30-17:00 James Clark Task-dependent structured pruning of neural networks
17:00-17:30 Fatiha Sadat Natural language processing in low-resource settings
17:30-18:00 Adam Oberman From an ODE for Nesterov’s method to Accelerated SGD
18:00-18:30 Murat Erdogdu Convergence rates for diffusions-based sampling and optimization methods
18:30-19:00 Organizers Workshop wrap up



GERAD    Huawei    CERC Data Science CRM