A Universal Compression Algorithm for Deep Neural Networks

In the past decade, deep neural networks (DNNs) have shown state-of-the-art performance on a wide range of complex machine learning tasks. Many of these results have been achieved while growing the size of DNNs, creating a demand for their efficient compression and transmission. This talk will present DeepCABAC, a universal compression algorithm for DNNs which, through adaptive, context-based rate modeling, enables optimal quantization and coding of neural network parameters. It compresses state-of-the-art DNNs to as little as 1.5% of their original size with no loss of accuracy and has been selected as the basic compression technology for the emerging MPEG-7 Part 17 standard on DNN compression.
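To make the idea of rate-aware quantization more concrete, the Python sketch below assigns each weight of a tensor to the quantization level that minimizes squared distortion plus a rate estimate from a simple adaptive frequency model. It is only an illustration of the principle, not the DeepCABAC implementation: the actual codec derives its rate estimates from CABAC's context-based adaptive binary probability models and then entropy-codes the quantized levels with CABAC. The function name and parameters (rd_quantize, step, lam) are hypothetical.

    # Illustrative sketch only (not the DeepCABAC implementation): rate-distortion-
    # optimized scalar quantization of a weight tensor. DeepCABAC itself derives the
    # rate term from CABAC's context-based adaptive binary probability models.
    import numpy as np

    def rd_quantize(weights, step, lam):
        """Assign each weight to the quantization level that minimizes
        squared distortion + lam * estimated rate (in bits)."""
        levels = np.round(weights / step).astype(int)
        # Candidate levels per weight: the nearest level and its two neighbours.
        candidates = levels[:, None] + np.array([-1, 0, 1])
        # Crude adaptive rate model: -log2 of the empirical level frequencies.
        uniq, counts = np.unique(levels, return_counts=True)
        freq = dict(zip(uniq, counts / counts.sum()))
        rate = lambda l: -np.log2(freq.get(l, 1.0 / (len(levels) + 1)))
        quantized = np.empty_like(weights)
        for i, w in enumerate(weights):
            costs = [(w - c * step) ** 2 + lam * rate(c) for c in candidates[i]]
            quantized[i] = candidates[i][int(np.argmin(costs))] * step
        return quantized

    w = 0.05 * np.random.randn(10000).astype(np.float32)   # toy weight tensor
    w_hat = rd_quantize(w, step=0.01, lam=1e-4)
    print("distinct levels:", len(np.unique(w_hat)), "MSE:", float(np.mean((w - w_hat) ** 2)))

Increasing lam trades reconstruction fidelity for fewer, more frequently used levels and hence a smaller coded size; in DeepCABAC this trade-off is what yields the reported compression ratios without accuracy loss.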

Speakers, Panelists and Moderators

  • WOJCIECH SAMEK
    Head of the Machine Learning Group
    Fraunhofer Heinrich Hertz Institute
    Wojciech Samek founded the Machine Learning Group at Fraunhofer Heinrich Hertz Institute in 2014 and has headed it since. He studied computer science at Humboldt University of Berlin, Heriot-Watt University and the University of Edinburgh from 2004 to 2010 and received the Dr. rer. nat. degree with distinction (summa cum laude) from the Technical University of Berlin in 2014. In 2009 he was a visiting researcher at NASA Ames Research Center, Mountain View, CA, and in 2012 and 2013 he had several short-term research stays at ATR International, Kyoto, Japan. He was awarded scholarships from the European Union's Erasmus Mundus programme, the Studienstiftung des deutschen Volkes and the DFG Research Training Group GRK 1589/1. He is a principal investigator at the Berlin Institute for the Foundations of Learning and Data (BIFOLD), a member of the European Lab for Learning and Intelligent Systems (ELLIS) and associated faculty at the DFG graduate school BIOQIC. Furthermore, he is an editorial board member of Digital Signal Processing, PLOS ONE and IEEE Transactions on Neural Networks and Learning Systems, and an elected member of the IEEE MLSP Technical Committee. He contributes to various international standardization initiatives, including the MPEG AHG on Compression of Neural Networks for Multimedia Content Description and Analysis, and has organized special sessions, workshops and tutorials at top-tier machine learning and signal processing conferences (NIPS, ICML, CVPR, ICASSP, MICCAI). He has received multiple best paper awards and has authored more than 100 journal and conference papers, predominantly in the areas of deep learning, interpretable machine learning, neural network compression and federated learning.

Date: 21 Aug 2020
Time: 13:00 - 14:00 CEST (Geneva)
Topics: 5G