Hasso-Plattner-Institut
Prof. Dr. Tilmann Rabl
 

Holger Fröning

Affiliation: Heidelberg University
Title: On Accelerating Deep and Bayesian Neural Architectures

 

Abstract

Deep artificial neural networks are a prominent approach to decision-making under uncertainty. They have substantially improved performance on a wide range of prediction tasks, including image recognition, speech processing, and signal analysis. However, this comes at the cost of considerable computational and memory requirements. At the same time, there is a growing need to deploy machine learning on resource-constrained devices such as Internet of Things (IoT) nodes, edge devices, and mobile platforms. In this talk, we will first review prior work on accelerating Deep Neural Networks (DNNs) through compression techniques, in particular quantization, pruning, and architecture optimization. While DNNs excel at operating under uncertainty, they cannot reason about uncertainty itself: detecting situations in which a neural architecture cannot provide a well-founded prediction is crucial. Consequently, probabilistic models have recently attracted considerable interest. We will give a brief overview of these models and discuss possible avenues to address their substantially increased computational demands.
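
To illustrate the kind of compression the abstract refers to, the following is a minimal sketch of generic post-training symmetric weight quantization in NumPy. It is not the speaker's specific method; the function names, per-tensor scaling, and the 8-bit width are illustrative assumptions only.

```python
# Illustrative sketch: symmetric, per-tensor post-training quantization of a
# weight tensor to an assumed 8-bit width. Textbook scheme, not the talk's method.
import numpy as np

def quantize_symmetric(weights: np.ndarray, num_bits: int = 8):
    """Map float weights to signed integers; return integers and the scale."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    scale = np.max(np.abs(weights)) / qmax    # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Example: quantization error on a random weight matrix
w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_symmetric(w)
print("mean abs error:", np.mean(np.abs(w - dequantize(q, s))))
```

The point of such schemes is that storage shrinks by roughly 4x relative to 32-bit floats and integer arithmetic can be used, at the price of a small approximation error.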
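
As one common example of why probabilistic models are computationally more demanding than plain DNNs, the sketch below uses Monte Carlo dropout to estimate predictive uncertainty on a toy two-layer network. This technique, the network shapes, and the choice of T = 100 samples are assumptions for illustration; the talk does not prescribe this particular approach.

```python
# Illustrative sketch: Monte Carlo dropout as one way to obtain predictive
# uncertainty. It shows why probabilistic inference multiplies compute:
# T stochastic forward passes instead of a single deterministic one.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((16, 8)), np.zeros(8)   # toy 2-layer MLP (assumed shapes)
W2, b2 = rng.standard_normal((8, 1)), np.zeros(1)

def forward(x, drop_p=0.2):
    h = np.maximum(x @ W1 + b1, 0.0)                 # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p              # dropout stays active at inference
    h = h * mask / (1.0 - drop_p)
    return h @ W2 + b2

x = rng.standard_normal((1, 16))
samples = np.stack([forward(x) for _ in range(100)]) # T = 100 -> roughly 100x the compute
mean, std = samples.mean(axis=0), samples.std(axis=0)
print(f"prediction {mean.ravel()[0]:.3f} +/- {std.ravel()[0]:.3f}")
```

A large standard deviation across the samples signals an input on which the network cannot provide a well-founded prediction, which is exactly the situation the abstract argues must be detected.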