
Quest | CBMM Special Seminar: Adi Shamir
Description
Title: The Insecurity of Machine Learning
Abstract: The development of deep neural networks over the last decade has revolutionized machine learning and led to major improvements in our ability to perform many computational and cognitive tasks. However, this progress was accompanied by the discovery that deep neural networks are extremely fragile: it is very easy to fool almost any neural network by making tiny changes to its inputs. These adversarial examples make it difficult to trust the outputs of such computations whenever the input can be manipulated by an adversary, a problem with serious implications for object recognition, autonomous driving, cybersecurity, and many other applications.
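For readers unfamiliar with adversarial examples, the sketch below illustrates the flavor of one standard construction, the fast gradient sign method (FGSM). The abstract does not name a specific attack, and the toy logistic "model", its weights, and all numbers here are made up purely for illustration:

```python
import numpy as np

# A minimal FGSM sketch: a toy logistic classifier stands in for a trained
# network. All parameters below are hypothetical, chosen only to illustrate
# how tiny per-coordinate changes accumulate in high dimensions.

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # hypothetical trained weights
b = 0.1                    # hypothetical bias

def score(x):
    """Probability that input x belongs to class 1 (logistic model)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=100)   # some input the model classifies
eps = 0.05                 # max change allowed per coordinate

# The gradient of the score with respect to x points along w, so stepping
# each coordinate by eps against sign(w) lowers the score as much as an
# L-infinity budget of eps permits. Each coordinate moves only slightly,
# but the 100 small moves add up to a large shift in the model's output.
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))  # confidence changes markedly
```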
In this talk I will describe a simple conceptual framework which enables us to think about this surprising phenomenon from a fresh perspective, turning the existence of adversarial examples in deep neural networks from a baffling mystery into an unavoidable consequence of the geometry of high-dimensional input spaces. Time permitting, I will then describe several other surprising results on the security of deep neural networks, including how one can backdoor state-of-the-art facial recognition systems by mathematically modifying a small fraction of their weights, and how one can efficiently extract all the weights of a network by analyzing its answers to a small number of chosen queries.
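The weight-extraction result concerns deep networks and relies on far more sophisticated cryptanalytic techniques than shown here; the toy below only conveys the basic idea of chosen queries for a single linear layer, with all names and values hypothetical:

```python
import numpy as np

# Toy model-extraction sketch: if a black-box model happens to compute a
# single affine map f(x) = Wx + b, then a handful of chosen queries reveal
# its parameters exactly. Deep networks require much subtler attacks.

rng = np.random.default_rng(1)
W_secret = rng.normal(size=(3, 5))  # hidden weights (unknown to attacker)
b_secret = rng.normal(size=3)       # hidden bias (unknown to attacker)

def oracle(x):
    """Black-box access: we may query the model but not read its weights."""
    return W_secret @ x + b_secret

# Chosen queries: the zero vector reveals the bias, and each standard
# basis vector then reveals one column of the weight matrix.
b_hat = oracle(np.zeros(5))
W_hat = np.column_stack([oracle(e) - b_hat for e in np.eye(5)])

assert np.allclose(W_hat, W_secret) and np.allclose(b_hat, b_secret)
```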