To advance further, deep learning systems need to become more transparent. They must demonstrate that they are reliable, can withstand malicious attacks, and can explain the reasoning behind their decisions, especially in safety-critical applications such as self-driving cars.
We invite MIT undergraduates, graduate students, and postdoctoral scholars to submit current research abstracts to the Quest Symposium on Robust, Interpretable Deep Learning Systems, to be held on Nov. 20, 2018. We welcome submissions on attack and defense methods for deep neural networks, visualizations, interpretable modeling, and other methods for revealing deep network behavior, structure, sensitivities, and biases.
The event will feature faculty talks, an afternoon poster session, and refreshments.