Oregon State University




Event Details

MS Final Examination – Xin Li

Monday, September 26, 2016 2:00 PM - 4:00 PM

Don't Fool Me: Detecting Adversarial Examples in Deep Networks
Deep learning has greatly improved visual recognition in recent years. However, recent research has shown that there exist many adversarial examples that can negatively impact the performance of such architectures. Departing from previous perspectives that focus on improving the classifiers themselves, this work detects adversarial examples by analyzing whether they come from the same distribution as normal examples. An approach based on spectral analysis deep inside the network is proposed. The insights gained from this approach help to develop a comprehensive framework that detects almost all adversarial examples. After detection, we show that many adversarial examples can be recovered by simply applying a small average filter to the image. These findings should prompt more thought about the classification mechanisms in deep convolutional neural networks.
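The recovery step mentioned in the abstract can be illustrated with a minimal sketch. This is a hypothetical example, not code from the thesis: it implements a plain k x k average (mean) filter in NumPy and shows that local averaging pulls an image corrupted by a small high-frequency perturbation (the typical form of an adversarial perturbation) back toward the clean image.

```python
import numpy as np

def average_filter(image, k=3):
    """Apply a k x k average (mean) filter to a 2-D image.

    Illustrative sketch of the recovery idea in the abstract: averaging
    over a small neighborhood smooths out small, high-frequency
    perturbations while largely preserving the underlying image.
    """
    pad = k // 2
    # Edge padding keeps the output the same size as the input.
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    # Sum the k*k shifted copies of the image, then normalize.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

# Toy demonstration: a constant "clean" image plus a small +/-0.1
# high-frequency perturbation standing in for an adversarial one.
rng = np.random.default_rng(0)
clean = np.ones((8, 8))
perturbed = clean + 0.1 * rng.choice([-1.0, 1.0], size=(8, 8))
smoothed = average_filter(perturbed)
# Smoothing reduces the distance to the clean image.
assert np.abs(smoothed - clean).mean() < np.abs(perturbed - clean).mean()
```

The thesis pairs this with detection: only examples first flagged as adversarial would be passed through such a filter before re-classification.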

Major Advisor: Fuxin Li
Committee: Stephen Ramsey
Committee: Rakesh Bobba
GCR: Henri Jansen

Kelley Engineering Center
Nicole Thompson
1 541 737 3617
Nicole.Thompson at oregonstate.edu
School of Electrical Engineering and Computer Science