
Stealing Neural Networks With Model Extraction Attacks, Dr. Nicholas Carlini

Date:
Thu, August 27, 2020
11:00 AM – 12:00 PM EDT

Neural networks have recently become sufficiently accurate across a wide range of tasks that they are now deployed in many practical settings. In order to protect their intellectual property and security, these models are often released only through black-box prediction APIs or, when they must be shipped on-device, are stored encrypted or obfuscated.

In this talk I develop the techniques necessary to steal neural networks by directly extracting their parameters. Unlike attacks that train a copy of a neural network to behave similarly on most inputs, model extraction attacks recover a functionally equivalent model that behaves identically on all inputs. For example, we can extract a hundred-thousand-parameter model whose outputs match the original's to at least 25 bits of precision.
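For intuition only, and not as the method presented in the talk, the sketch below shows the simplest instance of exact parameter recovery: a purely linear model behind a black-box prediction API can be reconstructed to floating-point precision with d+1 queries. The attacks discussed in the talk extend this kind of reasoning to deep ReLU networks, which are piecewise linear. All model names and sizes in the sketch are hypothetical.

```python
import numpy as np

# Hypothetical "victim": a linear model f(x) = W x + b exposed only through
# a black-box prediction function, a stand-in for the prediction APIs
# mentioned in the abstract. All names and sizes here are illustrative.
rng = np.random.default_rng(0)
d_in, d_out = 8, 3
W_secret = rng.normal(size=(d_out, d_in))
b_secret = rng.normal(size=d_out)

def query(x):
    """Black-box oracle: returns only the model's output for input x."""
    return W_secret @ x + b_secret

# Exact extraction with d_in + 1 queries:
#   b = f(0), and column i of W is f(e_i) - b.
b_hat = query(np.zeros(d_in))
W_hat = np.stack([query(np.eye(d_in)[i]) - b_hat for i in range(d_in)], axis=1)

# The recovered parameters match the secret ones to floating-point precision.
assert np.allclose(W_hat, W_secret)
assert np.allclose(b_hat, b_secret)
```

The contrast with attacks that train a substitute model is that nothing here is learned: the parameters are solved for directly from the oracle's answers, which is what makes the recovered model functionally equivalent rather than merely similar.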

I conclude with a discussion of the theoretical and practical consequences of such attacks, along with directions for future work necessary to scale these attacks to more realistic environments.


With a mission of research, innovation, and workforce development, the Commonwealth Cyber Initiative (CCI) focuses on the intersection of security, autonomous systems, and data. Funded by the Commonwealth of Virginia, CCI is a highly connected statewide network that engages institutions of higher education, industry, government, and nongovernmental and economic development organizations. CCI's network includes 39 higher education institutions and 320 faculty members, as well as more than 20 industry partners. CCI was established in the 2018-20 Virginia budget with an investment of approximately $20 million annually from 2020 onward. Follow us on Twitter, LinkedIn, Facebook, Instagram, and YouTube.

Thanks for your interest! Please contact Kendall Beebe at kwbeebe@vt.edu to get involved.

CCI will post a recording of the webinar, which you can watch on CCI’s YouTube channel.