Please use this identifier to cite or link to this item: https://doi.org/10.1145/3319535.3354261
DC Field: Value
dc.title: Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment
dc.contributor.author: Yang, Ziqi
dc.contributor.author: Zhang, Jiyi
dc.contributor.author: Chang, Ee-Chien
dc.contributor.author: Liang, Zhenkai
dc.date.accessioned: 2021-08-20T07:11:01Z
dc.date.available: 2021-08-20T07:11:01Z
dc.date.issued: 2019-01-01
dc.identifier.citation: Yang, Ziqi; Zhang, Jiyi; Chang, Ee-Chien; Liang, Zhenkai (2019-01-01). Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment. ACM SIGSAC Conference on Computer and Communications Security (CCS): 225-240. ScholarBank@NUS Repository. https://doi.org/10.1145/3319535.3354261
dc.identifier.isbn: 9781450367479
dc.identifier.issn: 15437221
dc.identifier.uri: https://scholarbank.nus.edu.sg/handle/10635/198449
dc.description.abstract: The wide application of deep learning techniques has raised new security concerns about training data and test data. In this work, we investigate the model inversion problem under adversarial settings, where the adversary aims to infer information about the target model's training data and test data from the model's prediction values. We develop a solution that trains a second neural network to act as the inverse of the target model and perform the inversion. The inversion model can be trained with black-box access to the target model. We propose two main techniques for training the inversion model in adversarial settings. First, we leverage the adversary's background knowledge to compose an auxiliary set for training the inversion model, which does not require access to the original training data. Second, we design a truncation-based technique to align the inversion model, enabling effective inversion of the target model from the partial predictions the adversary obtains on the victim user's data. We systematically evaluate our approach across various machine learning tasks and model architectures on multiple image datasets. We also confirm our results on Amazon Rekognition, a commercial prediction API that offers "machine learning as a service". We show that even with only partial knowledge of the black-box model's training data, and with only partial prediction values, our inversion approach still performs accurate inversion of the target model and outperforms previous approaches.
dc.publisher: ASSOC COMPUTING MACHINERY
dc.source: Elements
dc.subject: Science & Technology
dc.subject: Technology
dc.subject: Computer Science, Information Systems
dc.subject: Computer Science, Theory & Methods
dc.subject: Telecommunications
dc.subject: Computer Science
dc.subject: neural networks
dc.subject: deep learning
dc.subject: model inversion
dc.subject: security
dc.subject: privacy
dc.type: Conference Paper
dc.date.updated: 2021-08-20T05:14:55Z
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi: 10.1145/3319535.3354261
dc.description.sourcetitle: ACM SIGSAC Conference on Computer and Communications Security (CCS)
dc.description.page: 225-240
dc.published.state: Published
Appears in Collections: Staff Publications, Elements

Files in This Item:
CCS2019.pdf (3.98 MB, Adobe PDF, Open access, Published version)



Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.