Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/214505
Title: ON ADVERSARIAL MACHINE LEARNING AND ROBUST OPTIMIZATION
Authors: ZHAO YUE
Keywords: Distributionally Robust Optimization, Adversarial Machine Learning, Backdoor Attack, 3D Deep Learning, Chance-constrained Program, Maritime Study
Issue Date: 4-Aug-2021
Citation: ZHAO YUE (2021-08-04). ON ADVERSARIAL MACHINE LEARNING AND ROBUST OPTIMIZATION. ScholarBank@NUS Repository.
Abstract: This thesis studies adversarial machine learning and robust optimization. Adversarial machine learning examines the robustness and security of machine learning algorithms, deep learning in particular, while robust optimization is a powerful technique for handling uncertainty in operations research problems. We first interpret adversarial attacks through the lens of robust optimization, covering the relation between commonly used uncertainty sets and several types of adversarial samples, the connection between adversarial training and the min-max framework, and the link between the Wasserstein adversarial set and distributionally robust optimization. Inspired by recent developments in distributionally robust optimization, we propose the novel concept of a Wasserstein adversarial set, which generates a distribution of adversarial samples by solving a tractable optimization problem.

Second, we investigate adversarial machine learning problems specific to the emerging field of 3D deep learning, which underlies many artificial intelligence applications such as autonomous vehicles and augmented reality. We study the evasion (adversarial) attack in the inference phase and the data-poisoning backdoor attack in the training phase. The evasion attack probes the isometry robustness of 3D deep networks and dispels the illusion that 3D deep learning is more robust than its 2D counterpart, especially with respect to geometric transformations; it also deepens the understanding of the structure of 3D neural networks through a number of interesting observations. The data-poisoning backdoor attack embeds a malicious functionality in a neural network by deliberately poisoning a small proportion of the training data. Such a backdoor stays silent on standard test data but activates once a specific trigger pattern appears; for example, it could make a network classify normal cars as obstacles but a black car as flat, which may result in catastrophic accidents. We propose, for the first time, a framework of backdoor attacks on 3D deep learning under various settings.

Finally, we apply distributionally robust chance-constrained models to a vessel deployment problem in operations research. High-quality solutions are obtained with a sequential convex optimization algorithm, and we compare the existing model with the distributionally robust model in a data-driven setting. The numerical results suggest that the distributionally robust model is considerably more compelling than the existing model under moderately large uncertainty variation and a low risk threshold.
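As a concrete illustration of the min-max framework referenced above, the following minimal PyTorch sketch (not taken from the thesis; model, loss_fn, eps, alpha, and steps are illustrative names) approximates the inner maximization with projected gradient ascent over an l_inf ball, the uncertainty set most commonly related to adversarial samples:

    import torch

    def pgd_inner_max(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=10):
        # Approximate the inner maximization of the min-max adversarial
        # training objective: find the worst-case perturbation inside an
        # l_inf ball of radius eps (the "uncertainty set") around x.
        x = x.detach()
        x_adv = x.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Gradient-sign ascent step, then projection back onto the ball.
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)
        return x_adv.detach()

The outer minimization of adversarial training then fits the model parameters on these worst-case samples instead of the clean ones.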
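The data-poisoning backdoor attack is described above only at a high level. The sketch below, which assumes image-like NumPy arrays of shape N x H x W with a simple corner-patch trigger (the thesis's 3D point-cloud triggers would differ; poison_dataset, target_label, and rate are illustrative names), shows the generic poisoning step:

    import numpy as np

    def poison_dataset(xs, ys, target_label, rate=0.05, seed=0):
        # Stamp a fixed trigger onto a small random fraction of the training
        # samples and relabel them with the attacker's target class; a model
        # trained on this data behaves normally on clean inputs but predicts
        # target_label whenever the trigger is present.
        rng = np.random.default_rng(seed)
        xs, ys = xs.copy(), ys.copy()
        n_poison = int(rate * len(xs))
        idx = rng.choice(len(xs), size=n_poison, replace=False)
        xs[idx, -3:, -3:] = 1.0  # 3x3 patch in the corner acts as the trigger
        ys[idx] = target_label
        return xs, ys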
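The abstract names but does not state the distributionally robust chance-constrained model. A generic form of such a model (not the thesis's exact vessel-deployment formulation) is

    \min_{x \in \mathcal{X}} \; c^\top x
    \quad \text{s.t.} \quad
    \inf_{\mathbb{P} \in \mathcal{F}} \, \mathbb{P}\big[ g(x, \tilde{\xi}) \le 0 \big] \ge 1 - \epsilon,

where \mathcal{F} is an ambiguity set of candidate distributions (for example, a ball around the empirical distribution), \tilde{\xi} is the uncertain parameter, and \epsilon is the risk threshold: the constraint must hold with probability at least 1 - \epsilon under every distribution in \mathcal{F}.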
URI: https://scholarbank.nus.edu.sg/handle/10635/214505
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File       Size     Format     Access Settings  Version
ZhaoY.pdf  7.64 MB  Adobe PDF  OPEN             None

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.