Adversarial Learning and Secure AI
Abstract
Deep learning relies on enormous training datasets, whose acquisition and curation may be insecure. Thus, deep learning is susceptible to backdoor (Trojan) and "error-generic" data-poisoning attacks. Trained models are also susceptible to adversarial perturbations (test-time evasion attacks). Moreover, training sets may be class-imbalanced, and deep learning may overfit to the training data. In this talk, we describe methods to address such problems in deep classification (discrete decision-making) and in natural language generative AIs (with discrete sequential decision-making). Defenses can be crafted for before/during-training (data cleansing or correction), post-training (no training data available), and operational (test-time) scenarios. Some post-training defenses leverage a small clean dataset, while others do not. Some defenses remove "superfluous" DNN functionality that enables attacks. This research -- the basis for the 2023 Cambridge University Press book "Adversarial Learning and Secure AI" and numerous other publications -- was supported in part by AFOSR DDDAS and NSF SBIR grants and was conducted in collaboration with former and current Ph.D. students at Penn State.
Bio
Dr. Miller joined Penn State's EE Department in 1995. He is an active researcher in machine learning (ML), data compression, and bioinformatics. He publishes regularly on ML problems involving, e.g., unsupervised clustering, supervised classification, semi-supervised learning, adversarial learning, feature selection, maximum entropy statistical inference, annealing-based techniques, and hidden Markov models. His publications have appeared in NeurIPS, ICLR, IJCAI, IEEE S&P, IEEE T-PAMI, IEEE TNN-LS, IEEE Transactions on Signal Processing, Neural Computation, and Proceedings of the IEEE. He is also the author, with Zhen Xiang and George Kesidis, of the Cambridge University Press book "Adversarial Learning and Secure AI". Dr. Miller did seminal work on semi-supervised learning in 1996 and received an NSF CAREER Award that same year. He served on the IEEE Signal Processing Society Conference Board from 2019 to 2022 and is currently on the Management Board for IEEE Transactions on Artificial Intelligence. He was Chair of the Machine Learning for Signal Processing Technical Committee within the IEEE Signal Processing Society from 2007 to 2009, an Associate Editor for IEEE Transactions on Signal Processing from 2004 to 2007, and General Chair for the 2001 IEEE Workshop on Neural Networks for Signal Processing. Dr. Miller has been a PI or co-PI on grants from NSF, AFOSR, NIH, ONR, AFRL, and NASA. He is also co-founder of the startup Anomalee, Inc., which received an NSF SBIR award in the area of security for AI.
Media Contact: Iam-Choon Khoo