One-Index Vector Quantization Based Adversarial Attack on Image Classification

Haiju Fan, Xiaona Qin, Shuang Chen, Hubert P. H. Shum and Ming Li
Pattern Recognition Letters (PRL), 2024

Impact Factor: 3.9


Abstract

To improve storage and transmission, images are generally compressed. Vector quantization (VQ) is a popular compression method, as it achieves a higher compression ratio than other compression techniques. Despite this, existing adversarial attack methods on image classification are mostly performed in the pixel domain, with few exceptions in the compressed domain, making them less applicable in real-world scenarios. In this paper, we propose a novel one-index attack method that generates adversarial images in the VQ domain using a differential evolution algorithm, successfully causing victim models to misclassify the decompressed images. The one-index attack modifies a single index in the compressed data stream so that the decompressed image is misclassified, thereby limiting the perturbation to a single VQ index. The proposed method is a semi-black-box attack, which better reflects real-world attack scenarios. We apply our method to attack three popular image classification models, i.e., ResNet, NIN, and VGG16. On average, 55.9% and 77.4% of the images in CIFAR-10 and Fashion MNIST, respectively, are successfully attacked, with a high level of misclassification confidence and a low level of image perturbation.
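To illustrate the idea, the following is a minimal sketch (not the authors' implementation): it assumes a 4x4-block VQ codec, a NumPy codebook, and a hypothetical predict_proba(image) wrapper around the victim classifier, and it uses SciPy's differential evolution to search jointly over which single index to change and which codeword to substitute.

# Minimal sketch of a one-index VQ attack driven by differential evolution.
# Assumptions (not from the paper): 4x4 blocks, a NumPy codebook of shape
# (K, BLOCK*BLOCK*c), and a placeholder predict_proba(image) victim-model wrapper.
import numpy as np
from scipy.optimize import differential_evolution

BLOCK = 4  # assumed block size

def vq_decode(indices, codebook, h, w, c):
    """Rebuild an (h, w, c) image from a flat, row-major array of VQ indices."""
    blocks = codebook[indices].reshape(h // BLOCK, w // BLOCK, BLOCK, BLOCK, c)
    return blocks.transpose(0, 2, 1, 3, 4).reshape(h, w, c)

def one_index_attack(indices, codebook, predict_proba, true_label, h, w, c):
    """Search for a single (position, codeword) swap that lowers the true-class score."""
    def loss(x):
        pos, code = int(x[0]), int(x[1])
        perturbed = indices.copy()
        perturbed[pos] = code                      # modify exactly one VQ index
        image = vq_decode(perturbed, codebook, h, w, c)
        return predict_proba(image)[true_label]    # minimise true-class confidence

    bounds = [(0, len(indices) - 1), (0, len(codebook) - 1)]
    result = differential_evolution(loss, bounds, maxiter=50, popsize=20,
                                    tol=1e-6, seed=0, polish=False)
    pos, code = int(result.x[0]), int(result.x[1])
    adversarial = indices.copy()
    adversarial[pos] = code
    return adversarial, result.fun

Consistent with the semi-black-box setting described in the abstract, this sketch queries only the model's output probabilities through the predict_proba callback; no gradients or internal parameters of the victim model are assumed.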


Downloads


YouTube


Cite This Research

Plain Text

Haiju Fan, Xiaona Qin, Shuang Chen, Hubert P. H. Shum and Ming Li, "One-Index Vector Quantization Based Adversarial Attack on Image Classification," Pattern Recognition Letters, vol. 186, pp. 47-56, Elsevier, 2024.

BibTeX

@article{fan24oneindex,
 author={Fan, Haiju and Qin, Xiaona and Chen, Shuang and Shum, Hubert P. H. and Li, Ming},
 journal={Pattern Recognition Letters},
 title={One-Index Vector Quantization Based Adversarial Attack on Image Classification},
 year={2024},
 volume={186},
 pages={47--56},
 numpages={10},
 doi={10.1016/j.patrec.2024.09.001},
 issn={0167-8655},
 publisher={Elsevier},
}

RIS

TY  - JOUR
AU  - Fan, Haiju
AU  - Qin, Xiaona
AU  - Chen, Shuang
AU  - Shum, Hubert P. H.
AU  - Li, Ming
T2  - Pattern Recognition Letters
TI  - One-Index Vector Quantization Based Adversarial Attack on Image Classification
PY  - 2024
VL  - 186
SP  - 47
EP  - 56
DO  - 10.1016/j.patrec.2024.09.001
SN  - 0167-8655
PB  - Elsevier
ER  - 


Supporting Grants


Similar Research

Zhengzhi Lu, He Wang, Ziyi Chang, Guoan Yang and Hubert P. H. Shum, "Hard No-Box Adversarial Attack on Skeleton-Based Human Action Recognition with Skeleton-Motion-Informed Gradient", Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
Arindam Kar, Sourav Pramanik, Arghya Chakraborty, Debotosh Bhattacharjee, Edmond S. L. Ho and Hubert P. H. Shum, "LMZMPM: Local Modified Zernike Moment Per-Unit Mass for Robust Human Face Recognition", IEEE Transactions on Information Forensics and Security (TIFS), 2021

 
