ZHANG Jin, LI Qi. Adversarial Examples Generation Algorithm Based on Integrated Loss [J]. Computer Technology and Development, 2022, 32(07): 1-7. [doi:10.3969/j.issn.1673-629X.2022.07.001]

Adversarial Examples Generation Algorithm Based on Integrated Loss

Computer Technology and Development (《计算机技术与发展》) [ISSN:1006-6977/CN:61-1281/TN]

Volume: 32
Issue: 2022, No. 07
Pages: 1-7
Column: Artificial Intelligence
Publication Date: 2022-07-10

Article Info

Title: Adversarial Examples Generation Algorithm Based on Integrated Loss
Article ID: 1673-629X(2022)07-0001-07
Author(s): ZHANG Jin, LI Qi
Affiliation: School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
Keywords: adversarial examples; white-box attack; integrated gradients; convolutional neural network; deep learning
CLC Number: TP391.41; TP183
DOI: 10.3969/j.issn.1673-629X.2022.07.001
Abstract:
With the rapid improvement of computer performance and the explosive growth of data, deep learning has achieved remarkable results in more and more fields. However, researchers have found that deep networks are also vulnerable to adversarial attacks. In the field of image classification, an attacker can add small, artificially designed perturbations to the original image so that a deep neural network classifier gives a wrong classification; such perturbations are invisible to human beings, and the perturbed image is called an adversarial example. Gradient-based attacks such as projected gradient descent (PGD) are currently effective adversarial example generation algorithms, but this kind of algorithm is prone to overfitting. This paper proposes the integrated loss fast gradient sign method, which uses an integrated loss to measure the importance of the input to the loss function and avoids gradient update directions that may fall into a local optimum. The proposed algorithm further improves the attack success rate of adversarial examples and also increases their transferability. Experimental results show the effectiveness of the proposed method, which can be used as a benchmark for testing defense models.
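Since only the abstract is available here, the following is a minimal sketch of what an "integrated loss" variant of the fast gradient sign method could look like, assuming a PyTorch image classifier. The function names integrated_loss_grad and ig_fgsm, the zero baseline, the 20-step Riemann approximation, and the one-step update are all illustrative assumptions, not the authors' exact algorithm.

import torch
import torch.nn.functional as F

def integrated_loss_grad(model, x, y, baseline=None, steps=20):
    # Riemann-sum approximation of the integrated gradient of the
    # cross-entropy loss along the straight path from baseline to x.
    # (Illustrative sketch; the paper's exact formulation may differ.)
    if baseline is None:
        baseline = torch.zeros_like(x)  # assumed reference: all-black image
    total = torch.zeros_like(x)
    for k in range(1, steps + 1):
        # Interpolate between the baseline and the input.
        x_k = (baseline + (k / steps) * (x - baseline)).detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_k), y)
        total += torch.autograd.grad(loss, x_k)[0]
    # Scale by (x - baseline) as in the integrated-gradients definition.
    return (x - baseline) * total / steps

def ig_fgsm(model, x, y, eps=8 / 255):
    # One-step sign attack driven by the integrated loss gradient
    # instead of the single-point gradient used by plain FGSM.
    x_adv = x + eps * integrated_loss_grad(model, x, y).sign()
    return x_adv.clamp(0.0, 1.0)  # keep the result a valid image

Averaging gradients along the path from the baseline to the input smooths out locally misleading gradient directions, which is consistent with the abstract's claim of avoiding local optima and improving the transferability of the resulting adversarial examples.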

Last Update: 2022-07-10