CN111259881A - Adversarial sample protection method based on feature map denoising and image enhancement - Google Patents
- Publication number
- CN111259881A (Application No. CN202010031024.9A)
- Authority
- CN
- China
- Prior art keywords
- feature map
- area
- point
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
Abstract
The invention discloses an adversarial sample protection method based on feature map denoising and image enhancement, comprising the steps of: in the first convolutional layer of a neural network model, slicing the target feature channel and extracting feature maps; moving the anchor-point coordinates to the brightest point of the feature map and slicing the feature map around it; judging whether the slice belongs to the bright area, the dark area, or the robust area, moving the anchor coordinates to the second-brightest point of the feature map if it belongs to the bright area, and searching for the second-brightest point in the feature map if it belongs to the dark area or the robust area; changing the pixel values of all points in the feature map slice to the brightest-point value; setting the pixel values of all points in the dark and robust areas to 0; and merging and stacking the feature maps processed above. The invention effectively mitigates the impact of the denoising process on the neural network, so that the network maintains a high accuracy rate when recognizing clean samples, and the method generalizes well.
Description
Technical Field
The invention relates to the technical fields of artificial-intelligence security and information security, and in particular to an adversarial sample protection method based on feature map denoising and image enhancement.
Background Art
Since deep learning's explosive growth began in 2012, neural networks such as CNNs (convolutional neural networks), DNs (deconvolutional networks), and GANs (generative adversarial networks) have gradually achieved results in image detection and recognition that clearly surpass traditional object-detection methods, and they now occupy an important position in computer vision. However, the linear characteristics of neural networks make them easy to fool with adversarial examples maliciously constructed by attackers, which threatens the security of deep learning models.
In an adversarial example attack, the attacker adds a linear perturbation to the input so that the deep learning model misclassifies it. The attack works because the small perturbation in the adversarial example is amplified layer by layer as it propagates through the network, eventually causing the model's classifier to produce the wrong output. Attacks fall into two categories: black-box attacks and white-box attacks. In the black-box setting, the attacker cannot obtain detailed information such as the model's parameters; in the white-box setting, the attacker constructs adversarial examples with full knowledge of the model.
Adversarial example generation algorithms include: 1) the FGSM (fast gradient sign method) attack, which obtains adversarial examples by adding a small offset in the gradient direction so as to increase the loss function; and 2) the I-FGSM (iterative FGSM) attack, which iterates FGSM, repeatedly adding small offsets to the input to construct more precise adversarial examples. The main harms caused by these attack algorithms include:
misclassification in image detection and pattern recognition;
abnormal recognition in autonomous driving: the team of Professor Dawn Song at the University of California fooled a self-driving AI by sticking tape on a "STOP" traffic sign;
misidentification in face recognition: researchers at Carnegie Mellon University found that advanced AI recognition systems can be fooled by wearing specially designed glasses frames.
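The FGSM and I-FGSM update rules summarized above can be sketched as follows. This is a minimal NumPy illustration with a caller-supplied gradient function, not the attack implementations used in the later experiments:

```python
import numpy as np

def fgsm(x, grad, eps):
    """Single FGSM step: shift each input element by eps in the
    direction of the sign of the loss gradient, then clip to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def i_fgsm(x, grad_fn, eps, alpha, steps):
    """Iterative FGSM: repeat small FGSM steps of size alpha while
    keeping the result inside an eps-ball around the original input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv
```

With a fixed gradient, the iterative version converges to the boundary of the eps-ball, which is why I-FGSM is usually the stronger attack in practice.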
Given the harm that adversarial example attacks can cause, it is necessary to provide reliable, stable, and effective protection methods.
Three types of adversarial sample protection methods already exist.
(1) Adversarial training: adversarial examples are added to the training set so that the model learns the corresponding data, which acts as a form of regularization.
(2) Distillation: the model is trained with soft targets (soft labels), which smooths its gradients and makes it harder for an attacker to obtain useful gradient information.
(3) Denoising: the input is denoised to weaken or remove the noise imposed by the attacker.
Existing adversarial sample protection algorithms have achieved good results in defending against adversarial examples, but they have shortcomings in practice. Distillation is hard to apply when the task is large and the model is complex, while denoising causes the image to lose some information. Moreover, given the complexity of model structures, existing protection methods are difficult to transfer between models, which undoubtedly complicates the protection task. In addition, some existing methods increase the model's complexity, leading to a substantial increase in computation.
Summary of the Invention
To improve the robustness of deep learning models against adversarial examples, the present invention proposes an adversarial sample protection method based on feature map denoising and image enhancement. Without modifying the model structure, it operates in feature-map space to achieve denoising.
To achieve this purpose, the technical scheme adopted by the present invention is as follows.
A neural network model for images is constructed, comprising three convolutional layers; the following operations are performed on the first convolutional layer:
S1. Slice the target feature channel and extract feature maps;
S2. Move the anchor-point coordinates to the brightest point of the feature map, and slice the feature map centered on that brightest point;
S3. Judge whether the slice belongs to the bright area, the dark area, or the robust area. If it belongs to the bright area, move the anchor coordinates to the second-brightest point of the feature map; if it belongs to the dark area or the robust area, search again for the second-brightest point in the feature map and move the anchor coordinates to that point;
S4. Repeat S3 until the number of searches reaches a predetermined count, then change the pixel values of all points in the feature map slice to the brightest-point pixel value;
S5. Set the pixel values of all points in the dark and robust areas to 0 to remove noise;
S6. Merge and stack the feature maps processed above; the merged feature map has the same size as the original feature channel.
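As a rough illustration of steps S1 to S6, the following sketch processes each feature-map channel independently: it keeps the bright area, raises a window around the brightest point to the brightest value, zeroes everything else (the dark and robust areas), and restacks the channels. The window radius and the quantile threshold are illustrative assumptions, and the recursive relocation logic of S3 is omitted:

```python
import numpy as np

def denoise_channel(fmap, radius=3, bright_q=2/3):
    """Simplified sketch of S1-S5 for one feature-map channel: pixels above
    the bright-area threshold are kept, a window around the brightest point
    is raised to the brightest value, and all other (dark/robust) pixels
    are set to 0. The threshold and radius are illustrative choices."""
    out = np.zeros_like(fmap)
    thresh = np.quantile(fmap, bright_q)          # top third counts as bright
    out[fmap > thresh] = fmap[fmap > thresh]      # keep the bright area
    r, c = np.unravel_index(np.argmax(fmap), fmap.shape)
    r0, r1 = max(r - radius, 0), min(r + radius + 1, fmap.shape[0])
    c0, c1 = max(c - radius, 0), min(c + radius + 1, fmap.shape[1])
    out[r0:r1, c0:c1] = fmap.max()                # S4: raise slice to brightest
    return out

def denoise_stack(fmaps, **kw):
    """S6: process each channel and restack to the original channel shape."""
    return np.stack([denoise_channel(f, **kw) for f in fmaps])
```

Because the operation is purely a rewrite of activation values, it can be applied after the first convolutional layer of any model without changing the model's architecture.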
Preferably, in S3 a depth-first search algorithm is used to find the second-brightest point in the feature map.
Further, the depth-first search algorithm recursively updates the current coordinates, using the second-brightest point's coordinates as the anchor-coordinate parameter of the next recursion.
Preferably, judging whether the slice belongs to the bright area, the dark area, or the robust area specifically comprises: judging whether the anchor point lies on the boundary of a valid connected feature; if so, processing the connected feature bidirectionally along the boundary, and if not, searching for the second-brightest point within the slice. After the slice is judged to be a bright area, it is located so that the second-brightest point lies in the cross-shaped central region around the current anchor coordinates, i.e., the second-brightest point must be directly adjacent to the current anchor. The relocated coordinates are then reviewed, and the current recursive call ends if the condition is not met; the condition is that the number of processed pixels in the whole feature map must not exceed one third of the total number of pixels in the whole image.
Further, a central-axis boundary determination algorithm is used to judge whether the anchor point lies on the boundary of a valid connected feature.
Still further, the central-axis boundary determination algorithm judges whether an axis is a boundary by comparing the pixel values of symmetric points on either side of the axis with the bright-area and dark-area decision values. The dark-area decision value is set to the maximum of the median and the mean of all pixel values in the image, and the bright-area decision value is the dark-area decision value plus a parameter of 0.05.
Preferably, the bright-area identification value is set to the pixel value located at the one-third position when all pixel values of the image are sorted in ascending order.
Preferably, the dark-area identification value is set to the pixel value located at the one-third position when all pixel values of the image are sorted in descending order.
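The decision and identification values above can be computed directly. This sketch follows the stated definitions (the 0.05 margin and the one-third rank positions are the embodiment's values, and the ascending/descending orders are as the text describes them):

```python
import numpy as np

def decision_values(fmap, margin=0.05):
    """Decision/identification values as stated in the text:
    dark decision value   = max(median, mean) of all pixel values;
    bright decision value = dark decision value + margin (0.05 here);
    bright/dark identification values = pixel value at the one-third
    position of the ascending / descending sort, respectively."""
    flat = np.sort(fmap.ravel())            # ascending order
    dark = max(np.median(flat), flat.mean())
    bright = dark + margin
    third = len(flat) // 3
    bright_id = flat[third]                 # ascending, one-third position
    dark_id = flat[::-1][third]             # descending, one-third position
    return dark, bright, bright_id, dark_id
```

Taking the maximum of the median and the mean makes the dark-area threshold robust to a few extreme activations dominating the mean, or to a large background of zeros dominating the median.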
While improving the robustness of the neural network when identifying samples, the invention effectively mitigates the impact of the denoising process on the network, so that it maintains a high accuracy rate when recognizing clean samples. Because the processing operates on feature maps and does not change the model structure, the method generalizes well and avoids the potential problem of increased model complexity. Tests on the test set verify that the invention effectively defends against FGSM and several other attack algorithms.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the adversarial sample protection method based on feature map denoising and image enhancement according to an embodiment of the present invention;
Fig. 2 compares the accuracy of the method of the Fig. 1 embodiment with the prior art on an adversarial test set generated by the FGSM algorithm;
Fig. 3 compares the accuracy of the method of the Fig. 1 embodiment with the prior art on an adversarial test set generated by the I-FGSM algorithm;
Fig. 4 shows before-and-after comparison results of the method of the Fig. 1 embodiment, where (a) is before and (b) is after applying the method of this embodiment.
Detailed Description of the Embodiments
To make the purposes, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
As shown in Fig. 1, this embodiment is an adversarial sample protection method based on feature map denoising and image enhancement, comprising the following steps:
Step 1. Slice the target feature channel and extract feature maps;
Step 2. Move the anchor-point coordinates to the brightest point of the feature map, and slice the feature map centered on that brightest point;
Step 3. Judge whether the slice belongs to the bright area, the dark area, or the robust area. If it belongs to the bright area, move the anchor coordinates to the second-brightest point of the feature map; if it belongs to the dark area or the robust area, search again for the second-brightest point in the feature map and move the anchor coordinates to that point;
Step 4. Repeat Step 3 until the number of searches reaches a predetermined count, then change the pixel values of all points in the feature map slice to the brightest-point pixel value;
Step 5. Set the pixel values of all points in the dark and robust areas to 0 to remove noise;
Step 6. Merge and stack the feature maps processed above; the merged feature map has the same size as the original feature channel.
In this embodiment, the feature channels extracted by the first convolutional layer are selected; the feature map size is 28×28 and the number of slices is 32. The experimental data are adversarial example datasets generated by the FGSM and I-FGSM algorithms on the MNIST dataset, whose images are 28×28 pixels.
In a preferred embodiment, slicing the feature map around the brightest point in Step 2 specifically means taking a slice centered on the brightest point with a radius of 3 pixels.
In a preferred embodiment, a depth-first search algorithm is used to find the second-brightest point in the feature map; specifically, the current coordinates are updated recursively, with the second-brightest point's coordinates used as the anchor-coordinate parameter of the next recursion, enabling continuous search and judgment.
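A minimal sketch of this recursive search, assuming the anchor simply moves to the brightest not-yet-visited point at each level and stops at a fixed depth (the bright/dark/robust judgment that gates each relocation in the full method is omitted here):

```python
import numpy as np

def next_brightest(fmap, visited):
    """Brightest point whose coordinates are not in `visited`."""
    masked = fmap.astype(float).copy()
    for r, c in visited:
        masked[r, c] = -np.inf     # exclude already-visited points
    idx = np.unravel_index(np.argmax(masked), fmap.shape)
    return (int(idx[0]), int(idx[1]))

def dfs_relocate(fmap, start, max_depth, visited=None):
    """Recursively relocate the anchor to the next-brightest unvisited
    point, passing its coordinates to the next recursion, until the
    predetermined search count is reached."""
    if visited is None:
        visited = []
    visited.append(start)
    if len(visited) >= max_depth:
        return visited
    return dfs_relocate(fmap, next_brightest(fmap, visited), max_depth, visited)
```

Passing the visited list down the recursion is what makes the search depth-first: each call commits to the current anchor before considering any alternative.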
In this embodiment, the bright-area identification value is set to the pixel value located at the one-third position when all pixel values of the image are sorted in ascending order.
In this embodiment, the dark-area identification value is set to the pixel value located at the one-third position when all pixel values of the image are sorted in descending order.
When identifying valid connected features, i.e., when judging that a region is a bright area rather than a robust or dark area, the following criteria are used:
1. Judge whether the anchor point lies on the boundary of a valid connected feature. If so, process the feature bidirectionally along the boundary. This embodiment uses a central-axis boundary determination algorithm. The dark-area decision value is set to the maximum of the median and the mean of all pixel values in the image; the bright-area decision value is the dark-area decision value plus a parameter, e.g. 0.05. The algorithm judges whether an axis is a boundary by comparing the pixel values of symmetric points on either side of the axis with the bright-area and dark-area decision values.
2. After a bright area is identified, the slice is located so that the second-brightest point lies in the cross-shaped central region around the current anchor coordinates, i.e., its coordinates must be directly adjacent to the current anchor, to preserve the connectivity of valid features.
3. Review the relocated coordinates. If the condition is not met, end the current recursive call. The new anchor coordinates must satisfy the following condition: the number of processed pixels in the whole feature map must not exceed one third of the total number of pixels in the whole image.
In one specific implementation, the number of pixels among the 8 compass-direction pixels at a radius of 2 around the anchor point that have already been set to the brightest value is counted; this number must not exceed 5. This restriction helps limit erroneous relocation caused by a perturbation covering too large an area or bordering a valid feature boundary, preventing a flood of bright-area anchor points from corrupting the feature map.
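The radius-2 check described above can be sketched as follows. Here `modified` is a hypothetical boolean mask marking pixels already set to the brightest value; the offsets, the radius of 2, and the limit of 5 are the embodiment's stated values:

```python
import numpy as np

# The 8 compass-direction cells at radius 2 around an anchor point.
OFFSETS = [(-2, -2), (-2, 0), (-2, 2), (0, -2), (0, 2), (2, -2), (2, 0), (2, 2)]

def relocation_allowed(modified, point, limit=5):
    """Reject a relocation if more than `limit` of the 8 compass-direction
    cells at radius 2 around `point` are already marked as modified to
    the brightest value; out-of-bounds cells are ignored."""
    r, c = point
    h, w = modified.shape
    count = sum(
        1
        for dr, dc in OFFSETS
        if 0 <= r + dr < h and 0 <= c + dc < w and modified[r + dr, c + dc]
    )
    return count <= limit
```

An anchor surrounded almost entirely by already-enhanced pixels adds little information, so rejecting it caps the spread of bright-area relocations.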
In this embodiment, three mechanisms (a relocation limit, an area relocation limit, and a direction limit) strengthen the processing of samples, effectively preventing the feature map from being damaged by over-processing and increasing its robustness.
1. Relocation limit: the number of times the anchor coordinates may be relocated to the brightest point of the whole image.
2. Area relocation limit: the number of times the anchor coordinates may be relocated within the same area.
3. Direction limit: for each relocation, if the number of already-located cells among the 8 surrounding direction cells exceeds the limit parameter, the relocation is abandoned. In this example the direction limit parameter is set to 5, and the restricted range, the perpendicular distance from a direction cell to the anchor point, is set to 2.
The numerical values listed above merely illustrate one case of a preferred embodiment; they are examples rather than limitations, and other cases are possible that cannot all be enumerated. All embodiments that embody the inventive idea of the present invention through its technical solutions fall within the protection scope of the present invention.
The technical means disclosed by the solution of the present invention are not limited to those disclosed in the above embodiments, but also include technical solutions composed of any combination of the above technical features.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010031024.9A CN111259881B (en) | 2020-01-13 | 2020-01-13 | Anti-hostile sample protection method based on feature map denoising and image enhancement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010031024.9A CN111259881B (en) | 2020-01-13 | 2020-01-13 | Anti-hostile sample protection method based on feature map denoising and image enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259881A true CN111259881A (en) | 2020-06-09 |
CN111259881B CN111259881B (en) | 2023-04-28 |
Family
ID=70945161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010031024.9A Active CN111259881B (en) | 2020-01-13 | 2020-01-13 | Anti-hostile sample protection method based on feature map denoising and image enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259881B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510467A (en) * | 2018-03-28 | 2018-09-07 | 西安电子科技大学 | SAR image target recognition method based on variable depth shape convolutional neural networks |
CN109992931A (en) * | 2019-02-27 | 2019-07-09 | 天津大学 | A transferable non-black-box attack adversarial method based on noise compression |
-
2020
- 2020-01-13 CN CN202010031024.9A patent/CN111259881B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510467A (en) * | 2018-03-28 | 2018-09-07 | 西安电子科技大学 | SAR image target recognition method based on variable depth shape convolutional neural networks |
CN109992931A (en) * | 2019-02-27 | 2019-07-09 | 天津大学 | A transferable non-black-box attack adversarial method based on noise compression |
Non-Patent Citations (2)
Title |
---|
ZHUOBIAO QIAO, ET AL.: "Toward Intelligent Detection Modelling for Adversarial Samples in Convolutional Neural Networks", IEEE Xplore * |
WEI Fan, et al.: "Improving single-model robustness using feature fusion and ensemble diversity", Journal of Software * |
Also Published As
Publication number | Publication date |
---|---|
CN111259881B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108509859B (en) | Non-overlapping area pedestrian tracking method based on deep neural network | |
Lian et al. | DeepWindow: Sliding window based on deep learning for road extraction from remote sensing images | |
CN108447080B (en) | Target tracking method, system and storage medium based on hierarchical data association and convolutional neural network | |
US11042742B1 (en) | Apparatus and method for detecting road based on convolutional neural network | |
CN110097044B (en) | One-stage license plate detection and recognition method based on deep learning | |
CN111754519B (en) | Class activation mapping-based countermeasure method | |
CN113095263B (en) | Training method and device for pedestrian re-recognition model under shielding and pedestrian re-recognition method and device under shielding | |
CN109118523A (en) | A kind of tracking image target method based on YOLO | |
CN104182985B (en) | Remote sensing image change detection method | |
CN102096829B (en) | Iterative optimization distance categorization-based space weak and small target detection method | |
CN107563370B (en) | A marine infrared target detection method based on visual attention mechanism | |
CN107665498A (en) | The full convolutional network airplane detection method excavated based on typical case | |
CN110070560A (en) | Movement direction of object recognition methods based on target detection | |
CN110378421A (en) | A kind of coal-mine fire recognition methods based on convolutional neural networks | |
CN109785356B (en) | Background modeling method for video image | |
Li et al. | Exploring label probability sequence to robustly learn deep convolutional neural networks for road extraction with noisy datasets | |
CN111259881A (en) | Adversarial sample protection method based on feature map denoising and image enhancement | |
CN118229954A (en) | Method for generating imperceptible countermeasure patches end to end | |
CN115717887B (en) | A fast star point extraction method based on grayscale distribution histogram | |
WO2022222087A1 (en) | Method and apparatus for generating adversarial patch | |
CN110059557A (en) | A kind of face identification method adaptive based on low-light (level) | |
Chen et al. | Oil spill detection based on a superpixel segmentation method for SAR image | |
CN115631376A (en) | Confrontation sample image generation method, training method and target detection method | |
CN113762027B (en) | Abnormal behavior identification method, device, equipment and storage medium | |
CN111160190B (en) | Vehicle-mounted pedestrian detection-oriented classification auxiliary kernel correlation filtering tracking method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |