CN111259881B - Hostile sample protection method based on feature map denoising and image enhancement - Google Patents

Hostile sample protection method based on feature map denoising and image enhancement

Info

Publication number
CN111259881B
Authority
CN
China
Prior art keywords
feature map
point
area
bright
positioning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010031024.9A
Other languages
Chinese (zh)
Other versions
CN111259881A (en)
Inventor
王咏珊
刘嘉木
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics
Priority to CN202010031024.9A
Publication of CN111259881A
Application granted
Publication of CN111259881B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a hostile (adversarial) sample protection method based on feature map denoising and image enhancement, which comprises the following steps: slicing and extracting the feature maps of the target feature channels in the first convolution layer of the neural network model; moving the positioning coordinates to the brightest point of each feature map and slicing the feature map around it; judging whether the slice belongs to a bright area, a dark area or a robust area: if it belongs to the bright area, the positioning coordinates are moved to the second bright point of the feature map, and if it belongs to the dark area or the robust area, the second bright point is searched for again in the feature map; the pixel values of all points in the bright-area feature map slices are then changed to the brightest-point pixel value; the pixel values of all points in the dark area and the robust area are reset to 0; finally, the feature maps processed as above are combined and superposed. The method effectively mitigates the impact of the denoising process on the neural network, so that it maintains high accuracy when recognizing clean samples and has good universality.

Description

Hostile sample protection method based on feature map denoising and image enhancement
Technical Field
The invention relates to the technical fields of artificial intelligence security and information security, and in particular to a hostile sample (adversarial example) protection method based on feature map denoising and image enhancement.
Background
Since the rapid progress of deep learning began in 2012, neural networks, including CNNs (Convolutional Neural Networks), DNs (Deconvolutional Networks) and GANs (Generative Adversarial Networks), have gradually achieved performance clearly superior to traditional target detection methods in image detection and recognition, and have come to play an important role in computer vision. However, the linear nature of neural networks leaves them vulnerable to being fooled by adversarial samples maliciously constructed by an attacker, compromising the security of deep learning models.
In an adversarial sample attack, the attacker causes the deep learning model to misclassify by adding small linear perturbations to the input. The principle of the attack is that the subtle perturbation in the adversarial sample grows layer by layer as the number of iterations increases, eventually causing the model's classifier to output errors. Attacks are divided into two types, black-box attacks and white-box attacks: in the black-box setting the attacker cannot obtain detailed information such as the model's parameters, while in the white-box setting the attacker constructs adversarial samples with full knowledge of the model.
Adversarial sample generation algorithms include: 1) the FGSM attack algorithm (Fast Gradient Sign Method), whose goal is to obtain adversarial samples by adding a small offset along the gradient direction so as to increase the loss function; and 2) the I-FGSM attack algorithm (Iterative FGSM), whose goal is to construct more accurate adversarial samples by iterating FGSM several times and adding small offsets to the input repeatedly (a hedged sketch of both attacks follows the list below). The main hazards posed by these attack algorithms include:
misclassification in the fields of image detection and pattern recognition;
abnormal recognition in the field of autonomous driving, where a team led by Professor Dawn Song of the University of California, Berkeley fooled an autonomous-driving AI by applying stickers to a "STOP" traffic sign;
false authentication in the field of face recognition, where researchers at Carnegie Mellon University found that an advanced artificial intelligence recognition system could be fooled by wearing a specially designed glasses frame.
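The following is a minimal sketch of the FGSM and I-FGSM attacks summarized above, written against a generic PyTorch classifier. The function names, the epsilon of 0.1, the step size and the iteration count are illustrative assumptions, not values taken from this patent.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.1):
    """Single-step FGSM: move the input along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def i_fgsm(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Iterative FGSM: repeat small signed-gradient steps, staying in the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # project back into the eps-neighbourhood of the clean input
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```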
In view of the harm caused by adversarial sample attacks, it is necessary to provide a reliable, stable and well-performing protection method.
Existing adversarial sample protection methods mainly fall into the following three categories.
(1) Adversarial training: adversarial samples are added to the training set so that the model learns the corresponding data, which amounts to a form of regularization.
(2) Distillation: the model is trained with soft labels (soft targets), which makes its gradients smoother and makes it more difficult for an attacker to obtain useful gradient information.
(3) Denoising: the input is denoised to weaken or eliminate the noise information applied by the attacker.
Existing adversarial sample protection algorithms achieve good performance in defending against adversarial samples, but have drawbacks in deployment. Distillation is difficult to apply when the task is large and the model is complex, and denoising causes the picture to lose part of its information. Moreover, given the complexity of model structures, existing protection methods are difficult to migrate between models, which adds difficulty to the protection task. In addition, some existing protection methods increase the complexity of the model, leading to a significant increase in computation.
Disclosure of Invention
In order to improve the robustness of deep learning models against adversarial samples, the invention provides a hostile sample protection method based on feature map denoising and image enhancement. Without changing the model structure, the feature map space is modified to achieve the purpose of denoising.
In order to achieve the above purpose, the invention adopts the following technical scheme:
Construct a neural network model for images, comprising three convolution layers, and perform the following operations on the first convolution layer (a hedged end-to-end sketch follows step S6 below):
S1, slicing and feature map extraction are performed on the target feature channels;
S2, the positioning coordinates are moved to the brightest point of the feature map, and the feature map is sliced with the brightest point as the center;
S3, judge whether the slice belongs to a bright area, a dark area or a robust area; if it belongs to the bright area, move the positioning coordinates to the second bright point of the feature map, and if it belongs to the dark area or the robust area, search the feature map again for the second bright point and position the positioning coordinates at that point;
S4, repeat S3 until the number of searches reaches a preset number, and change the pixel values of all points in the feature map slices to the brightest-point pixel value;
S5, reset the pixel values of all points in the dark area and the robust area to 0 to remove noise;
S6, combine and superpose the feature maps processed above; the combined feature maps are equal in size to the original feature channels.
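The sketch below strings steps S1-S6 together for a single feature channel, using NumPy. It assumes a 28x28 feature map, a slice radius of 3 pixels and a fixed search budget; the bright-area test and the threshold are simplified placeholders rather than the exact criteria defined in the preferred embodiments.

```python
import numpy as np

def process_channel(fmap, radius=3, max_searches=8):
    """Simplified version of S2-S5 on one 28x28 feature-map channel."""
    out = np.zeros_like(fmap, dtype=float)          # dark/robust pixels stay at 0 (S5)
    bright_thr = np.sort(fmap, axis=None)[-max(1, fmap.size // 3)]  # placeholder bright test
    visited = np.zeros(fmap.shape, dtype=bool)
    y, x = np.unravel_index(np.argmax(fmap), fmap.shape)   # S2: brightest point
    for _ in range(max_searches):                   # S4: bounded number of searches
        y0, y1 = max(0, y - radius), min(fmap.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(fmap.shape[1], x + radius + 1)
        if fmap[y, x] >= bright_thr:                # S3: slice judged to be a bright area
            out[y0:y1, x0:x1] = fmap.max()          # S4: raise the slice to the brightest value
        visited[y0:y1, x0:x1] = True
        remaining = np.where(visited, -np.inf, fmap.astype(float))
        if not np.isfinite(remaining).any():
            break
        y, x = np.unravel_index(np.argmax(remaining), fmap.shape)  # next "second bright point"
    return out

def process_first_layer(feature_maps):
    """S1 and S6: process every channel slice and recombine to the original shape."""
    return np.stack([process_channel(f) for f in feature_maps], axis=0)

# usage with 32 channels of 28x28 feature maps, as in the embodiment
denoised = process_first_layer(np.random.rand(32, 28, 28))
```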
Preferably, in S3 a depth-first search algorithm is used to search for the second bright point in the feature map.
Further, the depth-first search algorithm specifically updates the current coordinates by recursion and takes the second bright point coordinates as the positioning coordinate parameter used in the next recursion.
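A minimal sketch of this recursive relocation is given below: each call picks the brightest not-yet-visited neighbour of the current positioning point as the "second bright point" and recurses with it as the new positioning coordinate. The neighbourhood size and the recursion budget are assumptions for illustration.

```python
import numpy as np

def dfs_relocate(fmap, pos, visited=None, budget=8):
    """Return the chain of positioning coordinates visited by the recursive search."""
    if visited is None:
        visited = set()
    visited.add(pos)
    if budget == 0:
        return [pos]
    y, x = pos
    h, w = fmap.shape
    # candidate points directly adjacent to the current positioning coordinate
    neighbours = [(y + dy, x + dx)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)
                  and 0 <= y + dy < h and 0 <= x + dx < w
                  and (y + dy, x + dx) not in visited]
    if not neighbours:
        return [pos]
    nxt = max(neighbours, key=lambda p: fmap[p])   # the "second bright point"
    return [pos] + dfs_relocate(fmap, nxt, visited, budget - 1)
```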
Preferably, judging whether the slice belongs to a bright area, a dark area or a robust area is specifically: judge whether the positioning point lies on the boundary of an effective connected feature; if so, process the connected feature bidirectionally along the boundary, and if not, search for the second bright point within the slice; after a bright area is judged, the positioning slice takes the current positioning coordinates as the second bright point coordinates of the cross-shaped central area, that is, the second bright point positioning coordinates must be directly adjacent to the current positioning coordinates; the relocated coordinates are then checked, and if they do not meet the condition, the recursive call is ended; the condition is that the number of pixels processed over the full feature map does not exceed one third of the total number of pixels in the full map.
Further, a central axis boundary judgment algorithm is adopted to judge whether the positioning point lies on the boundary of an effective connected feature.
Further, the central axis boundary judgment algorithm judges whether an axis is a boundary by comparing the pixel values of symmetric points on the two sides of the central axis with the bright area determination value and the dark area determination value; the dark area determination value is set to the maximum of the median and the mean of the pixel values of the full image, and the bright area determination value is the dark area determination value plus a corresponding parameter of 0.05.
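A small sketch of this central axis boundary test follows, using the determination values as stated: the dark value is the maximum of the median and the mean of the full map, and the bright value adds 0.05. The symmetric-pair rule used here (one side at or above the bright value, the mirrored side at or below the dark value) and the comparison reach are an illustrative reading of the description, not a definitive implementation.

```python
import numpy as np

def determination_values(fmap, offset=0.05):
    """Dark value = max(median, mean) of the full map; bright value = dark + offset."""
    dark = max(float(np.median(fmap)), float(fmap.mean()))
    return dark + offset, dark

def is_boundary_axis(fmap, y, x, vertical=True, reach=2):
    """Check whether the axis through (y, x) separates a bright side from a dark side."""
    bright_v, dark_v = determination_values(fmap)
    h, w = fmap.shape
    for d in range(1, reach + 1):
        if vertical:
            a, b = (y, x - d), (y, x + d)       # symmetric points across a vertical axis
        else:
            a, b = (y - d, x), (y + d, x)       # symmetric points across a horizontal axis
        if not (0 <= a[0] < h and 0 <= a[1] < w and 0 <= b[0] < h and 0 <= b[1] < w):
            return False
        pa, pb = float(fmap[a]), float(fmap[b])
        split = (pa >= bright_v and pb <= dark_v) or (pb >= bright_v and pa <= dark_v)
        if not split:
            return False
    return True
```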
Preferably, the identification value of the bright area is set to the pixel value located at the one-third position of the total number of pixels when the full-image pixel values are sorted in ascending order.
Preferably, the identification value of the dark area is set to the pixel value located at the one-third position of the total number of pixels when the full-image pixel values are sorted in descending order.
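These two identification values reduce to a single sort of the full map; the sketch below computes them as stated, assuming "ascending" and "descending" are the intended readings of the original ordering terms.

```python
import numpy as np

def identification_values(fmap):
    flat = np.sort(fmap, axis=None)      # ascending order
    k = flat.size // 3                   # one third of the total pixel count
    bright_id = flat[k]                  # one-third position in ascending order
    dark_id = flat[::-1][k]              # one-third position in descending order
    return bright_id, dark_id
```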
While improving the robustness of the neural network when recognizing samples, the method effectively mitigates the impact of the denoising process on the neural network, so that high accuracy is maintained when recognizing clean samples. Because the method operates in the feature map dimension, it has better universality, and because it does not change the model structure, it effectively avoids the problem of increased model complexity. Tests on the test set verify that the invention can effectively defend against multiple attack algorithms such as FGSM.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required by the embodiments or by the description of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the invention, and other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of the hostile sample protection method based on feature map denoising and image enhancement according to an embodiment of the present invention;
FIG. 2 is an accuracy comparison for the method of the embodiment of FIG. 1 on an adversarial sample test set generated by the FGSM algorithm;
FIG. 3 is an accuracy comparison for the method of the embodiment of FIG. 1 on an adversarial sample test set generated by the I-FGSM algorithm;
FIG. 4 shows the effect before and after applying the method of the embodiment of FIG. 1: (a) before the method of the embodiment is applied, and (b) after the method of the embodiment is applied.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in FIG. 1, the present embodiment is a hostile sample protection method based on feature map denoising and image enhancement, comprising the following steps:
step 1, slicing and extracting the feature maps of the target feature channels;
step 2, moving the positioning coordinates to the brightest point of the feature map, and slicing the feature map with the brightest point as the center;
step 3, judging whether the slice belongs to a bright area, a dark area or a robust area; if it belongs to the bright area, moving the positioning coordinates to the second bright point of the feature map, and if it belongs to the dark area or the robust area, searching the feature map again for the second bright point and positioning the positioning coordinates at that point;
step 4, repeating step 3 until the number of searches reaches a preset number, and changing the pixel values of all points in the feature map slices to the brightest-point pixel value;
step 5, resetting the pixel values of all points in the dark area and the robust area to 0 to remove noise;
step 6, combining and superposing the feature maps processed by the above steps, wherein the combined feature maps are equal in size to the original feature channels.
In this embodiment, the feature channels extracted from the first convolution layer are selected; each feature map is 28×28 pixels in size and the number of slices is 32. The subject of this embodiment is an adversarial sample dataset generated by the FGSM and I-FGSM algorithms on the MNIST dataset, whose images are 28×28 pixels.
In a preferred embodiment, step 2 slices the feature map with the brightest point as the center, specifically taking the brightest point as the center and a radius of 3 pixels.
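A short sketch of this slicing step is shown below: the brightest point of a 28×28 feature map is located and a slice of radius 3 pixels around it is cut out, clipped at the map border. The names are illustrative.

```python
import numpy as np

def brightest_slice(fmap, radius=3):
    """Locate the brightest point and return it together with the surrounding slice."""
    y, x = np.unravel_index(np.argmax(fmap), fmap.shape)
    y0, y1 = max(0, y - radius), min(fmap.shape[0], y + radius + 1)
    x0, x1 = max(0, x - radius), min(fmap.shape[1], x + radius + 1)
    return (y, x), fmap[y0:y1, x0:x1]

# example on one of the 32 first-layer channels
channels = np.random.rand(32, 28, 28)
anchor, patch = brightest_slice(channels[0])
```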
In a preferred embodiment, the search for the second bright point in the feature map adopts a depth-first search algorithm: the current coordinates are updated recursively, and the second bright point coordinates are taken as the positioning coordinate parameter used in the next recursion, so that searching and judging continue.
In this embodiment, the identification value of the bright area is set to the pixel value located at the one-third position of the total number of pixels when the full-image pixel values are sorted in ascending order.
In this embodiment, the identification value of the dark area is set to the pixel value located at the one-third position of the total number of pixels when the full-image pixel values are sorted in descending order.
When judging an effective connected feature, i.e. judging that the area is a bright area rather than a robust area or a dark area, the following criteria are adopted:
1. Judge whether the positioning point lies on the boundary of an effective connected feature. If so, the feature is processed bidirectionally along the boundary. In this embodiment, a central axis boundary judgment algorithm is employed. The dark area determination value is set to the maximum of the median and the mean of the pixel values of the full image, and the bright area determination value is the dark area determination value plus a corresponding parameter, for example 0.05. The central axis boundary judgment algorithm judges whether an axis is a boundary by comparing the pixel values of symmetric points on the two sides of the central axis with the bright area determination value and the dark area determination value.
2. After a bright area is judged, the positioning slice takes the current positioning coordinates as the second bright point coordinates of the cross-shaped central area, i.e. the coordinates must be directly adjacent to the current positioning coordinates to ensure the connectivity of the effective feature.
3. Check the relocated coordinates. If the condition is not met, the recursive call ends. The new positioning coordinates must meet the following condition: the number of pixels processed over the full feature map does not exceed one third of the total number of pixels in the full map.
In a specific embodiment, the number of pixels that have already been changed to the brightest value among the pixels in the 8 directions centered on the positioning point with a radius of 2 is counted, and this number must not exceed 5. This restriction helps to suppress false repositioning caused by a large perturbed area or by perturbation bordering the effective feature boundary, preventing the bright-area discrimination anchor points from flooding and damaging the feature map.
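Below is a hedged sketch of this neighbourhood check: the 8 points at radius 2 around the positioning point are inspected, the number of them already raised to the brightest value is counted, and the positioning is kept only if that count does not exceed 5. The mask-based bookkeeping is an assumption for illustration.

```python
import numpy as np

def passes_direction_check(modified_mask, y, x, radius=2, limit=5):
    """Count already-brightened pixels in the 8 directions at the given radius."""
    h, w = modified_mask.shape
    count = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) == (0, 0):
                continue
            ny, nx = y + dy * radius, x + dx * radius   # one pixel per direction
            if 0 <= ny < h and 0 <= nx < w and modified_mask[ny, nx]:
                count += 1
    return count <= limit
```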
In this embodiment, three mechanisms, namely the repositioning limit, the area repositioning limit and the directional positioning limit, are adopted to strengthen the processing of the sample, effectively preventing the damage caused by over-processing the feature map and improving robustness (a hedged bookkeeping sketch follows the list below).
1. Repositioning limit: the number of times the positioning coordinates may be repositioned to the brightest point of the full map.
2. Area repositioning limit: the number of times the positioning coordinates may be repositioned within the same area.
3. Directional positioning limit: for each positioning, if the number of marked cells among the 8 surrounding squares in the up, down, left and right directions is greater than the limiting parameter, the positioning operation is abandoned. The directional positioning limit parameter is set to 5, and the limit range, i.e. the distance between those squares and the positioning point, is set to 2.
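The following sketch keeps the bookkeeping for these three limits around a relocation loop. Only the directional parameter of 5 and the range of 2 are taken from the embodiment; the other limit values, the class name and the area identifier are illustrative assumptions.

```python
class RelocationLimits:
    """Track whether another relocation is still allowed under the three limits."""
    def __init__(self, max_global=8, max_per_area=3, direction_param=5, direction_range=2):
        self.max_global = max_global              # 1. relocations to the full-map brightest point
        self.max_per_area = max_per_area          # 2. relocations within the same area
        self.direction_param = direction_param    # 3. directional positioning parameter (5)
        self.direction_range = direction_range    #    distance of the checked squares (2)
        self.global_count = 0
        self.area_counts = {}

    def allow(self, area_id, direction_hits):
        """Return True and record the relocation if all three limits permit it."""
        if self.global_count >= self.max_global:
            return False
        if self.area_counts.get(area_id, 0) >= self.max_per_area:
            return False
        if direction_hits > self.direction_param:  # too many marked squares around the point
            return False
        self.global_count += 1
        self.area_counts[area_id] = self.area_counts.get(area_id, 0) + 1
        return True
```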
The values listed above merely illustrate one case of a preferred embodiment; they are not limiting, and other cases are possible which cannot be listed one by one. All embodiments that embody the technical scheme of the invention fall within the inventive concept of the invention and are within its scope of protection.
The technical means disclosed by the scheme of the invention are not limited to those disclosed in the embodiments, and also include technical schemes formed by any combination of the above technical features.

Claims (5)

1. A hostile sample protection method based on feature map denoising and image enhancement, characterized by comprising the following steps:
constructing a neural network model for images, comprising three convolution layers, and performing the following operations on the first convolution layer:
S1, performing slicing and feature map extraction on the target feature channels;
S2, moving the positioning coordinates to the brightest point of the feature map, and slicing the feature map with the brightest point as the center;
S3, judging whether the slice belongs to a bright area, a dark area or a robust area, wherein a central axis boundary judgment algorithm is adopted to judge whether the positioning point lies on the boundary of an effective connected feature; if so, the connected feature is processed bidirectionally along the boundary, and if not, the second bright point is searched for within the slice;
after a bright area is judged, the positioning slice takes the current positioning coordinates as the second bright point coordinates of the cross-shaped central area, that is, the second bright point positioning coordinates must be directly adjacent to the current positioning coordinates;
the relocated coordinates are checked, and if they do not meet the condition, the recursive call is ended; the condition is that the number of pixels processed over the full feature map does not exceed one third of the total number of pixels in the full map; if the slice belongs to a bright area, the positioning coordinates are moved to the second bright point of the feature map, and if it belongs to a dark area or a robust area, the feature map is searched again for the second bright point and the positioning coordinates are positioned at that point; the central axis boundary judgment algorithm judges whether an axis is a boundary by comparing the pixel values of symmetric points on the two sides of the central axis with the bright area determination value and the dark area determination value; the dark area determination value is set to the maximum of the median and the mean of the pixel values of the full image;
S4, repeating S3 until the number of searches reaches a preset number, and changing the pixel values of all points in the feature map slices to the brightest-point pixel value;
S5, resetting the pixel values of all points in the dark area and the robust area to 0 to remove noise;
S6, combining and superposing the feature maps processed above, wherein the combined feature maps are equal in size to the original feature channels.
2. The hostile sample protection method based on feature map denoising and image enhancement according to claim 1, wherein a depth-first search algorithm is used in S3 to search for the second bright point in the feature map.
3. The hostile sample protection method based on feature map denoising and image enhancement according to claim 2, wherein the depth-first search algorithm specifically updates the current coordinates by recursion and takes the second bright point coordinates as the positioning coordinate parameter used in the next recursion.
4. The hostile sample protection method based on feature map denoising and image enhancement according to claim 1, wherein the identification value of the bright area is set to the pixel value located at the one-third position of the total number of pixels when the full-image pixel values are sorted in ascending order.
5. The hostile sample protection method based on feature map denoising and image enhancement according to claim 1, wherein the identification value of the dark area is set to the pixel value located at the one-third position of the total number of pixels when the full-image pixel values are sorted in descending order.
CN202010031024.9A 2020-01-13 2020-01-13 Hostile sample protection method based on feature map denoising and image enhancement Active CN111259881B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010031024.9A CN111259881B (en) 2020-01-13 2020-01-13 Hostile sample protection method based on feature map denoising and image enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010031024.9A CN111259881B (en) 2020-01-13 2020-01-13 Hostile sample protection method based on feature map denoising and image enhancement

Publications (2)

Publication Number Publication Date
CN111259881A CN111259881A (en) 2020-06-09
CN111259881B true CN111259881B (en) 2023-04-28

Family

ID=70945161

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010031024.9A Active CN111259881B (en) 2020-01-13 2020-01-13 Hostile sample protection method based on feature map denoising and image enhancement

Country Status (1)

Country Link
CN (1) CN111259881B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510467B (en) * 2018-03-28 2022-04-08 西安电子科技大学 SAR image target identification method based on depth deformable convolution neural network
CN109992931B (en) * 2019-02-27 2023-05-30 天津大学 Noise compression-based migratable non-black box attack countermeasure method

Also Published As

Publication number Publication date
CN111259881A (en) 2020-06-09

Similar Documents

Publication Publication Date Title
CN106780612A (en) Object detecting method and device in a kind of image
CN111754519B (en) Class activation mapping-based countermeasure method
CN108596129A (en) A kind of vehicle based on intelligent video analysis technology gets over line detecting method
CN108805016B (en) Head and shoulder area detection method and device
CN103077539A (en) Moving object tracking method under complicated background and sheltering condition
CN108009544A (en) Object detection method and device
CN104408711A (en) Multi-scale region fusion-based salient region detection method
CN112101195B (en) Crowd density estimation method, crowd density estimation device, computer equipment and storage medium
CN114119676A (en) Target detection tracking identification method and system based on multi-feature information fusion
CN113515774B (en) Privacy protection method for generating countermeasure sample based on projection gradient descent method
CN106570499A (en) Object tracking method based on probability graph model
CN114549933A (en) Countermeasure sample generation method based on target detection model feature vector migration
CN111144274A (en) Social image privacy protection method and device facing YOLO detector
CN111783853A (en) Interpretability-based method for detecting and recovering neural network confrontation sample
CN109389105A (en) A kind of iris detection and viewpoint classification method based on multitask
CN112801021B (en) Method and system for detecting lane line based on multi-level semantic information
CN110472640A (en) A kind of target detection model prediction frame processing method and processing device
CN113240028A (en) Anti-sample block attack detection method based on class activation graph
CN111259881B (en) Hostile sample protection method based on feature map denoising and image enhancement
CN117152486A (en) Image countermeasure sample detection method based on interpretability
CN111402185B (en) Image detection method and device
CN114332982A (en) Face recognition model attack defense method, device, equipment and storage medium
CN117557790A (en) Training method of image mask generator and image instance segmentation method
Wei et al. UDR: An approximate unbiased difference-ratio edge detector for SAR images
CN113205138B (en) Face and human body matching method, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant