CN111259881A - Adversarial sample protection method based on feature map denoising and image enhancement - Google Patents
Adversarial sample protection method based on feature map denoising and image enhancement
- Publication number: CN111259881A (application CN202010031024.9A)
- Authority: CN (China)
- Prior art keywords: feature map, area, point, bright, positioning
- Prior art date: 2020-01-13
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; locating or processing of specific regions to guide the detection or recognition
- G06N 3/045: Combinations of networks
- G06N 3/08: Learning methods
- G06T 5/70: Denoising; smoothing
- G06T 7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
Abstract
The invention discloses an adversarial sample protection method based on feature map denoising and image enhancement, which comprises the following steps: on the first convolutional layer of the neural network model, slicing the target feature channel and extracting a feature map; positioning the coordinates to the brightest point of the feature map and slicing the feature map; judging whether the slice belongs to a bright area, a dark area or a robust area; if the slice belongs to a bright area, moving the positioning-point coordinates to the second-brightest point of the feature map; if the slice belongs to a dark area or a robust area, searching the feature map again for the second-brightest point; changing the pixel values of all points in the feature map slice to the brightest-point pixel value; resetting the pixel values of all points in dark areas and robust areas to 0; and merging and superimposing the processed feature maps. The method can effectively mitigate the influence of the denoising process on the neural network, so that it maintains high accuracy when recognizing clean samples and has good universality.
Description
Technical Field
The invention relates to the technical field of artificial intelligence security and information security, in particular to an adversarial sample protection method based on feature map denoising and image enhancement.
Background
Since the rise of deep learning in 2012, neural networks including CNNs (Convolutional Neural Networks), DNNs (Deep Neural Networks), GANs (Generative Adversarial Networks) and the like have gradually achieved results clearly superior to conventional target detection methods in image detection and recognition, and now occupy an important position in computer vision. However, the linear nature of neural networks makes them vulnerable to adversarial samples maliciously constructed by attackers, threatening the security of deep learning models.
In an adversarial sample attack, an attacker causes the deep learning model to misclassify by adding small linear perturbations to the input. The attack principle is as follows: the small perturbations in the adversarial sample are amplified layer by layer and, as the number of iterations increases, eventually cause the classifier output of the model to be erroneous. Attacks fall into two categories, black-box attacks and white-box attacks. In the black-box setting, the attacker cannot obtain detailed information such as the model parameters; in the white-box setting, the attacker constructs adversarial samples with full knowledge of the model.
Adversarial sample generation algorithms include: 1) the FGSM attack algorithm (Fast Gradient Sign Method), whose goal is to obtain an adversarial sample by adding a small offset in the gradient direction so as to increase the loss function; 2) the I-FGSM attack algorithm (Iterative FGSM), whose goal is to construct more accurate adversarial samples by iterating the FGSM algorithm several times, adding a small offset to the input at each step (both update rules are sketched in the code example after the list of hazards below). The main hazards posed by these attack algorithms include:
misclassification in the fields of image detection and pattern recognition;
anomaly recognition in the field of autonomous driving: Professor Dawn Song's team at the University of California deceived the artificial intelligence of an autonomous driving system by attaching stickers to a "STOP" traffic sign;
in the field of face recognition, researchers at Carnegie Mellon University found that an advanced artificial intelligence recognition system could be fooled by wearing a specially designed spectacle frame.
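For illustration, a minimal PyTorch sketch of the FGSM and I-FGSM update rules mentioned above is given below; the model, loss function, ε and step-size values are placeholders, and the [0, 1] pixel range is an assumption, none of which are taken from the patent.

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.1):
    """Single-step FGSM: add eps * sign(gradient of the loss w.r.t. the input)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Move in the gradient-sign direction to increase the loss.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def i_fgsm(model, loss_fn, x, y, eps=0.1, alpha=0.01, steps=10):
    """Iterative FGSM: repeat a small FGSM step and stay within an eps-ball of the input."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, loss_fn, x_adv, y, eps=alpha)
        # Project back into the allowed perturbation range around the original input.
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv
```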
In view of the harm caused by adversarial sample attacks, it is necessary to provide a protection method that is reliable, stable and effective.
There are three existing categories of methods for protecting against adversarial samples.
(1) Adversarial training: adversarial samples are added to the training set so that the model learns the corresponding data, which amounts to a form of regularization.
(2) Distillation: the model is trained with soft labels (soft targets), which smooths its gradients and makes the gradient information harder for an attacker to obtain.
(3) Denoising: a denoising operation is applied to the input to weaken or eliminate the noise introduced by the attacker.
Existing adversarial sample protection algorithms achieve good results in defending against adversarial samples, but have shortcomings in practice. The distillation method is difficult to apply when the task is large-scale and the model is complex, and denoising methods cause the picture to lose part of its information. Moreover, given the complexity of model structures, existing protection methods are difficult to migrate between models, which undoubtedly makes the protection task harder. In addition, some existing protection methods increase the complexity of the model and greatly increase the amount of computation.
Disclosure of Invention
In order to improve the robustness of deep learning models against adversarial samples, the invention provides an adversarial sample protection method based on feature map denoising and image enhancement. Without changing the model structure, the feature map space is modified to achieve denoising.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
An image neural network model is constructed, the neural network model comprising three convolutional layers, and the following operations are carried out on the first convolutional layer (an illustrative code sketch of this pipeline is given after step S6):
S1, slicing the target feature channel and extracting a feature map;
S2, moving the positioning-point coordinates to the brightest point of the feature map, and slicing the feature map with the brightest point as the center;
S3, judging whether the slice belongs to a bright area, a dark area or a robust area; if the slice belongs to a bright area, moving the positioning-point coordinates to the second-brightest point in the feature map; if the slice belongs to a dark area or a robust area, searching the feature map again for the second-brightest point and positioning the positioning-point coordinates to that point;
S4, repeating step S3 until the number of searches reaches a preset number, and changing the pixel values of all points in the feature map slice to the brightest-point pixel value;
S5, resetting the pixel values of all points in dark areas and robust areas to 0 to remove noise;
S6, merging and superimposing the feature maps processed in the above steps, the size of the merged feature map being equal to that of the original feature channel.
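By way of illustration only, the following Python/NumPy sketch follows steps S1 to S6 under simplifying assumptions: the feature tensor has shape (channels, H, W), the region test is a placeholder based on simple quantile thresholds, and the slice radius and search budget are arbitrary defaults (the invention's own criteria are given in the preferred embodiments below). It is a sketch of the flow, not the patented implementation.

```python
import numpy as np

def classify_slice(patch, bright_thr, dark_thr):
    """Very rough region test for a slice: bright, dark, or robust (in between)."""
    m = patch.mean()
    if m >= bright_thr:
        return "bright"
    if m <= dark_thr:
        return "dark"
    return "robust"

def denoise_feature_maps(features, radius=3, max_searches=5):
    """Apply steps S1-S6 to a (C, H, W) stack of first-layer feature maps."""
    out = np.zeros(features.shape, dtype=float)
    for c, fm in enumerate(features):                        # S1: one map per channel
        fm = fm.astype(float)
        bright_thr = np.quantile(fm, 0.9)                    # placeholder thresholds only
        dark_thr = np.quantile(fm, 0.3)
        brightest = fm.max()
        y, x = np.unravel_index(fm.argmax(), fm.shape)       # S2: locate the brightest point
        keep = np.zeros(fm.shape, dtype=bool)
        for _ in range(max_searches):                        # S3/S4: repeated relocation
            sl = (slice(max(y - radius, 0), y + radius + 1),
                  slice(max(x - radius, 0), x + radius + 1))
            if classify_slice(fm[sl], bright_thr, dark_thr) == "bright":
                keep[sl] = True                              # slice is part of the feature
            # relocate to the next-brightest point outside the kept region (simplified search)
            masked = np.where(keep, -np.inf, fm)
            if not np.isfinite(masked).any():
                break
            y, x = np.unravel_index(masked.argmax(), fm.shape)
        out[c] = np.where(keep, brightest, 0.0)              # S4: brighten kept points,
                                                             # S5: zero dark/robust points
    return out                                               # S6: merged stack, same shape
```

In this sketch the merge in S6 simply restacks the processed channels, so the output keeps the original (channels, H, W) shape of the feature channel.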
Preferably, in S3 a depth-first search algorithm is used to search the feature map for the second-brightest point.
Further, the depth-first search algorithm updates the current coordinates recursively, taking the coordinates of the second-brightest point as the positioning-coordinate parameter for the next recursion.
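As an illustration of this recursion, the hypothetical sketch below relocates the positioning coordinates depth-first, passing the next-brightest unprocessed neighbour into the following recursive call; the 8-neighbourhood and the one-third pixel budget follow the preferred embodiment described later, while the function and variable names are assumptions.

```python
import numpy as np

def relocate(fm, y, x, processed, budget):
    """Depth-first relocation: recurse with the next-brightest neighbour as the new anchor."""
    if processed.sum() > budget:          # stop once the processed-pixel budget is exceeded
        return
    processed[y, x] = True
    # candidate neighbours of the current positioning coordinate
    nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy or dx)
            and 0 <= y + dy < fm.shape[0] and 0 <= x + dx < fm.shape[1]
            and not processed[y + dy, x + dx]]
    if not nbrs:
        return
    ny, nx = max(nbrs, key=lambda p: fm[p])   # the second-brightest (next-brightest) point
    relocate(fm, ny, nx, processed, budget)   # its coordinates seed the next recursion

# usage sketch: start at the brightest point of a 28 x 28 feature map `fm`
# processed = np.zeros_like(fm, dtype=bool)
# y0, x0 = np.unravel_index(fm.argmax(), fm.shape)
# relocate(fm, y0, x0, processed, budget=fm.size // 3)
```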
Preferably, judging whether the slice belongs to a bright area, a dark area or a robust area specifically comprises: judging whether the positioning point lies on the boundary of an effective connected feature; if so, processing the connected feature in both directions along the boundary; if not, searching the slice for the second-brightest point; after the slice is judged to be a bright area, taking the second-brightest point from the cross-shaped area centered on the current positioning coordinates, i.e. the second-brightest-point coordinates must be directly adjacent to the current positioning coordinates; examining the re-positioning coordinates and, if they do not satisfy the condition, ending the recursive call; the condition is that the number of processed pixels in the whole feature map does not exceed one third of the total number of pixels in the whole image.
Further, a central-axis boundary determination algorithm is used to judge whether the positioning point lies on the boundary of an effective connected feature.
Furthermore, the central-axis boundary determination algorithm judges whether the axis is a boundary by comparing the pixel values of symmetric points on both sides of the central axis with a bright-area determination value and a dark-area determination value; the dark-area determination value is set to the maximum of the median and the mean of the pixel values of the whole image, and the bright-area determination value is the dark-area determination value plus a corresponding parameter of 0.05.
Preferably, the identification value of the bright area is set to the pixel value located at one third of the total number of pixels when the pixel values of the whole image are sorted in ascending order.
Preferably, the identification value of the dark area is set to the pixel value located at one third of the total number of pixels when the pixel values of the whole image are sorted in descending order.
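To make the two pairs of threshold values concrete, a small NumPy sketch is given below; it follows the text above literally, assumes activation values normalised to [0, 1] (suggested by the 0.05 offset), and the function names are illustrative only.

```python
import numpy as np

def determination_values(img, offset=0.05):
    """Bright/dark-area determination values used by the central-axis boundary test."""
    dark = max(np.median(img), img.mean())   # max of the median and the mean of the whole image
    bright = dark + offset                   # dark-area value plus the 0.05 parameter
    return bright, dark

def identification_values(img):
    """Bright/dark-area identification values at the one-third positions of the sorted pixels."""
    flat = np.sort(img.ravel())              # ascending order
    k = flat.size // 3
    bright_id = flat[k]                      # one third into the ascending order (as stated)
    dark_id = flat[::-1][k]                  # one third into the descending order (as stated)
    return bright_id, dark_id
```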
While improving the robustness of the neural network in recognizing samples, the method can effectively mitigate the influence of the denoising process on the neural network, so that higher accuracy is maintained when recognizing clean samples. Because the method operates in the feature map dimension it has better universality, and because the model structure is not changed it effectively avoids the increase in model complexity that might otherwise occur. Tests on the test set verify that the invention can effectively defend against multiple attack algorithms such as FGSM.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an adversarial sample protection method based on feature map denoising and image enhancement according to an embodiment of the present invention;
FIG. 2 compares the accuracy of the method of the embodiment of FIG. 1 with that of the prior art on a test set of adversarial samples generated by the FGSM algorithm;
FIG. 3 compares the accuracy of the method of the embodiment of FIG. 1 with that of the prior art on a test set of adversarial samples generated by the I-FGSM algorithm;
FIG. 4 shows the comparative effect before and after applying the method of FIG. 1: (a) before applying the method of this embodiment, and (b) after applying the method of this embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in FIG. 1, the present embodiment is an adversarial sample protection method based on feature map denoising and image enhancement, comprising the following steps:
Step 1, slicing the target feature channel and extracting a feature map;
Step 2, moving the positioning-point coordinates to the brightest point of the feature map, and slicing the feature map with the brightest point as the center;
Step 3, judging whether the slice belongs to a bright area, a dark area or a robust area; if the slice belongs to a bright area, moving the positioning-point coordinates to the second-brightest point of the feature map; if the slice belongs to a dark area or a robust area, searching the feature map again for the second-brightest point and positioning the positioning-point coordinates to that point;
Step 4, repeating Step 3 until the number of searches reaches a preset number, and changing the pixel values of all points in the feature map slice to the brightest-point pixel value;
Step 5, resetting the pixel values of all points in dark areas and robust areas to 0 to remove noise;
Step 6, merging and superimposing the processed feature maps, the size of the merged feature map being equal to that of the original feature channel.
In this embodiment, the feature channels extracted from the first convolutional layer are selected; the feature map size is 28 × 28 pixels and the number of slices is 32. The experimental subjects are the adversarial sample data sets generated by the FGSM algorithm and the I-FGSM algorithm on the MNIST data set, whose images are 28 × 28 pixels.
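For reference, a hypothetical PyTorch model consistent with the shapes mentioned here (three convolutional layers, a first layer producing 32 feature maps of 28 × 28 on MNIST) could look as follows; the kernel sizes, the channel counts of the later layers and the classifier head are assumptions and are not specified by the patent.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Three convolutional layers; padding=1 keeps the first-layer maps at 28 x 28."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)   # 32 maps of 28 x 28
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.conv3 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 28 * 28, num_classes))

    def forward(self, x):
        f1 = torch.relu(self.conv1(x))          # first-layer feature maps to be denoised
        f2 = torch.relu(self.conv2(f1))
        f3 = torch.relu(self.conv3(f2))
        return self.head(f3), f1                # logits and the 32 x 28 x 28 feature maps

# usage sketch:
# model = SmallCNN()
# logits, feature_maps = model(torch.rand(1, 1, 28, 28))   # feature_maps: (1, 32, 28, 28)
```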
In a preferred embodiment, slicing the feature map with the brightest point as the center in Step 2 means slicing with the brightest point as the center and a radius of 3 pixels.
In a preferred embodiment, the search for the second-brightest point in the feature map uses a depth-first search algorithm; specifically, the current coordinates are updated recursively, and the coordinates of the second-brightest point are used as the positioning-coordinate parameter for the next recursion, so as to search and judge continuously.
In this embodiment, the identification value of the bright area is set to the pixel value located at one third of the total number of pixels when the pixel values of the whole image are sorted in ascending order.
In this embodiment, the identification value of the dark area is set to the pixel value located at one third of the total number of pixels when the pixel values of the whole image are sorted in descending order.
When judging effective connected features, i.e. when deciding that an area is a bright area rather than a robust area or a dark area, the following criteria are adopted:
1. Judge whether the positioning point lies on the boundary of an effective connected feature. If so, the feature is processed in both directions along the boundary. In this embodiment, a central-axis boundary determination algorithm is employed. The dark-area determination value is set to the maximum of the median and the mean of the pixel values of the whole image, and the bright-area determination value is the dark-area determination value plus a corresponding parameter, e.g. 0.05. The central-axis boundary determination algorithm judges whether the axis is a boundary by comparing the pixel values of symmetric points on both sides of the central axis with the bright-area determination value and the dark-area determination value.
2. After the slice is judged to be a bright area, the second-brightest point is taken from the cross-shaped area centered on the current positioning coordinates, i.e. its coordinates must be directly adjacent to the current positioning coordinates, so as to ensure the connectivity of the effective feature.
3. Review the re-positioning coordinates. If the condition is not met, the recursive call is ended. The new positioning coordinates must satisfy the following condition: the number of processed pixels in the whole feature map does not exceed one third of the total number of pixels in the whole image.
In one embodiment, the number of pixels that have already been determined and modified to the brightest value among the 8 directional pixel positions at a radius of 2 around the positioning point is calculated, and this number must not exceed 5. This limitation helps to prevent erroneous relocation caused by an overly large disturbance area or by the disturbance bordering the effective feature boundary, and prevents the feature map from being damaged by a flood of bright-area positioning points.
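A possible implementation of this check is sketched below: it counts, among the 8 directional positions at a distance of 2 from the positioning point, how many have already been modified to the brightest value, and allows the positioning only when the count does not exceed 5. The function and variable names are illustrative.

```python
import numpy as np

# the 8 directions (N, NE, E, SE, S, SW, W, NW)
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def direction_check(modified, y, x, radius=2, limit=5):
    """Return True if a positioning at (y, x) is still allowed."""
    h, w = modified.shape
    count = 0
    for dy, dx in DIRECTIONS:
        ny, nx = y + dy * radius, x + dx * radius
        if 0 <= ny < h and 0 <= nx < w and modified[ny, nx]:
            count += 1          # this directional square was already set to the brightest value
    return count <= limit       # abandon the positioning if more than 5 squares are modified
```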
In this embodiment, three limitations, namely a relocation limitation, an area relocation limitation and a direction positioning limitation, are adopted to enhance the processing effect on the sample, effectively prevent the damage caused by over-processing of the feature map, and increase the robustness of the feature map (a bookkeeping sketch in code follows the list below).
1. Relocation limitation: the number of times the positioning coordinates may be relocated to the brightest point of the whole map.
2. Area relocation limitation: the number of times the positioning coordinates may be repositioned within the same area.
3. Direction positioning limitation: for each positioning, if the number of the 8 directional squares already positioned exceeds the limiting parameter, the positioning operation is abandoned. The direction positioning limitation parameter is set to 5, and the limited range, i.e. the distance between a directional square and the positioning point, is set to 2.
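The three limitations could be tracked with a small bookkeeping structure such as the hypothetical sketch below; the maximum counts for the first two limitations are left as parameters because the patent does not fix them, and direction_check refers to the sketch given after the radius-2 check above.

```python
class RelocationLimits:
    """Bookkeeping for the relocation, area-relocation and direction positioning limitations."""
    def __init__(self, max_reloc, max_area_reloc):
        self.max_reloc = max_reloc            # limitation 1: relocations to the brightest point
        self.max_area_reloc = max_area_reloc  # limitation 2: relocations within the same area
        self.reloc_count = 0
        self.area_counts = {}                 # area id -> number of relocations in that area

    def allow(self, area_id, modified, y, x):
        """True if one more relocation at (y, x) inside `area_id` is still permitted."""
        if self.reloc_count >= self.max_reloc:
            return False
        if self.area_counts.get(area_id, 0) >= self.max_area_reloc:
            return False
        return direction_check(modified, y, x, radius=2, limit=5)   # limitation 3 (see above)

    def record(self, area_id):
        self.reloc_count += 1
        self.area_counts[area_id] = self.area_counts.get(area_id, 0) + 1
```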
The numerical values listed above merely illustrate one aspect of a preferred embodiment; they are given by way of example, are not limiting, and other choices are possible that cannot all be enumerated here. All such embodiments fall within the scope of the present invention.
The technical means disclosed in the scheme of the invention are not limited to those disclosed in the above embodiments, but also include technical schemes formed by any combination of the above technical features.
Claims (8)
1. An adversarial sample protection method based on feature map denoising and image enhancement, characterized by comprising the following steps:
constructing an image neural network model, the neural network model comprising three convolutional layers, and carrying out the following operations on the first convolutional layer:
S1, slicing the target feature channel and extracting a feature map;
S2, moving the positioning-point coordinates to the brightest point of the feature map, and slicing the feature map with the brightest point as the center;
S3, judging whether the slice belongs to a bright area, a dark area or a robust area; if the slice belongs to a bright area, moving the positioning-point coordinates to the second-brightest point in the feature map; if the slice belongs to a dark area or a robust area, searching the feature map again for the second-brightest point and positioning the positioning-point coordinates to that point;
S4, repeating step S3 until the number of searches reaches a preset number, and changing the pixel values of all points in the feature map slice to the brightest-point pixel value;
S5, resetting the pixel values of all points in dark areas and robust areas to 0 to remove noise;
S6, merging and superimposing the feature maps processed in the above steps, the size of the merged feature map being equal to that of the original feature channel.
2. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 1, characterized in that in S3 a depth-first search algorithm is used to search the feature map for the second-brightest point.
3. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 2, characterized in that the depth-first search algorithm updates the current coordinates recursively and takes the coordinates of the second-brightest point as the positioning-coordinate parameter for the next recursion.
4. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 1, characterized in that judging whether the slice belongs to a bright area, a dark area or a robust area specifically comprises:
judging whether the positioning point lies on the boundary of an effective connected feature; if so, processing the connected feature in both directions along the boundary; if not, searching the slice for the second-brightest point;
after the slice is judged to be a bright area, taking the second-brightest point from the cross-shaped area centered on the current positioning coordinates, i.e. the second-brightest-point coordinates must be directly adjacent to the current positioning coordinates;
examining the re-positioning coordinates and, if they do not meet the condition, ending the recursive call; the condition being that the number of processed pixels in the whole feature map does not exceed one third of the total number of pixels in the whole image.
5. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 4, characterized in that a central-axis boundary determination algorithm is used to judge whether the positioning point lies on the boundary of an effective connected feature.
6. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 5, characterized in that the central-axis boundary determination algorithm judges whether the axis is a boundary by comparing the pixel values of symmetric points on both sides of the central axis with a bright-area determination value and a dark-area determination value; the dark-area determination value being set to the maximum of the median and the mean of the pixel values of the whole image.
7. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 4, characterized in that the identification value of the bright area is set to the pixel value located at one third of the total number of pixels when the pixel values of the whole image are sorted in ascending order.
8. The adversarial sample protection method based on feature map denoising and image enhancement according to claim 4, characterized in that the identification value of the dark area is set to the pixel value located at one third of the total number of pixels when the pixel values of the whole image are sorted in descending order.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010031024.9A CN111259881B (en) | 2020-01-13 | 2020-01-13 | Adversarial sample protection method based on feature map denoising and image enhancement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010031024.9A CN111259881B (en) | 2020-01-13 | 2020-01-13 | Adversarial sample protection method based on feature map denoising and image enhancement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111259881A true CN111259881A (en) | 2020-06-09 |
CN111259881B CN111259881B (en) | 2023-04-28 |
Family
ID=70945161
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010031024.9A Active CN111259881B (en) | 2020-01-13 | 2020-01-13 | Adversarial sample protection method based on feature map denoising and image enhancement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111259881B (en) |
- 2020
- 2020-01-13: CN application CN202010031024.9A, patent CN111259881B (en), status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108510467A (en) * | 2018-03-28 | 2018-09-07 | 西安电子科技大学 | SAR image target recognition method based on variable depth shape convolutional neural networks |
CN109992931A (en) * | 2019-02-27 | 2019-07-09 | 天津大学 | A kind of transportable non-black box attack countercheck based on noise compression |
Non-Patent Citations (2)
Title |
---|
ZHUOBIAO QIAO, ET AL.: "Toward Intelligent Detection Modelling for Adversarial Samples in Convolutional Neural Networks", IEEE Xplore *
WEI Fan, et al.: "Improving single-model robustness using feature fusion and overall diversity", Journal of Software (软件学报) *
Also Published As
Publication number | Publication date |
---|---|
CN111259881B (en) | 2023-04-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |