CN117253161A - Remote sensing image depth recognition method based on feature correction and multistage countermeasure defense - Google Patents
- Publication number
- CN117253161A (application CN202311200120.1A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- countermeasure
- sensing image
- model
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/094—Adversarial learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Abstract
The application relates to a remote sensing image depth recognition method based on feature correction and multistage countermeasure defense. The method comprises the following steps: performing a countermeasure attack on the remote sensing data set according to the PGD algorithm and adding disturbance to prepare countermeasure samples; adding a feature correction module into the middle layers of the convolutional neural base sub-networks to construct an integrated model; performing countermeasure training on the integrated model according to the countermeasure samples and a preset countermeasure loss function to obtain a deep integration network model; labeling the remote sensing data set and the countermeasure samples, training a plurality of countermeasure detectors on each sub-network of the deep integration network model according to the labeled samples, and integrating the output of each countermeasure detector through a logistic regression classifier to construct a passive defense module; and sequentially connecting the passive defense module and the deep integration network model to construct a remote sensing image depth recognition model, and carrying out image recognition on the remote sensing image to be detected according to the remote sensing image depth recognition model. By adopting the method, the adversarial robustness of the unmanned aerial vehicle visual recognition system can be improved.
Description
Technical Field
The application relates to the technical field of remote sensing image recognition, in particular to a remote sensing image depth recognition method based on feature correction and multistage countermeasure defense.
Background
In recent years, with the popularization of deep neural networks, the automatic interpretation performance of high-resolution remote sensing images has improved greatly. Meanwhile, more modern unmanned aerial vehicles are equipped with visual navigation and recognition systems based on deep neural networks, which support on-board reasoning over images captured in real time and can rapidly provide useful image interpretation and analysis for civil applications.
While deep neural network models have achieved considerable success, they exhibit serious vulnerability when facing countermeasure samples, which offers a new approach for countering unmanned aerial vehicles. After obtaining the original input, an attacker can add elaborate, imperceptible disturbances to the original data, maliciously degrading the performance of the on-board depth recognition model and causing great damage. For example, when an intelligent unmanned aerial vehicle performs target recognition and tracking tasks, an adversary may want a high-value target to evade recognition so that the unmanned aerial vehicle tracks another target, or may disrupt the visual navigation of the unmanned aerial vehicle and induce it to land in an emergency; this places higher demands on the perception of the surrounding environment by visual navigation systems based on deep neural network models. An attacker can illegally access the image transmission channel between the drone and its controller and manipulate the real-time telemetry images from the sensor while the target recognition task is being performed, thereby causing mispredictions. Furthermore, previous studies have demonstrated that, owing to the similarity of feature representations, countermeasure samples generated on a proxy model can mislead the target model with high probability. Such transferable attacks greatly reduce the difficulty of initiating an attack: the attacker does not need detailed information such as the internal structure and parameters of the target model, and this black-box attack mode further increases the security threat that countermeasure samples pose to artificial intelligence applications. Defenders therefore need to design more robust depth recognition systems combined with countermeasure defense techniques to resist countermeasure attacks.
Currently, the large number of countermeasure defense methods proposed by researchers for natural images can be divided into two main categories: active defense and passive defense. The purpose of active defense is to raise the robust recognition rate of the target network on countermeasure samples while maintaining high recognition accuracy on original samples. However, it is impractical to produce a sufficiently robust model, and inconsistent decisions between robust and non-robust classifiers are not necessarily caused by countermeasure disturbances. Thus, passive defense (i.e., countermeasure detection) is seen as an alternative solution that treats whether an input is attacked as a binary classification problem. For both types of defense, most current strategies focus on a single model. Even after robust enhancement, however, a single model may again be targeted by stronger and unknown attacks. Integrating multiple deep neural network models is therefore considered in seeking stronger countermeasure robustness. When the errors of the sub-models are uncorrelated and their predictions are diverse, an integrated model may be more robust than a single one. Training of each sub-model needs to be completely independent, with the diversity among members depending on the randomness of the initialization and learning processes. However, merely assembling several completely independently trained sub-models and fusing their predictions at the output layer is not very efficient; the complex feature representations that deep neural networks extract from remote sensing images must also be fully exploited. Some scholars have proposed deep integration defense strategies such as integrated countermeasure detectors in passive defense, integrated countermeasure training in active defense, and various training loss functions that limit the transfer of countermeasure samples between sub-models.
However, these methods are relatively independent and do not combine the complementary characteristics of active and passive defense, so the robustness of the countermeasure framework used when an unmanned aerial vehicle performs safety-critical tasks such as scene or target recognition remains low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a remote sensing image depth recognition method based on feature correction and multi-stage countermeasure defense, which can improve the countermeasure robustness of an unmanned aerial vehicle visual recognition system.
A remote sensing image depth recognition method based on feature correction and multi-stage countermeasure defense, the method comprising:
acquiring a remote sensing data set; performing countermeasure attack on the remote sensing data set according to the PGD algorithm and adding disturbance to prepare a countermeasure sample; the challenge sample comprises a remote sensing image and an image after challenge;
adding a characteristic correction module into the middle layers of the convolutional neural base sub-networks to construct an integrated model; performing countermeasure training on the integrated model according to the countermeasure sample and a preset countermeasure loss function to obtain a deep integrated network model;
labeling a remote sensing data set and countermeasure samples, training a plurality of countermeasure detectors on each sub-network of the deep integration network model according to the labeled samples, and integrating the output of each countermeasure detector through a logistic regression classifier to construct a passive defense module;
and sequentially connecting the passive defense module and the depth integration network model to construct a remote sensing image depth recognition model, and carrying out image recognition on the remote sensing image to be detected according to the remote sensing image depth recognition model.
In one embodiment, performing image recognition on a remote sensing image to be detected according to a remote sensing image depth recognition model includes:
the method comprises the steps that a remote sensing image to be detected passes through a passive defense module to obtain a plurality of confidence degrees of the remote sensing image to be detected;
averaging a plurality of confidence degrees of the remote sensing image to be detected to obtain an overall confidence degree;
presetting a confidence coefficient threshold value, and if the overall confidence coefficient is larger than the confidence coefficient threshold value, judging the remote sensing image to be detected as an countermeasure sample, and refusing the remote sensing image to be detected to carry out the next step; if the overall confidence coefficient is smaller than the confidence coefficient threshold value, judging that the remote sensing image to be detected is a normal sample, and passing through the remote sensing image to be detected;
and transmitting the remote sensing image to be detected passing through the passive defense module to the deep integration network model for robust recognition, and obtaining the recognition category of the remote sensing image to be detected.
In one embodiment, the process of acquiring the remote sensing dataset includes:
acquiring a plurality of remote sensing images, and performing a cropping operation and normalization on the remote sensing images to obtain a normalized image set; the normalized image set is randomly divided into a training set and a testing set, the training set and the testing set are preprocessed, and the data of the images in the training set and the testing set are linearly converted into a data set with mean 0 and variance 1, so that the remote sensing data set is obtained.
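As a minimal sketch of the preprocessing step just described (a hypothetical stdlib-only helper, not the patent's actual code), pixel values are first compressed into [0, 1] and then linearly converted to mean 0 and variance 1:

```python
import math

def to_unit_range(pixels, max_value=255.0):
    """Compress raw pixel values to the range [0, 1]."""
    return [p / max_value for p in pixels]

def standardize(values):
    """Linearly convert data to mean 0 and variance 1 (population variance)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var) or 1.0  # guard against constant images
    return [(v - mean) / std for v in values]

raw = [0, 64, 128, 192, 255]   # illustrative pixel values
z = standardize(to_unit_range(raw))
```

In practice the same transform would be applied per channel over the whole training set; the flat list here only illustrates the arithmetic.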
In one embodiment, the method for making a challenge sample by performing a challenge attack and adding a disturbance to a remote sensing data set according to a PGD algorithm comprises:
performing a counterattack on the remote sensing data set according to the PGD algorithm, and adding disturbance to make a countermeasure sample, iterated as

x_{n+1} = Π_{x+S}( x_n + α · sign( ∇_x J(x_n, t) ) ),  with  J(x, t) = max( max_{i≠t} Z(x)_i − Z(x)_t , −κ )

wherein x_{n+1} is the countermeasure sample obtained by iterating the gradient of a remote sensing image x in the remote sensing data set n times, Π_{x+S} is the random sphere projection onto the sphere S around x, ε is the disturbance limit, α is the attack step length, J is the C&W attack loss, Z is the softmax-layer output, t is the prediction category, i is any category other than t, and −κ is the noise limit.
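The PGD iteration with a C&W-style attack loss can be sketched on a toy problem. The two-class linear "model" and every constant below are illustrative assumptions, not the patent's network; the sketch only shows the ascend-then-project structure:

```python
def logits(x):
    # Z(x): softmax-layer input of a toy two-class linear model.
    return [2.0 * x, -1.0 * x]

GRADS = [2.0, -1.0]  # d(logit)/dx for the toy model above

def cw_loss_grad(x, t, kappa=10.0):
    """Gradient of J = max(max_{i != t} Z_i - Z_t, -kappa) w.r.t. x."""
    z = logits(x)
    i = max((j for j in range(len(z)) if j != t), key=lambda j: z[j])
    if z[i] - z[t] <= -kappa:
        return 0.0  # the -kappa branch is active; gradient vanishes
    return GRADS[i] - GRADS[t]

def sign(v):
    return (v > 0) - (v < 0)

def pgd(x0, t, eps=0.3, alpha=0.1, steps=10):
    x = x0
    for _ in range(steps):
        x = x + alpha * sign(cw_loss_grad(x, t))  # ascend the attack loss
        x = min(max(x, x0 - eps), x0 + eps)       # project back into the eps-ball
    return x

x_adv = pgd(1.0, t=0)
```

On images the same loop runs per pixel with an L∞ projection; here the margin Z_t − Z_i shrinks until the iterate sits on the ε boundary.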
In one embodiment, the process of adding the feature correction module includes:
the process of adding the feature correction module includes a separation stage and a recalibration stage; the characteristic correction module comprises a separation network and a full-connection network; the fully connected network is an auxiliary layer of the integrated model;
in the separation stage, defining a separation network for learning the robustness of the feature unit and outputting a robustness map; setting a soft mask according to the robustness map, and decomposing the feature map by utilizing the soft mask and a preset feature threshold to obtain a robust feature and a non-robust feature;
adding the robust features and the non-robust features into the fully connected network, guiding the separation network to distribute higher robustness scores for feature units which are helpful for the auxiliary layer to make correct decisions according to a preset first loss function, and helping the auxiliary layer to make correct predictions; correct predictions include correct robust features and correct non-robust features;
calibrating the correct non-robust features according to the calibration network to obtain calibrated non-robust features;
in the recalibration stage, recalibrating the calibrated non-steady features according to the auxiliary layer to obtain recalibrated non-steady features;
and outputting a characteristic diagram according to the robust characteristic and the non-robust characteristic after recalibration.
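The separation-then-recalibration flow can be sketched as follows. The sigmoid mask, the threshold, and the fixed re-weighting factor standing in for the auxiliary-layer recalibration are all illustrative assumptions:

```python
import math

def soft_mask(robustness_map, temperature=1.0):
    """Turn per-unit robustness scores into soft weights in (0, 1)."""
    return [1.0 / (1.0 + math.exp(-r / temperature)) for r in robustness_map]

def feature_correction(features, robustness_map, threshold=0.5, scale=0.5):
    """Separate a feature map into robust/non-robust parts via the soft mask
    and a preset threshold, recalibrate the non-robust part (here a simple
    re-weighting stand-in), and recombine into the output feature map."""
    mask = soft_mask(robustness_map)
    robust     = [f if m >= threshold else 0.0 for f, m in zip(features, mask)]
    non_robust = [f if m <  threshold else 0.0 for f, m in zip(features, mask)]
    recalibrated = [scale * f for f in non_robust]  # reactivate part of the info
    return [r + c for r, c in zip(robust, recalibrated)]

out = feature_correction([1.0, 2.0], [3.0, -3.0])
```

The point of the design is that non-robust units are attenuated and re-used rather than discarded outright.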
In one embodiment, the first loss function is defined in terms of y_c, the real label, with its corresponding confidence score, and y_c′, the incorrect label with the highest confidence score, with its corresponding confidence score.
In one embodiment, the recalibration loss function is defined in terms of the output prediction score of the auxiliary layer and the real label y_c.
In one embodiment, the countermeasure loss function is formed, for each single sub-model f_i(·) of the deep integration network model, from an adversarial term together with weighted separation and recalibration terms, where λ_sep and λ_rec are hyper-parameters, the remaining symbols denote the different weights, and the adversarial term may be the loss function of any form of countermeasure-training variant.
In one embodiment, the objective function of the countermeasure training is

min_θ E_{(x,y)∼D} [ max_{δ∈Δ} L( f_θ(x + δ), y ) ]

wherein θ represents the network parameters of the deep integration network model, x represents a remote sensing image in the remote sensing data set, y represents the real label of the remote sensing image, D represents the underlying data distribution of the remote sensing images, δ represents the generated countermeasure disturbance, Δ represents the general constraint set on the disturbance, and L is the training loss function of the deep neural network model f with parameter θ.
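The min-max structure of countermeasure training — an outer minimization over the parameters and an inner maximization over the bounded disturbance — can be sketched on a toy scalar least-squares model. The model, data, and step sizes are illustrative assumptions:

```python
def loss(theta, x, y):
    return (theta * x - y) ** 2

def worst_delta(theta, x, y, eps):
    """Inner max: for this convex 1-D loss the worst case sits on the eps boundary."""
    return max((-eps, eps), key=lambda d: loss(theta, x + d, y))

def adv_train(data, eps=0.1, lr=0.05, epochs=200):
    """Outer min: SGD on the robust (worst-case) loss."""
    theta = 0.0
    for _ in range(epochs):
        for x, y in data:
            d = worst_delta(theta, x, y, eps)
            grad = 2.0 * (theta * (x + d) - y) * (x + d)  # dL/dtheta at x + d
            theta -= lr * grad
    return theta

theta = adv_train([(1.0, 2.0), (2.0, 4.0)])
```

In a real network the inner maximization is itself approximated by PGD rather than solved in closed form; the loop structure is otherwise the same.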
In one embodiment, the countermeasure detectors comprise a local intrinsic dimensionality statistic, a kernel density estimate, and a Mahalanobis distance; obtaining a plurality of confidence degrees of the remote sensing image to be detected through the passive defense module comprises:
the remote sensing image to be detected is calculated by a passive defense module through a countermeasure detector and a logistic regression classifier, and the confidence coefficient of the remote sensing image to be detected is obtained to be
p(adv|V(x 0 ))=(1+exp(β 0 +β T ·V(x 0 ))) -1
V(x 0 )={D 1 (x 0 ),D 2 (x 0 ),D 3 (x 0 )}
D i (x 0 )={d i (a 1 ),d i (a 2 ),...,d i (a L )},i=1,2,3
Wherein d 1 Representing the local intrinsic dimension, d 2 Representing a kernel density estimate, d 3 Represents the mahalanobis distance, a L Representing a sub-model/layer extracted feature map in a deep integration network model, D i (x 0 ) Representing the score, x, counted by the challenge detector 0 Representing real-time remote sensing image of remote sensing image to be measured, beta T Is the weight vector, beta 0 Representing the weight bias.
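The logistic-regression integration of the three detector score vectors can be sketched directly from the formula above. The weights and scores below are illustrative assumptions (a trained classifier would supply β_0 and β):

```python
import math

def confidence(v, beta0, beta):
    """p(adv | V(x0)) = (1 + exp(beta0 + beta^T V(x0)))^-1, as written above;
    the sign convention is absorbed into the learned weights."""
    s = beta0 + sum(b * f for b, f in zip(beta, v))
    return 1.0 / (1.0 + math.exp(s))

# V(x0): local intrinsic dimension, kernel density and Mahalanobis scores,
# flattened over the layers (a single layer here for brevity).
v = [3.2, 0.8, 5.1]
p = confidence(v, beta0=0.5, beta=[-0.2, 0.4, -0.1])
```

One such confidence p_i is produced per sub-network, and these are later averaged into the overall decision.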
According to the remote sensing image depth recognition method based on feature correction and multistage countermeasure defense, the rich and complex features in remote sensing images are fully utilized. A feature correction module is introduced so that more non-robust features are activated during countermeasure training, improving the robustness of the deep integration network model. A multistage countermeasure defense framework is built by connecting the passive defense module to the actively reinforced deep integration recognition model, achieving the highest level of countermeasure robustness enhancement and reducing the threat that the countermeasure vulnerability of the depth recognition model brings to the intelligent unmanned aerial vehicle. In the passive defense, several countermeasure detection statistics are integrated and the two modules are connected front and back, jointly defending against the countermeasure attack images that the unmanned aerial vehicle may encounter when executing recognition tasks and realizing the defense against remote sensing countermeasure samples. When the passive defense module and the actively reinforced deep integration neural network are deployed on a modern intelligent unmanned aerial vehicle vision system and a hostile party captures the remote sensing images through the channel and attacks them maliciously, the countermeasure robustness of the system is greatly improved and the probability of task failure caused by recognition errors is reduced. In addition, a lighter-weight neural network can be used as the sub-network architecture, reducing the reasoning time and making the method suitable for edge environments.
Drawings
FIG. 1 is a flow chart of a method for remote sensing image depth identification based on feature correction and multi-level challenge defense in one embodiment;
FIG. 2 is a schematic diagram of the optimization process of the feature correction module in the deep integration model and the active enhancement model in one embodiment;
figure 3 is a schematic diagram of the passive defense module in one embodiment;
fig. 4 is a schematic flow chart of image recognition of a remote sensing image to be detected according to a remote sensing image depth recognition model in another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a remote sensing image depth recognition method based on feature correction and multi-stage countermeasure defense, including the steps of:
102, acquiring a remote sensing data set; performing countermeasure attack on the remote sensing data set according to the PGD algorithm and adding disturbance to prepare a countermeasure sample; the challenge sample includes a remote sensing image and a post-challenge image.
When the data set is prepared, the sizes of the remote sensing images entering the model are unified and the data are divided into a training set and a testing set. To complete a scene or target recognition task on remote sensing images with a deep neural network, the first step is to unify the size of the input images and normalize all data. Owing to limitations such as the resolution of remote sensing images, unifying image sizes through operations such as pooling often damages image characteristics, so all images are instead normalized to a uniform size through a cropping operation. The image values are then normalized, compressing all pixel values to between 0 and 1. The images are randomly divided into training and testing sets. In image preprocessing, the image data are linearly converted into a data set with mean 0 and variance 1.
Countermeasure samples are generated by countermeasure attacks, for example using the PGD algorithm, so that the subsequent training of the integrated model achieves greater robustness.
Step 104, adding a feature correction module to the middle layer of each convolutional neural base sub-network to construct an integrated model; performing countermeasure training on the integrated model according to the countermeasure samples and a preset countermeasure loss function to obtain a deep integration network model.
According to the method, a new feature correction module (FCM) is introduced into the middle layer of each convolutional neural base sub-network so that the adversarially trained deep neural network reactivates part of the information in non-robust features, thereby improving the countermeasure robustness of the model. The generated countermeasure samples are added to the training set to minimize the training loss, and the parameters of the neural network are optimized to learn to identify countermeasure samples together with their real labels. After active reinforcement, a deep integration network model with strong countermeasure robustness and high generalization capability is obtained. Here, 3 (i.e., N = 3) randomly initialized, completely independent ResNet-18 network structures are chosen for the countermeasure training process. ResNet-18 is a relatively lightweight neural network architecture commonly used in resource-constrained edge environments such as modern unmanned aerial vehicles.
And 106, marking the remote sensing data set and the countermeasure samples, training a plurality of countermeasure detectors on each sub-network of the deep integration network model according to the marked samples, and integrating the output of each countermeasure detector through a logistic regression classifier to construct a passive defense module.
The deep integration network is further used for training the passive defense module. When the output of the baseline model does not agree with the output of the robust model, the passive defense shows its value in determining whether this is because the input is under attack. In the passive defense part, the application adopts an integration strategy in two places. Given that most remote sensing images carry complex information and rich features, the remote sensing data set and the countermeasure samples differ statistically in their representations at the middle layers of the deep neural network. The remote sensing data set is labeled 0 and the countermeasure samples are labeled 1, and on each sub-network f_i (i = 1, 2, ..., N) several countermeasure detectors d_1(·), d_2(·), ..., d_n(·) are trained. The output of each detector is integrated by a logistic regression classifier, which returns the confidence p_i (i = 1, 2, ..., N) that the input is a countermeasure sample. Repeating the above steps on the N sub-networks of the deep integration network model yields the passive defense module.
The possible output results of the passive detection module can be divided into four categories: if the input is a countermeasure sample and the passive detection module judges correctly, it is marked TP; if it judges wrongly, FN. If the input is a normal sample and the module judges correctly, it is marked TN; if wrongly, FP. The passive detection module is connected to the actively reinforced depth integration model: TP and FP are rejected first, while FN and TN are passed on to be identified by the depth integration model.
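The front-to-back connection of the two modules amounts to a simple gate: anything the detector flags (true alarms and false alarms alike) is rejected, and everything else goes to the robust recognizer. A minimal sketch, with the recognizer as a placeholder callable:

```python
def route(flagged_as_adversarial, recognize):
    """Return 'rejected' for flagged inputs (TP and FP), otherwise the
    recognition result of the depth integration model (TN and FN)."""
    if flagged_as_adversarial:
        return "rejected"
    return recognize()

result = route(False, lambda: "airport")
```

Missed attacks (FN) thus still reach the recognizer, which is why the integration model itself is actively reinforced.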
And step 108, sequentially connecting the passive defense module and the depth integration network model to construct a remote sensing image depth recognition model, and carrying out image recognition on the remote sensing image to be detected according to the remote sensing image depth recognition model.
Aiming at the complex information and rich features of remote sensing images, the method combines countermeasure training carried out independently on a plurality of depth models in active defense with the integration of a plurality of countermeasure detection statistics in passive defense, and connects the two modules front and back to jointly defend against the countermeasure attack images that an unmanned aerial vehicle may encounter when executing recognition tasks. This realizes the defense against remote sensing countermeasure samples, improves the countermeasure robustness of the unmanned aerial vehicle visual recognition system, and further improves the accuracy of remote sensing image depth recognition.
In the remote sensing image depth recognition method based on feature correction and multistage countermeasure defense, the rich and complex features in remote sensing images are fully utilized. The feature correction module causes more non-robust features to be activated during countermeasure training, improving the robustness of the deep integration network model, and the multistage countermeasure defense framework built by connecting the passive defense module to the actively reinforced deep integration recognition model realizes the highest level of countermeasure robustness enhancement, reducing the threat that the countermeasure vulnerability of the depth recognition model brings to the intelligent unmanned aerial vehicle. Integrating several countermeasure detection statistics in the passive defense and connecting the two modules front and back jointly defends against the countermeasure attack images that the unmanned aerial vehicle may encounter when executing recognition tasks, realizing the defense against remote sensing countermeasure samples. With the passive defense module and the actively reinforced deep integration neural network deployed on a modern intelligent unmanned aerial vehicle vision system, when an adversary captures the remote sensing images through the channel and attacks them maliciously, the countermeasure robustness of the system is greatly improved and the probability of task failure caused by recognition errors is reduced. In addition, a lighter-weight neural network can be used as the sub-network architecture, reducing the reasoning time and making the method suitable for edge environments.
In one embodiment, performing image recognition on a remote sensing image to be detected according to a remote sensing image depth recognition model includes:
the method comprises the steps that a remote sensing image to be detected passes through a passive defense module to obtain a plurality of confidence degrees of the remote sensing image to be detected;
averaging a plurality of confidence degrees of the remote sensing image to be detected to obtain an overall confidence degree;
presetting a confidence threshold; if the overall confidence is greater than the threshold, judging the remote sensing image to be detected to be a countermeasure sample and refusing to pass it to the next step; if the overall confidence is smaller than the threshold, judging the remote sensing image to be detected to be a normal sample and passing it through;
and transmitting the remote sensing image to be detected passing through the passive defense module to the deep integration network model for robust recognition, and obtaining the recognition category of the remote sensing image to be detected.
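The detect-then-recognize flow described in the steps above can be sketched as follows (an illustrative numpy sketch; `detector_confidences` and `ensemble_logits` are hypothetical stand-ins for the trained passive defense module and the deep integration network model):

```python
import numpy as np

# Illustrative sketch of the two-stage pipeline: the passive defense module
# scores the input; suspected countermeasure samples are rejected, and
# accepted images go to the deep integration model for robust recognition.

def recognize(image, detector_confidences, ensemble_logits, threshold=0.5):
    """Return ("rejected", None) or ("accepted", class_id)."""
    p = float(np.mean(detector_confidences(image)))  # overall confidence p
    if p > threshold:                                # p > theta: countermeasure sample
        return "rejected", None
    logits = ensemble_logits(image)                  # robust recognition
    return "accepted", int(np.argmax(logits))

# Toy stand-ins for the trained modules.
clean = np.zeros(4)
status, cls = recognize(
    clean,
    detector_confidences=lambda x: np.array([0.1, 0.2, 0.15]),
    ensemble_logits=lambda x: np.array([0.1, 2.0, 0.3]),
)
print(status, cls)
```

A high-scoring input would take the rejection branch instead, so only images judged normal ever reach the recognition model.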
In an embodiment, as shown in fig. 3 and 4, suppose a real-time remote sensing image x_0 is given, and the feature extracted by sub-model f_i (i = 1, 2, ..., N) of the depth integration model at layer l is a_l(x_0). Using the trained countermeasure detectors, a plurality of statistical index scores d_1(a_l(x_0)), d_2(a_l(x_0)) and d_3(a_l(x_0)) can be calculated from the hidden-layer feature a_l(x_0). The above step is repeated over all L layers: features are extracted at each layer and input to the 3 countermeasure detectors to calculate the index scores. For convenience, the scores of the i-th countermeasure detector are denoted D_i(x_0) = {d_i(a_1), d_i(a_2), ..., d_i(a_L)}, i = 1, 2, 3, yielding three vectors, and V(x_0) = {D_1(x_0), D_2(x_0), D_3(x_0)} is recorded.
Then, a simple logistic regression classifier is trained to calculate the posterior probability, i.e. the confidence score that the input real-time remote sensing image is a countermeasure sample: p(adv|V(x_0)) = (1 + exp(β_0 + β^T · V(x_0)))^(-1), wherein β^T is a weight vector used to fit the training data and adjust the importance of the 3 countermeasure detectors. Finally, the confidence p of whether the input is predicted to be a countermeasure sample is obtained.
The second integration is performed at the decision layer: the above steps are performed simultaneously on the N sub-networks, yielding a set of confidence scores P = {p_1, p_2, ..., p_N}. These confidence scores are averaged, i.e. p = (1/N) Σ_(i=1)^N p_i, to obtain the final overall confidence p. A threshold θ is set: if p is greater than θ, the test image is judged to be a countermeasure sample; if p is smaller than θ, the test image is judged to be a normal sample.
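The two integration levels above can be sketched in numpy as follows, assuming L = 5 layers and N = 3 sub-networks; the score vectors V(x_0) are random stand-ins, and in practice β_0 and β would come from the fitted logistic regression:

```python
import numpy as np

rng = np.random.default_rng(0)

L, N = 5, 3                                 # layers per sub-network, sub-networks
beta0, beta = 0.1, rng.normal(size=3 * L)   # stand-in logistic-regression weights

def subnet_confidence(V, beta0, beta):
    """First integration: logistic regression over the 3*L detector scores
    of one sub-network, p(adv|V) = (1 + exp(beta0 + beta^T V))^(-1)."""
    return 1.0 / (1.0 + np.exp(beta0 + beta @ V))

# Stand-in score vectors V(x0), one per sub-network.
scores = [rng.normal(size=3 * L) for _ in range(N)]

# Second integration: average the N sub-network confidences and threshold.
p = float(np.mean([subnet_confidence(V, beta0, beta) for V in scores]))
is_adversarial = p > 0.5                    # compare with threshold theta
print(p, is_adversarial)
```

Note the formula is implemented exactly as written in the text, i.e. a sigmoid of the negated linear score, so a larger β^T·V(x_0) lowers the countermeasure confidence.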
In one embodiment, the process of acquiring the remote sensing dataset includes:
acquiring a plurality of remote sensing images, and performing cropping and normalization on them to obtain a normalized image set; the normalized image set is randomly divided into a training set and a testing set, both sets are preprocessed, and the image data in the training set and the testing set are linearly transformed through a standardization operation into data with mean 0 and variance 1, obtaining the remote sensing data set.
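The standardization step described above can be sketched as follows (a numpy sketch; the cropping step and the train/test split are omitted, and the per-channel statistics are an assumed design choice):

```python
import numpy as np

rng = np.random.default_rng(42)

def standardize(images):
    """Linearly transform image data to mean 0 and variance 1,
    using per-channel statistics computed over the whole set."""
    mean = images.mean(axis=(0, 1, 2), keepdims=True)
    std = images.std(axis=(0, 1, 2), keepdims=True)
    return (images - mean) / std

# Toy set of 8 RGB remote sensing crops, 32x32.
raw = rng.uniform(0, 255, size=(8, 32, 32, 3))
norm = raw / 255.0          # normalization to [0, 1]
data = standardize(norm)    # mean 0, variance 1
print(data.mean(), data.std())
```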
In one embodiment, the method for making a challenge sample by performing a challenge attack and adding a disturbance to a remote sensing data set according to a PGD algorithm comprises:
performing a countermeasure attack on the remote sensing data set according to the PGD algorithm, and adding disturbance to produce countermeasure samples as

x_adv^(n+1) = Π_(x+S)(x_adv^(n) + α · sign(∇_x L(x_adv^(n), y))), S = {δ : ||δ||_∞ ≤ ε}

f(x') = max(max_(i≠t) Z(x')_i − Z(x')_t, −κ)

wherein x_adv^(n+1) is the countermeasure sample obtained by iterating the gradient of a remote sensing image x in the remote sensing data set n+1 times, Π_(x+S) denotes projection onto the allowed perturbation set S, ε is the disturbance limit, α is the attack step length, L is the training loss, f(x') denotes the C&W attack objective, Z is the softmax-layer output, t is the prediction category, i is any category except t, and −κ is the noise limit.
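A minimal sketch of the L∞ PGD iteration (Madry et al.) against a toy logistic model with an analytic gradient; the patented method applies the same iteration to deep models on remote sensing images, and the C&W margin term is not implemented here:

```python
import numpy as np

# Illustrative PGD sketch: gradient-ascent steps on the loss, each followed
# by projection onto the L-inf ball of radius eps around the clean input.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, b, eps=0.6, alpha=0.1, steps=10):
    """x_adv^(n+1) = Proj_{||x'-x||_inf <= eps}(x_adv^n + alpha*sign(grad))."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad = (p - y) * w                         # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)      # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # projection onto x + S
    return x_adv

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1       # correctly classified: w @ x = 1.5 > 0
x_adv = pgd_linf(x, y, w, b)
print(sigmoid(w @ x) > 0.5, sigmoid(w @ x_adv) > 0.5)
```

The crafted sample stays within the ε-ball yet flips the toy model's decision, which is exactly the property the PGD-generated countermeasure samples exploit during training.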
In one embodiment, the process of adding the feature correction module includes:
the process of adding the feature correction module includes a separation stage and a recalibration stage; the characteristic correction module comprises a separation network and a full-connection network; the fully connected network is an auxiliary layer of the integrated model;
in the separation stage, defining a separation network for learning the robustness of the feature unit and outputting a robustness map; setting a soft mask according to the robustness map, and decomposing the feature map by utilizing the soft mask and a preset feature threshold to obtain a robust feature and a non-robust feature;
adding the robust features and the non-robust features into the fully connected network, guiding the separation network to distribute higher robustness scores for feature units which are helpful for the auxiliary layer to make correct decisions according to a preset first loss function, and helping the auxiliary layer to make correct predictions; correct predictions include correct robust features and correct non-robust features;
calibrating the correct non-robust features according to the calibration network to obtain calibrated non-robust features;
in the recalibration stage, recalibrating the calibrated non-robust features according to the auxiliary layer to obtain recalibrated non-robust features;
and outputting a characteristic diagram according to the robust characteristic and the non-robust characteristic after recalibration.
In particular embodiments, during general countermeasure training the algorithm encourages the model to capture robust features while ignoring some non-robust features. Non-robust features are those features that do not support the correct predictions of the model and may be targets of an attacker. However, the non-robust activations still contain a large number of discriminative cues that are not utilized to enhance the robustness of the model. For remote sensing images with rich features and complex ground information, the insufficient robustness enhancement caused by discarding non-robust features is very evident. To enable the deep neural network to reactivate part of the information in the non-robust features after countermeasure training, and considering the light weight of intelligent unmanned aerial vehicle edge devices and the need for real-time inference, as shown in fig. 2, the application introduces a new feature correction module (FCM) into each sub-model of the depth ensemble. Like a plug-in, it does not affect end-to-end countermeasure training and adds little computational overhead. The FCM is attached to an intermediate layer and comprises a separation stage and a recalibration stage.
In the separation stage, a separation network is defined which learns the robustness of each feature unit and outputs a robustness map r. The robustness map contains a score for each feature unit; the higher the score, the more robust the corresponding feature activation. For the decomposition of the feature map, a soft mask based on r is defined through the Gumbel softmax:

m = σ((log r + g_1 − log(1 − r) − g_2) / τ)

wherein σ(·) is the sigmoid function, g_1 and g_2 represent two random values sampled from the Gumbel distribution, and τ is a temperature parameter. A threshold is set: scores above this threshold correspond to the robust features, noted γ+, and the rest are the non-robust features γ−.
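The soft mask can be sketched with the binary Gumbel-softmax (binary concrete) relaxation, an assumed form consistent with the σ(·), g_1, g_2 notation above; the robustness scores `r` and temperature `tau` are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gumbel_soft_mask(r, tau=0.1):
    """Assumed binary-concrete form of the soft mask:
    m = sigmoid((log r - log(1 - r) + g1 - g2) / tau),
    with g1, g2 ~ Gumbel(0, 1); near-binary for small tau."""
    g1 = -np.log(-np.log(rng.uniform(size=np.shape(r))))
    g2 = -np.log(-np.log(rng.uniform(size=np.shape(r))))
    return sigmoid((np.log(r) - np.log(1.0 - r) + g1 - g2) / tau)

r = np.array([0.95, 0.9, 0.1, 0.05])   # robustness scores per feature unit
m = gumbel_soft_mask(r)
robust = m > 0.5                        # gamma+: units above the threshold
print(m, robust)
```

With a small temperature the mask is nearly binary, so multiplying the feature map by m and by (1 − m) approximates the hard split into γ+ and γ− while remaining differentiable.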
In order to learn the scores in the robustness map from the influence of each feature unit on prediction correctness, a fully connected network (MLP) is added as an auxiliary layer in the separation stage. The two groups of feature units γ+ and γ− are input into this network, and a first loss function

L_sep = −log p_(y_c)(γ+) − log p_(y_c')(γ−)

guides the separation network to assign higher robustness scores to the feature units that help the auxiliary layer make correct decisions, wherein y_c is the real label and p_(y_c)(γ+) is its confidence score on the robust features; y_c' is the wrong label with the highest confidence score, and p_(y_c')(γ−) is its confidence score on the non-robust features.
After obtaining γ+ and γ−, a calibration network R(·) is introduced, and the non-robust features γ− are adjusted to R(γ−). The goal of the recalibration stage is to let the non-robust activations re-acquire cues that assist the model in making correct decisions. To guide the calibration network toward this goal, the application attaches the auxiliary layer again after the recalibrated features R(γ−) and calculates the recalibration loss

L_rec = −log p_(y_c)(R(γ−))

wherein p_(y_c)(R(γ−)) is the output prediction score of the auxiliary layer for the real label y_c. By training the auxiliary layer to make correct decisions based on the recalibrated features, the calibration network is guided to adjust the non-robust activations so that they provide cues related to the ground-truth class.

After the recalibration stage, the robust features γ+ and the recalibrated non-robust features R(γ−) are combined to obtain the output feature map γ_out = γ+ + R(γ−), which is passed to the subsequent layers of the model. Through the recalibration stage, more useful cues can be captured from the non-robust activations, cues that are suppressed in conventional countermeasure training. The FCM module is added to each sub-model for countermeasure training, and the total loss function can be written as

L_total = Σ_(i=1)^N (L_adv^(f_i) + λ_sep · L_sep^(f_i) + λ_rec · L_rec^(f_i))

wherein λ_sep and λ_rec are hyperparameters controlling, for each sub-model f_i, the weights of L_sep^(f_i) and L_rec^(f_i); L_adv^(f_i) may be the loss function of any countermeasure training variant (e.g., TRADES, FAT, etc.).
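The separation and recalibration losses can be sketched from the auxiliary layer's softmax scores as follows (an assumed formulation for illustration; `fcm_losses` and the toy logits are hypothetical, not the patented implementation):

```python
import numpy as np

# Sketch of the FCM losses: L_sep drives robust features toward the true
# class and non-robust features toward their most-confident wrong class;
# L_rec drives the recalibrated features back to the true class.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fcm_losses(logits_rob, logits_nonrob, logits_recal, y_true):
    p_rob, p_non, p_rec = map(softmax, (logits_rob, logits_nonrob, logits_recal))
    order = np.argsort(p_non)[::-1]                  # classes by confidence
    y_wrong = order[0] if order[0] != y_true else order[1]
    l_sep = -np.log(p_rob[y_true]) - np.log(p_non[y_wrong])
    l_rec = -np.log(p_rec[y_true])
    return l_sep, l_rec

l_sep, l_rec = fcm_losses(
    logits_rob=np.array([0.2, 3.0, 0.1]),    # robust: confident in class 1
    logits_nonrob=np.array([2.0, 0.5, 0.3]), # non-robust: leans to class 0
    logits_recal=np.array([0.1, 2.5, 0.2]),  # recalibrated: back to class 1
    y_true=1,
)
print(l_sep, l_rec)
```

A well-recalibrated feature yields a small L_rec, since the auxiliary layer then assigns most of its probability mass to the real label.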
In one embodiment, the first loss function is

L_sep = −log p_(y_c)(γ+) − log p_(y_c')(γ−)

wherein y_c is the real label and p_(y_c)(γ+) is the corresponding confidence score; y_c' is the wrong label with the highest confidence score, and p_(y_c')(γ−) is its confidence score.
In one embodiment, the recalibration loss function is

L_rec = −log p_(y_c)(R(γ−))

wherein p_(y_c)(R(γ−)) is the output prediction score of the auxiliary layer and y_c is the real label.
In one embodiment, the countermeasure loss function is

L_total = Σ_(i=1)^N (L_adv^(f_i) + λ_sep · L_sep^(f_i) + λ_rec · L_rec^(f_i))

wherein λ_sep and λ_rec are hyperparameters, f_i(·) represents a single sub-model in the deep integration network model, L_sep^(f_i) and L_rec^(f_i) carry different weights, and L_adv^(f_i) is the loss function of any form of countermeasure training variant.
In one embodiment, the objective function of the countermeasure training is

min_θ E_((x,y)~D) [ max_(δ∈Δ) L(f_θ(x + δ), y) ]

wherein θ represents the network parameters of the deep integration network model, x represents a remote sensing image in the remote sensing data set, y represents its real label, D represents the underlying data distribution of the remote sensing images, δ represents the generated countermeasure disturbance, Δ represents the general constraint on the countermeasure disturbance, and L is the loss function of the deep neural network model f with parameter θ used in training.
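The min-max objective can be approximated by an inner PGD-style maximization over δ followed by an outer gradient step on θ; a minimal numpy sketch on a logistic model follows (illustrative only — the patent applies this scheme to the FCM-augmented deep ensemble):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adv_train_step(w, x, y, eps=0.2, alpha=0.05, steps=8, lr=0.1):
    """One outer step of min_theta E max_delta L: inner PGD on delta,
    then a gradient-descent step on the model parameters w."""
    delta = np.zeros_like(x)
    for _ in range(steps):                 # inner max over ||delta||_inf <= eps
        p = sigmoid(w @ (x + delta))
        delta = np.clip(delta + alpha * np.sign((p - y) * w), -eps, eps)
    p = sigmoid(w @ (x + delta))
    return w - lr * (p - y) * (x + delta)  # outer min: descend the robust loss

w = np.array([0.5, -0.5])
x, y = np.array([1.0, -1.0]), 1
for _ in range(100):
    w = adv_train_step(w, x, y)
clean_loss = -np.log(sigmoid(w @ x))       # loss on the clean example
print(clean_loss)
```

Because each parameter update is taken at the worst-case perturbed input, the model must carry a margin of ε, which is the mechanism behind the robustness of the countermeasure-trained ensemble.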
In one embodiment, the challenge detector includes a local intrinsic dimension, a kernel density estimate, and a mahalanobis distance; the method comprises the steps of obtaining a plurality of confidence degrees of the remote sensing image to be detected through a passive defense module, wherein the steps comprise:
the remote sensing image to be detected is calculated by the passive defense module through the countermeasure detectors and the logistic regression classifier, and the confidence of the remote sensing image to be detected is obtained as

p(adv|V(x_0)) = (1 + exp(β_0 + β^T · V(x_0)))^(-1)

V(x_0) = {D_1(x_0), D_2(x_0), D_3(x_0)}

D_i(x_0) = {d_i(a_1), d_i(a_2), ..., d_i(a_L)}, i = 1, 2, 3

wherein d_1 represents the local intrinsic dimension, d_2 represents the kernel density estimate, d_3 represents the Mahalanobis distance, a_l represents the feature map extracted at layer l of a sub-model in the deep integration network model, D_i(x_0) represents the scores counted by the i-th countermeasure detector, x_0 represents the real-time remote sensing image to be detected, β^T is the weight vector, and β_0 represents the bias.
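Of the three detectors, the Mahalanobis distance score d_3 can be sketched as follows (an assumed construction: per-class means and a shared covariance fitted on clean features; the toy data and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch of the Mahalanobis-distance detector d3: the distance of a feature
# vector to the nearest class-conditional Gaussian fitted on clean features
# (per-class means, shared covariance). Off-manifold features score high.

def mahalanobis_score(feat, means, cov_inv):
    """Minimum Mahalanobis distance of `feat` to any class mean."""
    return min(np.sqrt((feat - mu) @ cov_inv @ (feat - mu)) for mu in means)

# Fit on toy clean features from two classes.
clean0 = rng.normal(loc=0.0, size=(200, 4))
clean1 = rng.normal(loc=3.0, size=(200, 4))
means = [clean0.mean(axis=0), clean1.mean(axis=0)]
pooled = np.vstack([clean0 - means[0], clean1 - means[1]])
cov_inv = np.linalg.inv(np.cov(pooled, rowvar=False))

normal = rng.normal(loc=0.0, size=4)    # feature that resembles class 0
outlier = np.full(4, 10.0)              # off-manifold feature
print(mahalanobis_score(normal, means, cov_inv),
      mahalanobis_score(outlier, means, cov_inv))
```

The same pattern (fit statistics on clean activations, score test activations) applies per layer and per sub-model, producing the D_3(x_0) entries fed to the logistic regression.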
It should be understood that, although the steps in the flowchart of fig. 1 are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages are not necessarily performed in sequence, but may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by a computer program stored on a non-volatile computer readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read-Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as there is no contradiction in a combination, it should be considered within the scope of this description.
The above examples merely represent a few embodiments of the present application; they are described in relative detail but are not thereby to be construed as limiting the scope of the invention. It should be noted that various modifications and improvements can be made by those skilled in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.
Claims (10)
1. A remote sensing image depth recognition method based on feature correction and multi-stage countermeasure defense, the method comprising:
acquiring a remote sensing data set; performing challenge attack on the remote sensing data set according to a PGD algorithm and adding disturbance to prepare a challenge sample; the challenge sample comprises a remote sensing image and an image after challenge;
adding a characteristic correction module into the middle layers of a plurality of convolutional neural network sub-networks to construct an integrated model; performing countermeasure training on the integrated model according to the countermeasure sample and a preset countermeasure loss function to obtain a deep integrated network model;
labeling the remote sensing data set and the countermeasure samples, training a plurality of countermeasure detectors on each sub-network of the deep integration network model according to the labeled samples, and integrating the output of each countermeasure detector through a logistic regression classifier to construct a passive defense module;
and sequentially connecting the passive defense module and the depth integration network model to construct a remote sensing image depth recognition model, and carrying out image recognition on the remote sensing image to be detected according to the remote sensing image depth recognition model.
2. The method of claim 1, wherein performing image recognition on the remote sensing image to be detected according to the remote sensing image depth recognition model comprises:
the method comprises the steps that a remote sensing image to be detected passes through a passive defense module to obtain a plurality of confidence degrees of the remote sensing image to be detected;
averaging a plurality of confidence degrees of the remote sensing image to be detected to obtain an overall confidence degree;
presetting a confidence threshold; if the overall confidence is greater than the threshold, judging the remote sensing image to be detected to be a countermeasure sample and refusing to pass it to the next step; if the overall confidence is smaller than the threshold, judging the remote sensing image to be detected to be a normal sample and passing it through;
and transmitting the remote sensing image to be detected passing through the passive defense module to the deep integration network model for robust recognition, and obtaining the recognition category of the remote sensing image to be detected.
3. The method of claim 1, wherein the acquiring of the remote sensing dataset comprises:
acquiring a plurality of remote sensing images, and performing cropping and normalization on them to obtain a normalized image set; and randomly dividing the normalized image set into a training set and a testing set, preprocessing the training set and the testing set, and linearly transforming the image data in the training set and the testing set through a standardization operation into data with mean 0 and variance 1, obtaining the remote sensing data set.
4. A method according to any one of claims 1 to 3, wherein performing a challenge attack on the remote sensing data set according to the PGD algorithm and adding disturbance to prepare the challenge sample comprises:
performing a challenge attack on the remote sensing data set according to the PGD algorithm and adding disturbance to prepare challenge samples as

x_adv^(n+1) = Π_(x+S)(x_adv^(n) + α · sign(∇_x L(x_adv^(n), y))), S = {δ : ||δ||_∞ ≤ ε}

f(x') = max(max_(i≠t) Z(x')_i − Z(x')_t, −κ)

wherein x_adv^(n+1) is the challenge sample obtained by iterating the gradient of a remote sensing image x in the remote sensing data set n+1 times, Π_(x+S) denotes projection onto the allowed perturbation set S, ε is the disturbance limit, α is the attack step length, L is the training loss, f(x') denotes the C&W attack, Z is the softmax layer output, t is the prediction category, i is any category except t, and −κ is the noise limit.
5. The method of claim 1, wherein the adding feature correction module comprises:
the process of adding the feature correction module includes a separation stage and a recalibration stage; the characteristic correction module comprises a separation network and a fully-connected network; the fully connected network is an auxiliary layer of the integrated model;
in the separation stage, defining a separation network for learning the robustness of the feature unit and outputting a robustness map; setting a soft mask according to the robustness map, and decomposing a feature map by utilizing the soft mask and a preset feature threshold to obtain robust features and non-robust features;
adding the robust features and the non-robust features into the fully connected network, guiding the separation network to distribute higher robustness scores for feature units which are helpful for the auxiliary layer to make correct decisions according to a preset first loss function, and helping the auxiliary layer to make correct predictions; the correct prediction includes correct robust features and correct non-robust features;
calibrating the correct non-robust features according to a calibration network to obtain calibrated non-robust features;
in the recalibration stage, recalibrating the calibrated non-robust features according to the auxiliary layer to obtain recalibrated non-robust features;
and outputting a characteristic diagram according to the robust characteristic and the non-robust characteristic after recalibration.
6. The method of claim 5, wherein the first loss function is

L_sep = −log p_(y_c)(γ+) − log p_(y_c')(γ−)

wherein y_c is the real label and p_(y_c)(γ+) is the corresponding confidence score; y_c' is the wrong label with the highest confidence score, and p_(y_c')(γ−) is its confidence score.
7. The method of claim 5, wherein the recalibration loss function is

L_rec = −log p_(y_c)(R(γ−))

wherein p_(y_c)(R(γ−)) is the output prediction score of the auxiliary layer and y_c is the real label.
8. The method of claim 1, wherein the countermeasure loss function is

L_total = Σ_(i=1)^N (L_adv^(f_i) + λ_sep · L_sep^(f_i) + λ_rec · L_rec^(f_i))

wherein λ_sep and λ_rec are hyperparameters, f_i(·) represents a single sub-model in the deep integration network model, L_sep^(f_i) and L_rec^(f_i) carry different weights, and L_adv^(f_i) is the loss function of any form of challenge training variant.
9. The method of claim 1, wherein the objective function of the countermeasure training is

min_θ E_((x,y)~D) [ max_(δ∈Δ) L(f_θ(x + δ), y) ]

wherein θ represents the network parameters of the deep integration network model, x represents a remote sensing image in the remote sensing data set, y represents its real label, D represents the underlying data distribution of the remote sensing images, δ represents the generated countermeasure disturbance, Δ represents the general constraint on the countermeasure disturbance, and L is the loss function of the deep neural network model f with parameter θ used in training.
10. The method of claim 2, wherein the challenge detector comprises a local intrinsic dimension, a nuclear density estimate, and a mahalanobis distance; the method comprises the steps of obtaining a plurality of confidence degrees of the remote sensing image to be detected through a passive defense module, wherein the steps comprise:
the remote sensing image to be detected is calculated by the passive defense module through the challenge detectors and the logistic regression classifier, and the confidence of the remote sensing image to be detected is obtained as

p(adv|V(x_0)) = (1 + exp(β_0 + β^T · V(x_0)))^(-1)

V(x_0) = {D_1(x_0), D_2(x_0), D_3(x_0)}

D_i(x_0) = {d_i(a_1), d_i(a_2), ..., d_i(a_L)}, i = 1, 2, 3

wherein d_1 represents the local intrinsic dimension, d_2 represents the kernel density estimate, d_3 represents the Mahalanobis distance, a_l represents the feature map extracted at layer l of a sub-model in the deep integration network model, D_i(x_0) represents the scores counted by the i-th challenge detector, x_0 represents the real-time remote sensing image to be detected, β^T is the weight vector, and β_0 represents the bias.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311200120.1A CN117253161A (en) | 2023-09-18 | 2023-09-18 | Remote sensing image depth recognition method based on feature correction and multistage countermeasure defense |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117253161A true CN117253161A (en) | 2023-12-19 |
Family
ID=89134315
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117577333A (en) * | 2024-01-17 | 2024-02-20 | 浙江大学 | Multi-center clinical prognosis prediction system based on causal feature learning |
CN117577333B (en) * | 2024-01-17 | 2024-04-09 | 浙江大学 | Multi-center clinical prognosis prediction system based on causal feature learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||