CN111666985B - Deep learning adversarial sample image classification defense method based on dropout - Google Patents

Deep learning adversarial sample image classification defense method based on dropout

Info

Publication number
CN111666985B
CN111666985B, CN202010433696.2A, CN202010433696A
Authority
CN
China
Prior art keywords
training
network
result
classification
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202010433696.2A
Other languages
Chinese (zh)
Other versions
CN111666985A (en)
Inventor
韩波
陈东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202010433696.2A priority Critical patent/CN111666985B/en
Publication of CN111666985A publication Critical patent/CN111666985A/en
Application granted granted Critical
Publication of CN111666985B publication Critical patent/CN111666985B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

To counter adversarial sample attacks in the field of deep learning image classification, the randomness of dropout can be exploited to achieve an effective defense of pictures against adversarial samples. The invention discloses an image classification defense method against deep learning adversarial samples. Based on the observation that, when an adversarial sample successfully attacks a deep neural classification network, certain nodes in the network are often changed decisively, the method exploits the randomness of dropout to train the network multiple times, obtains multiple recognition results, and processes these results reasonably to obtain the final classification result, thereby realizing a defense against adversarial samples.

Description

Deep learning adversarial sample image classification defense method based on dropout
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a deep learning adversarial sample image classification defense method based on dropout.
Background
With the advent of the artificial intelligence era, deep learning techniques have been widely applied to various image classification tasks, such as face recognition, automatic driving, and fault detection. However, just as classical computing has its attack techniques, the artificial intelligence field has attack techniques that generate adversarial samples for deep neural networks. An adversarial sample is hard to detect with the naked eye, yet it can cause a deep neural network to misclassify, which creates serious hidden dangers as deep learning technology is widely applied in social life.
Because of limitations of the deep neural network's structure and the mathematical functions it uses, adding small pixel-level perturbations to an input picture through a specific attack can cause the picture to be misclassified; such perturbed pictures are called adversarial samples. Since the concept of adversarial samples was proposed by Szegedy et al. in 2014, at least a dozen effective attack methods, such as FGSM, BIM, C&W, ANGRI, and UPSET, have been proposed in the image classification field. Meanwhile, defense methods such as Thermometer Encoding and Total Variation Minimization have been proposed against these attacks.
In the process of implementing the present invention, the inventor of the present application found that the methods in the prior art have at least the following technical problems:
First, poor generalization: an existing defense method can only produce a good defense effect against one or two attack methods and has no obvious effect on other attack methods. Second, existing defense methods all target existing attack methods and lack good defense capability against unknown attacks.
It can be seen from the above that the methods of the prior art suffer from poor generalization.
Disclosure of Invention
The invention provides a dropout-based deep learning adversarial sample image classification defense method, which is used to solve, or at least partially solve, the technical problem of poor generalization in existing methods.
In order to solve the above technical problem, the invention provides a dropout-based deep learning adversarial sample image classification defense method, which comprises the following steps:
s1: obtaining an original picture from an open picture data set, dividing the original picture into a training set and a testing set, and taking a deep neural network with dropout operation as a classification network;
s2: training the classification network for multiple times by adopting a training set, and keeping training weight and recognition result obtained by each training;
s3: identifying the pictures in the test set based on a classification network corresponding to the training weight obtained by each training to obtain a plurality of identification results, wherein each identification result corresponds to one training weight;
s4: fitting the training set identification result and the test set identification result to obtain a general network for classification;
s5: and identifying the picture to be identified by using the overall network to obtain a classification result.
In one embodiment, the dropout value of the deep neural network with the dropout operation in S1 is set to a preset value.
In one embodiment, the training set recognition results or the test set recognition results are denoted a_1, a_2, a_3, …, a_j, where the 1st recognition result a_1 corresponds to the 1st training weight w_1 and the j-th recognition result a_j corresponds to the j-th training weight w_j. Each recognition result a_j is an array of the form [p_1j, p_2j, p_3j, …, p_ij], where p_ij is the probability, in recognition result a_j, that the picture belongs to class i.
In one embodiment, S4 specifically includes: fitting the multiple recognition results with a neural network, or fitting them with a classical method, to obtain the overall network for classification.
In one embodiment, fitting the multiple recognition results with a classical method comprises:
adding up the probabilities of each class across all recognition results to obtain a final probability per class, and taking the class with the maximum probability as the final classification result of the picture, computed as:

final class = argmax_i ( p_i1 + p_i2 + … + p_ij )

where p_1j is the probability, in recognition result a_j, that the picture belongs to class 1, and p_ij is the probability, in recognition result a_j, that the picture belongs to class i.
In one embodiment, fitting the multiple recognition results with a neural network comprises:
splicing the training set recognition results and the test set recognition results respectively to generate a new training set and a new test set;
training the constructed neural network with the new training set to obtain a trained neural network, and testing it with the new test set;
and taking the classification network together with the trained neural network as the overall network.
In one embodiment, the training set recognition results and the test set recognition results are respectively spliced as:

a = [a_1, a_2, a_3, …, a_j]

where a_1 is the first recognition result among the training set or test set recognition results, a_j is the j-th recognition result among the training set or test set recognition results, and a is the splicing result of the training set or test set recognition results.
One or more technical solutions in the embodiments of the present application at least have one or more of the following technical effects:
the invention provides a deep learning confrontation sample image classification defense method based on dropout, which comprises the steps of firstly obtaining an original image from an open image data set, dividing the original image into a training set and a testing set, and taking a deep neural network with dropout operation as a classification network; then, training the classification network for multiple times by adopting a training set, and keeping the training weight obtained by each training; then, identifying the pictures in the test set based on a classification network corresponding to the training weight obtained by each training to obtain a plurality of identification results, wherein each identification result corresponds to one training weight; fitting the multiple recognition results to obtain a total network for classification; and finally, identifying the picture to be identified by utilizing the overall network to obtain a final classification result.
Compared with prior-art defenses against adversarial samples, which are usually directed at a very limited set of attack methods, the defense of the invention is independent of the attack type because of the criterion it is based on; it can defend against various existing attacks, and even unknown attacks, with good effect. Moreover, the whole training process needs no adversarial samples and can be completed with normal pictures, so both generalization and the defense effect can be improved.
Further, existing defenses against adversarial samples are often cumbersome and inflexible. The method of the invention is simple, clear, and very flexible: for example, when fitting the multiple recognition results, either a neural network method or a classical method can be adopted, so there are multiple choices that can be adjusted to the actual situation, improving flexibility.
Further, when the constructed neural network is used for the processing, the training data is a simple matrix obtained by splicing the recognition results, so training is fast and consumes few resources.
Furthermore, the overall network obtained from the classification network and the trained neural network can not only defend against adversarial sample attacks but also improve the network's classification accuracy on normal pictures.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic overall flow chart of the dropout-based deep learning adversarial sample image classification defense method of the present invention;
FIG. 2 is a schematic diagram of the overall network structure obtained by the present invention;
FIG. 3 is a schematic diagram of a simple neural network architecture employed in an exemplary embodiment;
FIG. 4 is a schematic diagram of a complex neural network architecture employed in an exemplary embodiment.
Detailed Description
The invention aims to provide a simple, flexible, effective, and strongly generalizing adversarial sample defense for any system that classifies images with a deep neural network that uses dropout. The method is mainly based on an observation about deep neural networks: when an adversarial sample succeeds in attacking a deep neural network, certain nodes in the network are often changed decisively. Based on this observation, the defense exploits the randomness of dropout to defend against adversarial samples, enhancing the randomness of the system through repeated training while preserving classification accuracy. The method is simple, effective, and highly operable; because the assumption it is based on does not depend on the attack type, it generalizes well and can achieve a good defense effect against a variety of attacks.
The technical scheme of the invention is as follows:
a deep learning confrontation sample image classification defense method based on dropout mainly comprises the following three steps: firstly, repeatedly training a deep neural network for classification by using a normal training set picture, and keeping the training weight of each time; secondly, identifying the same picture under all training weights respectively, and keeping identification results respectively; thirdly, the recognition result is operated by using a neural network or other combined screening modes to obtain a final classification result.
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment provides a dropout-based deep learning adversarial sample image classification defense method, which comprises the following steps:
s1: obtaining an original picture from an open picture data set, dividing the original picture into a training set and a testing set, and taking a deep neural network with dropout operation as a classification network;
s2: training the classification network for multiple times by adopting a training set, and keeping training weight and recognition result obtained by each training;
s3: identifying the pictures in the test set based on a classification network corresponding to the training weight obtained by each training to obtain a plurality of identification results, wherein each identification result corresponds to one training weight;
s4: fitting the training set identification result and the test set identification result to obtain a general network for classification;
s5: and identifying the picture to be identified by using the overall network to obtain a classification result.
Specifically, the public picture data set in S1 can be selected as needed, for example CIFAR-10, and the deep neural network with a dropout operation can also be selected as needed, for example VGG-16. S2 trains the classification network multiple times, and the number of trainings can be set flexibly according to requirements; specifically, the classification network can be trained repeatedly j times with the training set divided from the original pictures, and the training weights are kept as w_1, w_2, w_3, …, w_j respectively.
S3 performs multiple recognitions, that is, the test set pictures are recognized separately under each set of trained weights to obtain the corresponding recognition results.
S4 finally processes all the recognition results to obtain the overall network for classification.
S5 uses the obtained overall network to recognize actual pictures and obtain the final classification result.
It should be noted that, in S2 and S4, the network only needs to be trained by using the normal pictures in the training set.
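As an illustration of steps S1 to S3, the following is a minimal PyTorch sketch; it assumes CIFAR-10 as the public picture data set and substitutes a small dropout CNN for a full VGG-16-style classification network, and the names DropoutCNN, train_once and recognize are illustrative rather than part of the invention.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

class DropoutCNN(nn.Module):
    """Small classification network containing a dropout operation (step S1)."""
    def __init__(self, p: float = 0.5, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.drop = nn.Dropout(p)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = self.drop(torch.flatten(x, 1))
        return self.fc(x)

def train_once(train_loader, epochs=1, device="cpu"):
    """One training run; dropout randomness makes each run's weights differ (step S2)."""
    model = DropoutCNN().to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in train_loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(x.to(device)), y.to(device))
            loss.backward()
            opt.step()
    return model.state_dict()          # keep this run's training weights w_j

@torch.no_grad()
def recognize(weights, loader, device="cpu"):
    """Recognition result a_j: class probabilities for every picture under weights w_j (step S3)."""
    model = DropoutCNN().to(device)
    model.load_state_dict(weights)
    model.eval()
    probs = [F.softmax(model(x.to(device)), dim=1).cpu() for x, _ in loader]
    return torch.cat(probs)            # shape: (num_pictures, num_classes)

if __name__ == "__main__":
    tf = transforms.ToTensor()
    train_set = datasets.CIFAR10("data", train=True, download=True, transform=tf)
    test_set = datasets.CIFAR10("data", train=False, download=True, transform=tf)
    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=256)

    j = 5                                                        # number of repeated trainings
    weight_list = [train_once(train_loader) for _ in range(j)]   # w_1 ... w_j
    results = [recognize(w, test_loader) for w in weight_list]   # a_1 ... a_j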
FIG. 1 is the overall flow diagram of the present invention and illustrates one embodiment of defending a deep learning classification network against adversarial sample attacks. The defense starts by training the deep neural classification network multiple times and keeping the weights, then recognizing with each set of weights separately, and finally fitting the multiple recognition results reasonably, with a classical method or a neural network, to obtain the final result. Apart from requiring a dropout operation, the invention places no other requirements on the initial classification network, so this part is variable: any classification network that meets the condition can be processed by the method, which improves generalization.
In one embodiment, the dropout value of the deep neural network with the dropout operation in S1 is set to a preset value.
Specifically, to improve the randomness of dropout, all dropout values in the network are set to an appropriate value p without substantially affecting network performance. The specific attack method, denoted Attack, does not affect the implementation of the defense method.
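As an illustration of setting every dropout value in the network to the same value p, the following minimal sketch assumes a torchvision VGG-16 is used as the classification network; the helper name set_dropout is hypothetical.

import torch.nn as nn
from torchvision.models import vgg16

def set_dropout(model: nn.Module, p: float) -> nn.Module:
    """Set every nn.Dropout layer in the model to the same rate p."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p
    return model

# e.g. a VGG-16 classifier for a 10-class picture data set, with every dropout rate set to 0.5
classifier = set_dropout(vgg16(num_classes=10), p=0.5)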
In one embodiment, the training set recognition results or the test set recognition results are denoted a_1, a_2, a_3, …, a_j, where the 1st recognition result a_1 corresponds to the 1st training weight w_1 and the j-th recognition result a_j corresponds to the j-th training weight w_j. Each recognition result a_j is an array of the form [p_1j, p_2j, p_3j, …, p_ij], where p_ij is the probability, in recognition result a_j, that the picture belongs to class i.
In one embodiment, S4 specifically includes: fitting the multiple recognition results with a neural network, or fitting them with a classical method, to obtain the overall network for classification.
In one embodiment, fitting the multiple recognition results with a classical method comprises:
adding up the probabilities of each class across all recognition results to obtain a final probability per class, and taking the class with the maximum probability as the final classification result of the picture, computed as:

final class = argmax_i ( p_i1 + p_i2 + … + p_ij )

where p_1j is the probability, in recognition result a_j, that the picture belongs to class 1, and p_ij is the probability, in recognition result a_j, that the picture belongs to class i.
Specifically, if a classical method is used to process all the recognition results, a classical model-fusion scheme can be applied.
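A minimal sketch of such a fusion, assuming a list results of per-run probability tensors (one tensor of shape (num_pictures, num_classes) per training run, as produced in the earlier sketch):

import torch

def classical_fusion(results):
    """Add the class probabilities of all j recognition results per picture and take the argmax."""
    summed = torch.stack(results).sum(dim=0)   # for each picture: sum of p_ij over j, per class i
    return summed.argmax(dim=1)                # class with the largest summed probability

# final_labels = classical_fusion(results)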
In one embodiment, fitting the multiple recognition results with a neural network comprises:
splicing the training set recognition results and the test set recognition results respectively to generate a new training set and a new test set;
training the constructed neural network with the new training set to obtain a trained neural network, and testing it with the new test set;
and taking the classification network together with the trained neural network as the overall network.
In particular, the neural network may be a simple network or a complex network.
1. Fitting with a simpler neural network. A simple neural network can be designed as a stack of a small number of convolutional layers, generally fewer than ten, tuned to the specific situation. Experiments show that a simpler neural network can already achieve a good defense result, but its performance is unstable when the number of trainings j changes. The specific steps are as follows:
First, the training set recognition results are spliced; that is, the j recognition results of each picture are spliced into a j x i probability matrix, forming the new training set,

a = [a_1, a_2, a_3, …, a_j]

which is then used as the input for training the subsequent simple network.
Next, the test set recognition results are spliced in the same way to generate the new test set, and the constructed simple neural network is tested with it.
2. Fitting with a more complex neural network. A complex neural network has more layers than a simple one and more complex layer operations, such as pooling and normalization. Experiments show that a more complex neural network achieves a better defense result, and its performance remains stable when the number of trainings j changes. The specific steps are as follows:
As with the simple network, the training set recognition results are first spliced to generate the new training set, which is used to train the constructed complex neural network. The test set recognition results are then spliced to generate the corresponding new test set, which is used to test the constructed complex neural network.
The neural network used for result processing and the classification network together form the final overall classification network.
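The following is a minimal sketch of this neural-network fusion, assuming recognition results have been produced for both the training set and the test set in the same way as in the earlier sketches; the j recognition results of each picture are stacked into a j x i probability matrix and fed to a small fusion network of the kind FIG. 3 describes. The names SimpleFusionNet and stack_results are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFusionNet(nn.Module):
    """A few convolutional layers over the j x i probability matrix, then a linear classifier."""
    def __init__(self, j: int, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv1d(j, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(16, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * num_classes, num_classes)

    def forward(self, x):                    # x: (batch, j, num_classes)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.fc(torch.flatten(x, 1))  # final class scores

def stack_results(results):
    """Splice a_1 .. a_j into the new data set: one j x i probability matrix per picture."""
    return torch.stack(results, dim=1)       # shape: (num_pictures, j, num_classes)

# new_train = stack_results(train_results)   # train SimpleFusionNet(j=len(train_results)) on these
# new_test  = stack_results(test_results)    # matrices, with the original picture labels as targets.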
Some parameters of the invention need to be adjusted flexibly according to the actual situation: the dropout value (p) of the classification network in S1 can be adjusted, the number of trainings (j) in S2 can be adjusted, and the processing scheme in S4 can be redesigned as needed.
FIG. 2 is a diagram of the overall classification network architecture and illustrates one embodiment in which a neural network is used to defend against adversarial sample attacks. For the picture classification task, a suitable deep neural network containing a dropout operation is selected as the classification network. The classification network is then repeatedly trained with normal training set pictures, keeping the weights of each training. The pictures are then recognized under each set of training weights, and each recognition result is kept. Finally, the multiple recognition results are spliced into a matrix that serves as the input of the subsequent neural network, yielding the final classification result.
FIG. 3 is an exemplary diagram of a simple neural network structure, formed by stacking a few convolutional layers (generally fewer than ten, adjusted according to the actual situation). This network provides a good defense effect, but its performance is unstable when the dimension of the input data changes.
FIG. 4 is an exemplary diagram of a complex neural network structure, which generally has more layers and more complex layer operations, determined according to the actual situation. This network provides a better defense effect, and its performance remains stable when the dimension of the input data changes.
To counter adversarial sample attacks in the field of deep learning image classification, the randomness of dropout can be exploited to achieve an effective defense of pictures against adversarial samples. The invention discloses an image classification defense method against deep learning adversarial samples. Based on the observation that, when an adversarial sample successfully attacks a deep neural classification network, certain nodes in the network are often changed decisively, the method exploits the randomness of dropout to train the network multiple times, obtains multiple recognition results, processes these results reasonably to obtain an overall network, and finally classifies the pictures to be recognized with the overall network to obtain the final classification result, thereby realizing a defense against adversarial samples.
The advantages of the invention mainly include:
(1) Existing defenses against adversarial samples are usually directed at a very limited set of attack methods and generalize poorly; for example, Thermometer Encoding is mainly directed at the LS-PGA attack, and there is no method that defends broadly and effectively. Because of the criterion it is based on, the present invention is independent of the attack type: it can defend against various existing attacks, and even unknown attacks, with good effect, and the whole training process needs no adversarial samples and can be completed with normal pictures.
(2) Existing defenses against adversarial samples are often cumbersome and inflexible. The invention is simple, clear, and very flexible; for example, there are multiple choices for the processing stage in step S4, which can be adjusted to the actual situation, and when the constructed neural network is used for processing, training is fast and consumes few resources because the training data is a simple matrix.
(3) The method can not only defend against adversarial sample attacks but also improve the network's classification accuracy on normal pictures.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the spirit or scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass these modifications and variations.

Claims (3)

1. A dropout-based deep learning adversarial sample image classification defense method, characterized by comprising the following steps:
s1: obtaining an original picture from an open picture data set, dividing the original picture into a training set and a testing set, and taking a deep neural network with dropout operation as a classification network;
s2: training the classification network for multiple times by adopting a training set, and keeping training weight and recognition result obtained by each training;
s3: identifying the pictures in the test set based on a classification network corresponding to the training weight obtained by each training to obtain a plurality of identification results, wherein each identification result corresponds to one training weight, each picture in the test set is identified under all the training weights, and each identification result is reserved;
the training set recognition results or the test set recognition results are denoted a_1, a_2, a_3, …, a_j, where the 1st recognition result a_1 corresponds to the 1st training weight w_1 and the j-th recognition result a_j corresponds to the j-th training weight w_j, and each recognition result a_j is an array of the form [p_1j, p_2j, p_3j, …, p_ij], where p_ij is the probability, in recognition result a_j, that the picture belongs to class i;
s4: fitting the training set identification result and the test set identification result to obtain a general network for classification;
s5: identifying the picture to be identified by utilizing a general network to obtain a classification result;
wherein S4 specifically includes: fitting the multiple recognition results with a neural network, or fitting them with a classical method, to obtain the overall network for classification, the neural network being a simple network or a complex network;
fitting the multiple recognition results with a neural network comprises:
splicing the training set recognition results and the test set recognition results respectively to generate a new training set and a new test set;
training the constructed neural network with the new training set to obtain a trained neural network, and testing it with the new test set;
and taking the classification network together with the trained neural network as the overall network;
wherein the training set recognition results and the test set recognition results are respectively spliced as:

a = [a_1, a_2, a_3, …, a_j]

where a_1 is the first recognition result among the training set or test set recognition results, a_j is the j-th recognition result among the training set or test set recognition results, and a is the splicing result of the training set or test set recognition results, the splicing result being a probability matrix.
2. The method of claim 1, wherein a dropout value of the deep neural network with a dropout operation in S1 is set to a preset value.
3. The method of claim 1, wherein fitting the multiple recognition results with a classical method comprises:
adding up the probabilities of each class across all recognition results to obtain a final probability per class, and taking the class with the maximum probability as the final classification result of the picture, computed as:

final class = argmax_i ( p_i1 + p_i2 + … + p_ij )

where p_1j is the probability, in recognition result a_j, that the picture belongs to class 1, and p_ij is the probability, in recognition result a_j, that the picture belongs to class i.
CN202010433696.2A 2020-05-21 2020-05-21 Deep learning adversarial sample image classification defense method based on dropout Expired - Fee Related CN111666985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010433696.2A CN111666985B (en) 2020-05-21 2020-05-21 Deep learning adversarial sample image classification defense method based on dropout

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010433696.2A CN111666985B (en) 2020-05-21 2020-05-21 Deep learning adversarial sample image classification defense method based on dropout

Publications (2)

Publication Number Publication Date
CN111666985A CN111666985A (en) 2020-09-15
CN111666985B true CN111666985B (en) 2022-10-21

Family

ID=72384206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010433696.2A Expired - Fee Related CN111666985B (en) 2020-05-21 2020-05-21 Deep learning adversarial sample image classification defense method based on dropout

Country Status (1)

Country Link
CN (1) CN111666985B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102601761B1 (en) * 2022-12-28 2023-11-13 Korea Advanced Institute of Science and Technology (KAIST) Method and apparatus of revising a deep neural network for adversarial examples

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344759A (en) * 2018-06-12 2019-02-15 北京理工大学 A kind of relatives' recognition methods based on angle loss neural network
CN110276248A (en) * 2019-05-10 2019-09-24 杭州电子科技大学 A kind of facial expression recognizing method based on sample weights distribution and deep learning

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100595782C (en) * 2008-04-17 2010-03-24 中国科学院地理科学与资源研究所 Classification method for syncretizing optical spectrum information and multi-point simulation space information
US10848508B2 (en) * 2016-09-07 2020-11-24 Patternex, Inc. Method and system for generating synthetic feature vectors from real, labelled feature vectors in artificial intelligence training of a big data machine to defend
CN108549940B (en) * 2018-03-05 2021-10-29 浙江大学 Intelligent defense algorithm recommendation method and system based on multiple counterexample attacks
CN110907909B (en) * 2019-10-30 2023-09-12 南京市德赛西威汽车电子有限公司 Radar target identification method based on probability statistics
CN110863935B (en) * 2019-11-19 2020-09-22 上海海事大学 Method for identifying attached matters of blades of ocean current machine based on VGG16-SegUnet and dropout

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109344759A (en) * 2018-06-12 2019-02-15 北京理工大学 A kind of relatives' recognition methods based on angle loss neural network
CN110276248A (en) * 2019-05-10 2019-09-24 杭州电子科技大学 A kind of facial expression recognizing method based on sample weights distribution and deep learning

Also Published As

Publication number Publication date
CN111666985A (en) 2020-09-15

Similar Documents

Publication Publication Date Title
CN106776842B (en) Multimedia data detection method and device
Jeong et al. Ood-maml: Meta-learning for few-shot out-of-distribution detection and classification
Yang et al. Benchmarking attribution methods with relative feature importance
CN108737406B (en) Method and system for detecting abnormal flow data
Lerch-Hostalot et al. Unsupervised steganalysis based on artificial training sets
CN106415594B (en) Method and system for face verification
CN111507370A (en) Method and device for obtaining sample image of inspection label in automatic labeling image
CN111598182B (en) Method, device, equipment and medium for training neural network and image recognition
CN108710893B (en) Digital image camera source model classification method based on feature fusion
CN111783505A (en) Method and device for identifying forged faces and computer-readable storage medium
CN114187311A (en) Image semantic segmentation method, device, equipment and storage medium
CN111062036A (en) Malicious software identification model construction method, malicious software identification medium and malicious software identification equipment
Li et al. Image manipulation localization using attentional cross-domain CNN features
CN114330499A (en) Method, device, equipment, storage medium and program product for training classification model
CN112765607A (en) Neural network model backdoor attack detection method
CN114821282A (en) Image detection model and method based on domain confrontation neural network
CN113705596A (en) Image recognition method and device, computer equipment and storage medium
CN113011307A (en) Face recognition identity authentication method based on deep residual error network
CN111666985B (en) Deep learning confrontation sample image classification defense method based on dropout
CN114758199A (en) Training method, device, equipment and storage medium for detection model
CN114330650A (en) Small sample characteristic analysis method and device based on evolutionary element learning model training
KR20200038072A (en) Entropy-based neural networks partial learning method and system
CN117475253A (en) Model training method and device, electronic equipment and storage medium
CN115861306B (en) Industrial product abnormality detection method based on self-supervision jigsaw module
CN117058716A (en) Cross-domain behavior recognition method and device based on image pre-fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee (granted publication date: 20221021)