CN112270368A - Image classification method based on misclassification perception regularization training - Google Patents

Image classification method based on misclassification perception regularization training

Info

Publication number
CN112270368A
Authority
CN
China
Prior art keywords
training
sample
image
image classification
classification model
Prior art date
Legal status
Pending
Application number
CN202011222382.4A
Other languages
Chinese (zh)
Inventor
张道强
徐梦婷
张涛
李仲年
邵伟
Current Assignee
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202011222382.4A
Publication of CN112270368A
Legal status: Pending


Classifications

    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N20/20 Ensemble learning
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods


Abstract

The invention discloses an image classification method based on misclassification perception regularization training, which comprises a training stage and a classification stage. The training stage comprises: 1. establishing an image classification model based on a neural network and training the image classification model with a training set; 2. constructing an adversarial sample for each training sample in the training set to obtain an adversarial sample set; 3. carrying out misclassification perception regularization training on the trained image classification model with the adversarial samples; 4. repeatedly executing steps S2 and S3 in sequence for epoch times, where epoch is the preset number of regularization training rounds, to obtain the finally trained image classification model. The classification stage comprises: 5. inputting the image to be classified into the finally trained image classification model, whose output is the class label of the target in the image to be classified. By adopting different training modes for correctly classified samples and misclassified samples, the method improves the verifiable robustness of the classification model.

Description

Image classification method based on misclassification perception regularization training
Technical Field
The invention belongs to the technical field of image classification, and particularly relates to an image classification method for improving verifiable robustness.
Background
Machine vision generally requires determining the type of the object in an image and then processing images of different types separately. With the development of deep learning, neural-network-based models are now generally adopted to classify images. Although neural networks have achieved wide success in tasks such as image classification, speech recognition and computer-aided disease diagnosis, they have the disadvantage of lacking robustness. For example, a visually imperceptible adversarial image (adversarial example) can easily mislead a trained network, and adversarial examples are effective not only in the digital space but also exist in the physical world, which affects the reliability of neural-network-based classification models.
In view of the importance of adversarial robustness for neural networks, a number of defense approaches are currently available. For example, adversarial training can be regarded as a data-augmentation technique that trains a neural network on adversarial examples; it is effective against the strongest known attacks (e.g., C&W attacks), but it provides no strong guarantee of robustness: it cannot prove that no other attack can cause the model to misclassify. To address this lack of a robustness guarantee, recent verifiable-defense work aims to certify that no perturbation within a specified region can change the network's prediction. The literature: Balunovic, Mislav, and Martin Vechev, "Adversarial training and provable defenses: Bridging the gap," International Conference on Learning Representations, 2019, combines adversarial training and verifiable defense methods to train a neural network, giving the model very high verifiable robustness and accuracy. The classification model accuracy is defined as follows:
\mathrm{accuracy} = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\big[\arg\max_{k} h(x_i)_k = y_i\big]

where n is the total number of samples, and the indicator \mathbb{1}\big[\arg\max_{k} h(x_i)_k = y_i\big] is 1 if the label with the largest probability output by the classification model for the i-th sample x_i equals its original correct label y_i, and 0 otherwise. The verifiable robustness is defined as follows:
\mathrm{verifiable\ robustness} = \frac{1}{n}\sum_{i=1}^{n} \mathbb{1}\big[\arg\max_{k} h(x_i)_k = y_i \;\wedge\; \mathrm{robust}(x_i)\big]

where the indicator \mathbb{1}[\cdot] returns 1 when the sample x_i is classified correctly and passes the robustness test \mathrm{robust}(x_i), and returns 0 otherwise.
Verifiable robustness is thus defined on the premise that the sample is correctly classified. In the training process of the above document, correctly classified samples and misclassified samples are trained in the same way; in the testing process, however, the verifiable robustness of the model is computed only over correctly classified samples, so the influence of misclassified samples on the verifiable robustness of the model is not taken into account.
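The two metrics above can be computed directly from per-sample predictions once a certification routine is available. The following is a minimal Python (PyTorch) sketch, assuming a classifier model and a user-supplied robust(model, x, y, eps) verification routine (for example, a certified-bound checker such as the one used in the cited COLT work); the function names and signatures here are illustrative assumptions, not part of the original disclosure.

import torch

def evaluate(model, loader, robust, eps):
    """Compute standard accuracy and verifiable robustness over a test set.

    robust(model, x, y, eps) is assumed to return True when the prediction
    for x can be certified not to change anywhere in the eps-ball around x.
    """
    model.eval()
    n, n_correct, n_verified = 0, 0, 0
    for x, y in loader:
        with torch.no_grad():
            pred = model(x).argmax(dim=1)              # label with the largest probability
        for xi, yi, pi in zip(x, y, pred):
            n += 1
            if pi.item() == yi.item():                 # indicator for standard accuracy
                n_correct += 1
                # verifiable robustness counts samples that are both correctly
                # classified and certified robust
                if robust(model, xi.unsqueeze(0), yi.item(), eps):
                    n_verified += 1
    return n_correct / n, n_verified / n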
Disclosure of Invention
Purpose of the invention: the invention aims to provide an image classification method capable of improving verifiable robustness, which adopts different training modes for correctly classified samples and misclassified samples and thereby improves the verifiable robustness of the classification model.
The technical scheme adopted by the invention is as follows:
the image classification method based on the misclassification perception regularization training comprises a training stage and a classification stage, wherein the training stage comprises the following steps:
s1, establishing an image classification model based on a neural network, wherein the image classification model is used for classifying targets in an input image to obtain target class labels;
training the image classification model by adopting a training set;
s2, constructing an adversarial sample for each training sample in the training set to obtain an adversarial sample set;
s3, carrying out misclassification perception regularization training on the trained image classification model using the adversarial samples;
s4, repeatedly executing steps S2 and S3 in sequence for epoch times, where epoch is the preset number of regularization training rounds, to obtain the finally trained image classification model;
the classification phase comprises:
and S5, inputting the image to be classified into the finally trained image classification model, wherein the output is the class label of the target in the image.
In step S1, a stochastic gradient descent algorithm is used to train the image classification model. To improve the fitting ability of the model, the initial learning rate is set to 0.03 and is updated to 0.015 after m iterations, where m is a preset iteration-count threshold.
In step S2, a PGD attack is used to construct an adversarial sample {x', y} for a training sample {x, y}, where x is a training sample image with class label y and x' is the adversarial sample image corresponding to x; the construction specifically comprises the following steps:
s21, perturbing x to obtain an initial adversarial sample x'_0 = x + Uniform(−ε, +ε), where ε is a preset maximum perturbation range and Uniform(−ε, +ε) denotes a sample drawn from the uniform distribution over (−ε, +ε); initializing the current iteration index t to 0;
s22, carrying out the t-th iteration:

x'_{t+1} = \mathrm{Clip}_{x,\epsilon}\Big( x'_t + \alpha \cdot \mathrm{sign}\big( \nabla_{x'_t}\, l_{CE}( h(x'_t), y ) \big) \Big)

where Clip_{x,ε}(·) performs pixel-by-pixel clipping of the sample x'_t into the ε-neighborhood of x, sign(·) extracts the sign of a real number, α is a preset step size, h(x'_t) is the prediction probability vector of the image classification model h trained in step S1 for x'_t, l_CE(·) is the cross-entropy loss function, and ∇_{x'_t} denotes taking the gradient with respect to x'_t;
s23, if t < n, let t = t + 1 and re-execute step S22; otherwise, end the iteration; the x'_{n+1} obtained in the last iteration is the adversarial sample image for x.
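The PGD construction in steps S21 to S23 can be sketched in Python (PyTorch) as follows; the function name pgd_attack, the argument n_steps, and the clamping of pixel values to [0, 1] are illustrative assumptions rather than details stated in the original.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, n_steps):
    """Construct adversarial images x' for a batch (x, y), following S21-S23."""
    # S21: random start inside the eps-ball, x'_0 = x + Uniform(-eps, +eps)
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0).detach()

    for _ in range(n_steps):                            # S22/S23: iterations t = 0, 1, ..., n
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)         # l_CE(h(x'_t), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()         # gradient-sign step of size alpha
            # Clip_{x,eps}: project back into the eps-ball around x, pixel by pixel
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv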
The step S3 specifically includes:
s31, taking the training sample image x and the adversarial sample image x' respectively as inputs of the image classification model, and calculating the misclassification perception regularization training loss function L_{MAAR}(θ):

L_{MAAR}(\theta) = \beta \cdot \mathrm{NAT}\big( h_\theta(x), y \big) + \mathrm{ADV}\big( h_\theta(x'), y \big) + \lambda \cdot \mathrm{KL}\big( h_\theta(x) \,\|\, h_\theta(x') \big) \cdot \big( 1 - h_\theta(x)_y \big)

where NAT(h_θ(x), y) is the loss between the output of the image classification model and the class label y when the training sample image is the input, and β is the weight of this training-sample loss; ADV(h_θ(x'), y) is the loss between the output of the image classification model and the class label y when the adversarial sample image is the input; λ is the weight of the misclassification perception regularization term; θ denotes the current parameters of the image classification model h.
KL(h_θ(x)‖h_θ(x'))·(1 − h_θ(x)_y) is the regularization term used to improve the verifiable robustness of the image classification model, where

\mathrm{KL}\big( h_\theta(x) \,\|\, h_\theta(x') \big) = \sum_{k=1}^{K} h_\theta(x)_k \log \frac{h_\theta(x)_k}{h_\theta(x')_k}

measures the difference between the training sample and the adversarial sample, K is the number of sample classes, x is the training sample, x' is the adversarial sample corresponding to x, and h_θ(x)_k is the probability that sample x belongs to label k; (1 − h_θ(x)_y) is the weight of the KL(·) term, in which h_θ(x)_y is the probability that sample x belongs to its label y, so that the weight is close to 0 when the sample is correctly classified and close to 1 when the sample is misclassified;
s32, updating the parameters of the image classification model:

\theta_m = \theta - \eta \cdot \nabla_\theta L_{MAAR}(\theta)

where η is the model parameter update step size, ∇_θ L_{MAAR}(θ) is the gradient of the misclassification perception regularization training loss function at the current model parameters θ, and θ_m is the updated image classification model parameters.
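A minimal Python (PyTorch) sketch of steps S31 and S32 is given below, under the reconstruction of the loss written above. Because the original formula images only state that NAT and ADV are losses between the model output and the label y, both are implemented here as cross-entropy; that choice, together with the function names, is an assumption of this sketch rather than a detail fixed by the disclosure.

import torch
import torch.nn.functional as F

def maar_loss(model, x, x_adv, y, beta, lam):
    """Misclassification perception regularization loss, one possible reading of S31."""
    logits_nat = model(x)
    logits_adv = model(x_adv)
    p_nat = F.softmax(logits_nat, dim=1)                 # h_theta(x)
    p_adv = F.softmax(logits_adv, dim=1)                 # h_theta(x')

    nat = F.cross_entropy(logits_nat, y)                 # NAT(h_theta(x), y), assumed cross-entropy
    adv = F.cross_entropy(logits_adv, y)                 # ADV(h_theta(x'), y), assumed cross-entropy

    # KL(h_theta(x) || h_theta(x')) per sample, summed over the K classes
    kl = (p_nat * (p_nat.clamp_min(1e-12).log() - p_adv.clamp_min(1e-12).log())).sum(dim=1)
    # misclassification perception weight (1 - h_theta(x)_y): close to 0 if the
    # sample is correctly classified, close to 1 if it is misclassified
    weight = 1.0 - p_nat.gather(1, y.unsqueeze(1)).squeeze(1)

    return beta * nat + adv + lam * (kl * weight).mean()

# S32: one parameter update theta_m = theta - eta * grad, e.g. with plain SGD:
# optimizer = torch.optim.SGD(model.parameters(), lr=eta)
# loss = maar_loss(model, x, x_adv, y, beta=1.0, lam=6.0)   # beta illustrative; lambda = 6 per the preferred value
# optimizer.zero_grad(); loss.backward(); optimizer.step()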
The image classification model may be: a four-layer convolutional network, in which the first 3 layers are convolutional layers and the 4th layer is a first fully-connected layer consisting of 250 hidden units; the numbers of filters of the 3 convolutional layers are 32, 32 and 128, the kernel sizes are 3, 3 and 4, and the strides are 1, 2 and 2, respectively; each of the 3 convolutional layers and the first fully-connected layer is followed by a ReLU activation layer; the last layer is a second fully-connected layer with 10 output neurons.
The image classification model may also be: a three-layer convolutional network, in which the first 2 layers are convolutional layers and the 3rd layer is a fully-connected layer; the kernel sizes of the 2 convolutional layers are 5 and 4, respectively, and the stride is 2; each convolutional layer and the fully-connected layer is followed by a ReLU activation function layer.
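Minimal Python (PyTorch) sketches of the two optional architectures are given below, assuming 3-channel 32×32 inputs for the four-layer network and 1-channel 28×28 inputs for the three-layer network (matching the CIFAR-10 and MNIST embodiments described later). The padding values, the flattened feature sizes, the filter counts of the three-layer network, and leaving the final layer's output as raw class scores are assumptions needed to make the shapes work, as the original does not specify them.

import torch.nn as nn

class FourLayerNet(nn.Module):
    """Four-layer network: 3 conv layers + FC(250) + FC(10), per the first option."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 128, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 250), nn.ReLU(),   # first fully-connected layer, 250 hidden units
            nn.Linear(250, num_classes),              # second fully-connected layer, 10 outputs
        )

    def forward(self, x):
        return self.classifier(self.features(x))

class ThreeLayerNet(nn.Module):
    """Three-layer network: 2 conv layers + 1 fully-connected layer, per the second option."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),   # 16 filters assumed
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 32 filters assumed
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 7 * 7, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))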
Preferably, the maximum perturbation range ε has a value of [value given as an image formula in the original].
Preferably, the misclassification perception regularization term weight λ is 6.
Beneficial effects: the invention discloses an image classification method based on misclassification perception regularization training. By training correctly classified samples and misclassified samples in different ways, the verifiable robustness of the image classification model is improved, and the reliability of the model is increased.
Drawings
FIG. 1 is a flowchart of the image classification method based on misclassification perception regularization training disclosed in the present invention;
FIG. 2 is a graph of the impact of different misclassification perception regularization term weights on accuracy and verifiable robustness;
FIG. 3 is a graph of the effect of different maximum perturbation ranges on verifiable robustness.
Detailed Description
The invention is further elucidated with reference to the drawings and the detailed description.
Example 1:
as shown in fig. 1, the invention discloses an image classification method based on misclassification perception regularization training, which comprises a training stage and a classification stage, wherein the training stage comprises:
s1, establishing an image classification model based on a neural network, wherein the image classification model is used for classifying targets in an input image to obtain target class labels;
in this embodiment, the image classification model is a 4-layer convolutional network: the first 3 layers are convolutional layers with 32, 32 and 128 filters, kernel sizes 3, 3 and 4, and strides 1, 2 and 2, respectively. The convolutional layers are followed by a fully-connected layer consisting of 250 hidden units. Each of these layers is followed by a ReLU activation layer; the last layer is a fully-connected layer with 10 output neurons.
Training the image classification model with a training set: in this embodiment, CIFAR-10 is used as the training set, and SGD (stochastic gradient descent) with momentum 0.9 is used to train the image classification model. In the initial stage of training the fitting ability of the model is poor, so the learning rate is set to 0.03; after m iterations the learning rate is halved to 0.015, so that the model oscillates within a smaller region near the optimum as it converges. m is a preset iteration-count threshold; in this embodiment m = 60.
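The training setup of this embodiment can be sketched in Python (PyTorch) as follows; the batch size and the interpretation of m = 60 as 60 training epochs are assumptions of this sketch, and FourLayerNet refers to the four-layer network sketched earlier in this document.

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# CIFAR-10 training data; ToTensor scales pixel values into [0, 1]
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)   # batch size assumed

model = FourLayerNet()   # four-layer network sketched earlier; any nn.Module classifier works here

# SGD with momentum 0.9, initial learning rate 0.03,
# halved to 0.015 after m = 60 rounds (interpreted here as epochs)
optimizer = torch.optim.SGD(model.parameters(), lr=0.03, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60], gamma=0.5)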
S2, constructing an adversarial sample for each training sample in the training set to obtain an adversarial sample set;
in step S2, a PGD attack is used to construct an adversarial sample {x', y} for a training sample {x, y}, where x is a training sample image with class label y and x' is the adversarial sample image corresponding to x; the construction specifically comprises the following steps:
s21, perturbing x to obtain an initial adversarial sample x'_0 = x + Uniform(−ε, +ε), where ε is a preset maximum perturbation range and Uniform(−ε, +ε) denotes a sample drawn from the uniform distribution over (−ε, +ε); initializing the current iteration index t to 0;
s22, carrying out the t-th iteration:

x'_{t+1} = \mathrm{Clip}_{x,\epsilon}\Big( x'_t + \alpha \cdot \mathrm{sign}\big( \nabla_{x'_t}\, l_{CE}( h(x'_t), y ) \big) \Big)

where Clip_{x,ε}(·) performs pixel-by-pixel clipping of the sample x'_t into the ε-neighborhood of x, sign(·) extracts the sign of a real number, α is a preset step size, h(x'_t) is the prediction probability vector of the image classification model h trained in step S1 for x'_t, and l_CE(·) is the cross-entropy loss function, which measures the difference between the predicted probability distribution and the true distribution. Since the label y is a scalar, inside l_CE it is first converted into a vector of the same size as h(x'_t), as follows: if y = k, i.e. the original label of the adversarial sample x'_t is the k-th class, the converted vector has the value 1 at the k-th position and 0 at all other positions. ∇_{x'_t} denotes taking the gradient with respect to x'_t;
s23, if t < n, let t = t + 1 and re-execute step S22; otherwise, end the iteration; the x'_{n+1} obtained in the last iteration is the adversarial sample image for x.
S3, carrying out misclassification perception regularization training (MAAR) on the trained image classification model with the adversarial samples, which specifically comprises the following steps:
s31, taking the training sample image x and the adversarial sample image x' respectively as inputs of the image classification model, and calculating the misclassification perception regularization training loss function L_{MAAR}(θ):

L_{MAAR}(\theta) = \beta \cdot \mathrm{NAT}\big( h_\theta(x), y \big) + \mathrm{ADV}\big( h_\theta(x'), y \big) + \lambda \cdot \mathrm{KL}\big( h_\theta(x) \,\|\, h_\theta(x') \big) \cdot \big( 1 - h_\theta(x)_y \big)

where NAT(h_θ(x), y) is the loss between the output of the image classification model and the class label y when the training sample image is the input, and β is the weight of this training-sample loss; ADV(h_θ(x'), y) is the loss between the output of the image classification model and the class label y when the adversarial sample image is the input; λ is the weight of the misclassification perception regularization term; θ denotes the current parameters of the image classification model h.
KL(h_θ(x)‖h_θ(x'))·(1 − h_θ(x)_y) is the regularization term used to improve the verifiable robustness of the image classification model, where

\mathrm{KL}\big( h_\theta(x) \,\|\, h_\theta(x') \big) = \sum_{k=1}^{K} h_\theta(x)_k \log \frac{h_\theta(x)_k}{h_\theta(x')_k}

measures the difference between the training sample and the adversarial sample, K is the number of sample classes, x is the training sample, x' is the adversarial sample corresponding to x, and h_θ(x)_k is the probability that sample x belongs to label k; (1 − h_θ(x)_y) is the weight of the KL(·) term, in which h_θ(x)_y is the probability that sample x belongs to its label y, so that the weight is close to 0 when the sample is correctly classified and close to 1 when the sample is misclassified;
s32, updating the parameters of the image classification model:

\theta_m = \theta - \eta \cdot \nabla_\theta L_{MAAR}(\theta)

where η is the model parameter update step size, ∇_θ L_{MAAR}(θ) is the gradient of the misclassification perception regularization training loss function at the current model parameters θ, and θ_m is the updated image classification model parameters.
S4, repeatedly executing steps S2 and S3 in sequence for epoch times, where epoch is the preset number of regularization training rounds, to obtain the finally trained image classification model;
the classification phase comprises:
and S5, inputting the image to be classified into the finally trained image classification model, wherein the output is the class label of the target in the image.
The value of the misclassification perception regularization term weight λ affects both the accuracy and the verifiable robustness of the classification model. This embodiment explores the influence of different values of λ on the classification result, as shown in Fig. 2, where the solid line is the curve of classification accuracy as λ varies and the dotted line is the curve of verifiable robustness as λ varies. As can be seen from the figure, λ = 6 is the best misclassification perception regularization term weight value.
Example 2:
This embodiment explores the impact of different maximum perturbation ranges ε on verifiable robustness when constructing adversarial samples, and differs from embodiment 1 only in the value of the maximum perturbation range. The embodiment sets the maximum perturbation range ε to a set of candidate values [given as an image formula in the original] and selects the optimal one, comparing against the literature: Balunovic, Mislav, and Martin Vechev, "Adversarial training and provable defenses: Bridging the gap," International Conference on Learning Representations, 2019. The results are shown in Fig. 3, where COLT is the result of the above-mentioned document and MAAR is the result of the method disclosed in the present invention. As can be seen from Fig. 3, under different perturbation magnitudes, the MAAR method disclosed by the present invention achieves higher verifiable robustness than the currently best COLT model. Specifically, at the perturbation range [value given as an image formula in the original], MAAR achieves a verifiable robustness of 62.8%, while COLT achieves only 59.6%.
Example 3:
This embodiment differs from embodiment 1 in the structure of the image classification model and in the training set. The image classification model adopted in this embodiment is a three-layer convolutional network: 2 convolutional layers with kernel sizes of 5 and 4, respectively, and a stride of 2, followed by 1 fully-connected layer; each layer is followed by a ReLU activation function layer. In this embodiment the handwritten digit dataset MNIST is used to train the model, yielding a classification model for handwritten digit images. The comparison with prior methods is shown in Table 1:
TABLE 1
[Table 1: comparison with prior methods [1] to [6]; the table is given as an image in the original.]
In Table 1, the references relating to methods [1] to [6] are as follows:
[1] Balunovic, Mislav, and Martin Vechev. "Adversarial training and provable defenses: Bridging the gap." International Conference on Learning Representations, 2019.
[2] Zhang, H.; Chen, H.; Xiao, C.; Gowal, S.; Stanforth, R.; Li, B.; Boning, D.; and Hsieh, C.-J. 2019a. Towards stable and efficient training of verifiably robust neural networks. arXiv preprint arXiv:1906.06316.
[3] Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J. Zico Kolter. Scaling provable adversarial defenses. In Advances in Neural Information Processing Systems 31, 2018.
[4] Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, and Aleksander Madry. Training for faster adversarial robustness verification via inducing ReLU stability. In International Conference on Learning Representations, 2019.
[5] Matthew Mirman, Gagandeep Singh, and Martin Vechev. A provable defense for deep residual networks. arXiv preprint arXiv:1903.12519, 2019.
[6] Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018a.

Claims (8)

1. An image classification method based on misclassification perception regularization training, comprising a training stage and a classification stage, characterized in that the training stage comprises the following steps:
s1, establishing an image classification model based on a neural network, wherein the image classification model is used for classifying targets in an input image to obtain target class labels;
training the image classification model by adopting a training set;
s2, constructing an adversarial sample for each training sample in the training set to obtain an adversarial sample set;
s3, carrying out misclassification perception regularization training on the trained image classification model using the adversarial samples;
s4, repeatedly executing steps S2 and S3 in sequence for epoch times, where epoch is the preset number of regularization training rounds, to obtain the finally trained image classification model;
the classification phase comprises:
and S5, inputting the image to be classified into the finally trained image classification model, wherein the output is the class label of the target in the image.
2. The image classification method based on misclassification perception regularization training as claimed in claim 1, wherein in step S1 a stochastic gradient descent algorithm is used to train the image classification model, the initial learning rate is 0.03, the learning rate is updated to 0.015 after m iterations, and m is a preset iteration-count threshold.
3. The image classification method based on misclassification perception regularization training as claimed in claim 1, wherein in step S2 a PGD attack is used to construct an adversarial sample {x', y} for a training sample {x, y}, where x is a training sample image with class label y and x' is the adversarial sample image corresponding to x; the construction specifically comprises the following steps:
s21, perturbing x to obtain an initial adversarial sample x'_0 = x + Uniform(−ε, +ε), where ε is a preset maximum perturbation range and Uniform(−ε, +ε) denotes a sample drawn from the uniform distribution over (−ε, +ε); initializing the current iteration index t to 0;
s22, carrying out the t-th iteration:

x'_{t+1} = \mathrm{Clip}_{x,\epsilon}\Big( x'_t + \alpha \cdot \mathrm{sign}\big( \nabla_{x'_t}\, l_{CE}( h(x'_t), y ) \big) \Big)

where Clip_{x,ε}(·) performs pixel-by-pixel clipping of the sample x'_t into the ε-neighborhood of x, sign(·) extracts the sign of a real number, α is a preset step size, h(x'_t) is the prediction probability vector of the image classification model h trained in step S1 for x'_t, l_CE(·) is the cross-entropy loss function, and ∇_{x'_t} denotes taking the gradient with respect to x'_t;
s23, if t < n, let t = t + 1 and re-execute step S22; otherwise, end the iteration; the x'_{n+1} obtained in the last iteration is the adversarial sample image for x.
4. The image classification method based on misclassification perception regularization training as claimed in claim 1, wherein the step S3 specifically comprises:
s31, taking the training sample image x and the adversarial sample image x' respectively as inputs of the image classification model, and calculating the misclassification perception regularization training loss function L_{MAAR}(θ):

L_{MAAR}(\theta) = \beta \cdot \mathrm{NAT}\big( h_\theta(x), y \big) + \mathrm{ADV}\big( h_\theta(x'), y \big) + \lambda \cdot \mathrm{KL}\big( h_\theta(x) \,\|\, h_\theta(x') \big) \cdot \big( 1 - h_\theta(x)_y \big)

where NAT(h_θ(x), y) is the loss between the output of the image classification model and the class label y when the training sample image is the input, and β is the weight of this training-sample loss; ADV(h_θ(x'), y) is the loss between the output of the image classification model and the class label y when the adversarial sample image is the input; λ is the weight of the misclassification perception regularization term; θ denotes the current parameters of the image classification model h;
KL(h_θ(x)‖h_θ(x'))·(1 − h_θ(x)_y) is the regularization term used to improve the verifiable robustness of the image classification model, where

\mathrm{KL}\big( h_\theta(x) \,\|\, h_\theta(x') \big) = \sum_{k=1}^{K} h_\theta(x)_k \log \frac{h_\theta(x)_k}{h_\theta(x')_k}

measures the difference between the training sample and the adversarial sample, K is the number of sample classes, x is the training sample, x' is the adversarial sample corresponding to x, and h_θ(x)_k is the probability that sample x belongs to label k; (1 − h_θ(x)_y) is the weight of the KL(·) term, in which h_θ(x)_y is the probability that sample x belongs to its label y, so that the weight is close to 0 when the sample is correctly classified and close to 1 when the sample is misclassified;
s32, updating the parameters of the image classification model:

\theta_m = \theta - \eta \cdot \nabla_\theta L_{MAAR}(\theta)

where η is the model parameter update step size, ∇_θ L_{MAAR}(θ) is the gradient of the misclassification perception regularization training loss function at the current model parameters θ, and θ_m is the updated image classification model parameters.
5. The image classification method based on misclassification perception regularization training as claimed in claim 1, wherein the image classification model is a four-layer convolutional network, in which the first 3 layers are convolutional layers and the 4th layer is a first fully-connected layer consisting of 250 hidden units; the numbers of filters of the 3 convolutional layers are 32, 32 and 128, the kernel sizes are 3, 3 and 4, and the strides are 1, 2 and 2, respectively; each of the 3 convolutional layers and the first fully-connected layer is followed by a ReLU activation layer; the last layer is a second fully-connected layer with 10 output neurons.
6. The image classification method based on misclassification perception regularization training as claimed in claim 1, wherein the image classification model is a three-layer convolutional network, in which the first 2 layers are convolutional layers and the 3rd layer is a fully-connected layer; the kernel sizes of the 2 convolutional layers are 5 and 4, respectively, and the stride is 2; each convolutional layer and the fully-connected layer is followed by a ReLU activation function layer.
7. The image classification method based on misclassification perception regularization training as claimed in claim 3, wherein the maximum perturbation range ε has a value of [value given as an image formula in the original].
8. The image classification method based on misclassification perception regularization training as claimed in claim 4, wherein the misclassification perception regularization term weight λ is 6.
CN202011222382.4A 2020-11-05 2020-11-05 Image classification method based on misclassification perception regularization training Pending CN112270368A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011222382.4A CN112270368A (en) 2020-11-05 2020-11-05 Image classification method based on misclassification perception regularization training


Publications (1)

Publication Number Publication Date
CN112270368A true CN112270368A (en) 2021-01-26

Family

ID=74345628


Country Status (1)

Country Link
CN (1) CN112270368A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436051A (en) * 2021-06-17 2021-09-24 南京航空航天大学 Image privacy protection method and system based on image countermeasure and computer equipment


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334808A (en) * 2019-06-12 2019-10-15 武汉大学 A kind of confrontation attack defense method based on confrontation sample training
CN111832627A (en) * 2020-06-19 2020-10-27 华中科技大学 Image classification model training method, classification method and system for suppressing label noise

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YISEN WANG et al.: "Improving Adversarial Robustness Requires Revisiting Misclassified Examples", published as a conference paper at ICLR 2020, 30 April 2020 (2020-04-30), pages 1-14 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination