CN113344090B - Image processing method for resisting attack by target in middle layer - Google Patents


Info

Publication number
CN113344090B
CN113344090B (application CN202110676108.2A)
Authority
CN
China
Prior art keywords
classifier
target
iteration
map data
feature map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110676108.2A
Other languages
Chinese (zh)
Other versions
CN113344090A (en)
Inventor
高联丽 (Gao Lianli)
程娅娅 (Cheng Yaya)
宋井宽 (Song Jingkuan)
Current Assignee
Chengdu Jingzhili Technology Co ltd
Original Assignee
Chengdu Jingzhili Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Jingzhili Technology Co., Ltd.
Priority to CN202110676108.2A
Publication of CN113344090A
Application granted
Publication of CN113344090B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image processing method for targeted adversarial attacks on intermediate layers, comprising the following steps: S1, obtaining a target image from a target gallery; S2, inputting the target image into a classifier to obtain the feature map data output by an intermediate layer of the classifier; S3, inputting the adversarial example into the classifier to obtain the feature map data output by the intermediate layer of the classifier; S4, constructing a loss function; S5, updating the adversarial example; S6, normalizing the updated adversarial example; S7, repeating steps S3 to S6 until the number of iteration steps reaches a set value, then inputting the final normalized adversarial example into the classifier, which outputs an incorrect classification result, completing the image processing of the adversarial attack. The method solves the problem of weak attack performance in traditional targeted intermediate-layer attack techniques, where the use of the Euclidean distance unreasonably imposes a spatial consistency constraint on the original and target feature maps.

Description

Image processing method for resisting attack by target in middle layer
Technical Field
The invention relates to the technical field of image processing, and in particular to an image processing method for targeted adversarial attacks on intermediate layers.
Background
Adversarial examples can easily fool a deep neural network merely by adding, to the input picture, a perturbation that is difficult for the human eye to perceive. According to the visibility of the victim model, adversarial attacks can be classified as white-box, gray-box, and black-box attacks, in order of decreasing visibility. According to the attack goal, they can be divided into untargeted and targeted attacks: an untargeted attack only requires the attacker to mislead the model into giving an arbitrary wrong output, whereas an attack in which the attacker induces the model to give one specific wrong output is a targeted attack. For the most challenging setting, the targeted black-box attack, traditional attack techniques based on the model's logits-layer output are not strong enough. To address this, some researchers have proposed intermediate-layer attack techniques that further improve targeted black-box attack performance. As one of the main methods, the targeted intermediate-layer attack, given a target picture, generates the perturbation by reducing the difference between the intermediate-layer feature maps of the original picture and the target picture. When measuring feature map differences, current techniques generally choose the pixel-level Euclidean distance; but because the Euclidean distance unreasonably imposes a spatial consistency constraint on the original and target feature maps, this choice is questionable. Intuitively, given two pictures, one with a cat on the left and one with a cat on the right, a neural network would classify both as "cat", yet the Euclidean distance between the two pictures is large.
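To make the spatial-constraint intuition concrete, here is a small illustration (not part of the patent) using NumPy: two toy feature maps encode the same pattern at shifted positions, so their pixel-wise Euclidean distance is large, while simple translation-invariant statistics agree exactly.

```python
import numpy as np

# Two toy "feature maps": the same activation pattern, shifted one
# column to the right.  Semantically they encode the same object, but
# a pixel-wise Euclidean distance treats them as very different.
a = np.zeros((4, 4))
a[:, 0] = 1.0          # "cat on the left"
b = np.zeros((4, 4))
b[:, 1] = 1.0          # "cat on the right"

euclidean = np.linalg.norm(a - b)   # large: positions disagree
print(euclidean)                    # ≈ 2.828 (= sqrt(8))

# Translation-invariant per-map statistics are identical for both maps,
# which is the intuition behind replacing the Euclidean metric.
print(a.mean() == b.mean(), a.var() == b.var())
```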
Disclosure of Invention
Aiming at the defects in the prior art, the image processing method for targeted intermediate-layer adversarial attacks provided by the invention solves the problem of weak attack performance caused by the unreasonable spatial consistency constraint that the Euclidean distance imposes on the original and target feature maps in traditional targeted intermediate-layer attack techniques.
To achieve the purpose of the invention, the following technical scheme is adopted: an image processing method for targeted intermediate-layer adversarial attacks, comprising the following steps:
S1, obtaining a target category, inputting an original image into a classifier, and obtaining a target image from a target gallery based on the target category;
S2, inputting the target image into the classifier to obtain the feature map data output for the target image by an intermediate layer of the classifier;
S3, inputting the normalized adversarial example of the previous iteration into the classifier to obtain the feature map data output by the intermediate layer of the classifier;
S4, constructing a loss function, and calculating the loss between the feature map data of the target image output by the intermediate layer of the classifier and the feature map data of the previous iteration's normalized adversarial example output by the intermediate layer of the classifier;
S5, calculating the gradient of the loss between the feature map data with respect to the previous iteration's normalized adversarial example, calculating the noise of the current iteration based on the gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain an updated adversarial example;
S6, normalizing the updated adversarial example to obtain the normalized adversarial example of the current iteration;
S7, taking the normalized adversarial example of the current iteration as the previous iteration's normalized adversarial example, repeating steps S3 to S6 until the number of iteration steps reaches a set value, then inputting the final normalized adversarial example into the classifier, which outputs an incorrect classification result, completing the image processing of the adversarial attack.
Further, step S1 comprises the following sub-steps:
S11, obtaining a target category, inputting an original image into the classifier, and obtaining the feature map data $O_l$ output for the original image by intermediate layer $l$ of the classifier, where $O_l \in \mathbb{R}^{N_l \times M_l}$, $N_l$ is the number of channels of intermediate layer $l$, $M_l$ is the length times the width of the feature map data, and $\mathbb{R}$ is the real number space;
S12, obtaining all feature maps of the target category's sub-gallery in the target gallery, and selecting from them the feature map farthest from the feature map data $O_l$ as the target feature map;
S13, taking the image in the target gallery corresponding to the target feature map as the target image.
Further, there are 2 methods of obtaining the target category in step S11. Method 1: randomly select one of all categories as the target category. Method 2: according to the classifier's predictions for the original sample, select the predicted class whose prediction confidence ranks r-th as the target category.
Further, the loss function in step S4 is:

$$\mathcal{L}\big(S_l^{\nu-1}, T_l\big) = \frac{1}{M_l^2} \sum_{i=1}^{M_l} \sum_{j=1}^{M_l} \big[\, k(s_{\cdot i}, s_{\cdot j}) - 2\,k(s_{\cdot i}, t_{\cdot j}) + k(t_{\cdot i}, t_{\cdot j}) \,\big]$$

where $\mathcal{L}(\cdot,\cdot)$ is the loss function, $S_l^{\nu-1}$ is the feature map data output by intermediate layer $l$ of the classifier for the previous iteration's normalized adversarial example in the $\nu$-th iteration, $T_l$ is the feature map data output by intermediate layer $l$ of the classifier for the target image, $s_{\cdot i}$ and $s_{\cdot j}$ are the data of $S_l^{\nu-1}$ at spatial positions $i$ and $j$, $t_{\cdot i}$ and $t_{\cdot j}$ are the data of $T_l$ at spatial positions $i$ and $j$, $M_l$ is the length times the width of the feature map data, and $k(\cdot,\cdot)$ is an auxiliary mapping operation that maps the feature map data from the original space to a complete inner product space.
The beneficial effect of the above further scheme is: through the auxiliary mapping operation, the loss function maps the feature map data from the original space to a complete inner product space, replacing the Euclidean-distance alignment in the original space used by traditional techniques and reducing the spatial constraint brought by the Euclidean metric, thereby better completing the semantic alignment of the feature map data.
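A sketch of a pairwise alignment loss of this family, under the assumption that it takes the standard MMD-style form over all pairs of spatial positions with an exchangeable kernel `k` (the function names here are ours, not the patent's):

```python
import numpy as np

def pairwise_alignment_loss(S, T, kernel):
    # S, T: (N_l, M_l) feature maps, channels x spatial positions, so
    # column i is the feature vector s_.i at spatial position i.
    # kernel(X, Y) returns the M_l x M_l matrix of k(x_.i, y_.j).
    M = S.shape[1]
    K_ss, K_tt, K_st = kernel(S, S), kernel(T, T), kernel(S, T)
    return (K_ss.sum() + K_tt.sum() - 2.0 * K_st.sum()) / M ** 2

def linear_kernel(X, Y):
    return X.T @ Y                  # k(x, y) = x^T y, for all pairs

S = np.random.default_rng(1).standard_normal((4, 6))
print(pairwise_alignment_loss(S, S, linear_kernel))   # identical maps -> 0.0
```

With the linear kernel this reduces algebraically to the squared distance between the spatial means of the two feature maps, which is why no per-position spatial correspondence is enforced.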
Further, the loss function in step S4 is:

$$\mathcal{L}\big(S_l^{\nu-1}, T_l\big) = \frac{1}{N_l M_l} \sum_{n=1}^{N_l} \sum_{m=1}^{M_l} \left\| \frac{(S_l^{\nu-1})_{nm}}{\sqrt{\mathrm{Var}\big((S_l^{\nu-1})_{n\cdot}\big)}} - \frac{(T_l)_{nm}}{\sqrt{\mathrm{Var}\big((T_l)_{n\cdot}\big)}} \right\|^2$$

where $\mathcal{L}(\cdot,\cdot)$ is the loss function, $S_l^{\nu-1}$ is the feature map data output by intermediate layer $l$ of the classifier for the previous iteration's normalized adversarial example in the $\nu$-th iteration, $T_l$ is the feature map data output by intermediate layer $l$ of the classifier for the target image, $\|\cdot\|$ is the two-norm operation, $N_l$ is the number of channels of the feature map data, $M_l$ is the length times the width of the feature map data, $(S_l^{\nu-1})_{nm}$ and $(T_l)_{nm}$ are the data of $S_l^{\nu-1}$ and $T_l$ at position $(n,m)$, and $\mathrm{Var}((S_l^{\nu-1})_{n\cdot})$ and $\mathrm{Var}((T_l)_{n\cdot})$ are the variances of all values of the feature map data $S_l^{\nu-1}$ and $T_l$ on channel $n$.
The beneficial effect of the above further scheme is: the loss function uses global statistics to replace the pixel-level Euclidean distance measurement; it has translation invariance and, while narrowing the statistical difference, reduces the spatial constraint to a certain extent.
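A sketch of a global-statistics loss consistent with the quantities named above (per-channel variance normalization followed by a mean-squared comparison); this is our reading of the formula images, not a verbatim transcription:

```python
import numpy as np

def global_alignment_loss(S, T):
    # S, T: (N_l, M_l) feature maps.  Each channel (row) is divided by
    # its own standard deviation before a mean-squared comparison --
    # one reading of the variance-normalised "global statistics" loss.
    N, M = S.shape
    S_n = S / np.sqrt(S.var(axis=1, keepdims=True))
    T_n = T / np.sqrt(T.var(axis=1, keepdims=True))
    return float(((S_n - T_n) ** 2).sum() / (N * M))

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 8))
print(global_alignment_loss(A, A))          # identical maps -> 0.0
print(global_alignment_loss(A, 2.0 * A))    # per-channel rescaling -> 0.0
```

Note how rescaling a whole channel leaves the loss at zero: the per-channel statistics absorb the change, unlike a raw Euclidean comparison.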
Further, step S5 comprises the following sub-steps:
S51, calculating the gradient of the loss between the feature map data with respect to the previous iteration's normalized adversarial example;
S52, calculating the accumulated gradient of the current iteration according to the gradient obtained in step S51;
S53, calculating the noise of the current iteration according to the accumulated gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain the updated adversarial example.
Further, the formula for calculating the accumulated gradient of the current iteration in step S52 is:

$$\beta_{\nu} = \mu \cdot \beta_{\nu-1} + \frac{g_{\nu-1}}{\|g_{\nu-1}\|}, \qquad \beta_0 = 0$$

$$g_{\nu-1} = \nabla_{\hat{x}^{\nu-1}} \mathcal{L}\big(S_l^{\nu-1}, T_l\big)$$

where $\beta_{\nu}$ is the accumulated gradient of the $\nu$-th iteration, $\beta_0$ is the initial accumulated gradient, $\mu$ is the decay factor, $\|\cdot\|$ is the two-norm operation, $g_{\nu-1}$ is the gradient of the loss function with respect to the previous iteration's normalized adversarial example $\hat{x}^{\nu-1}$, and $\nabla_{\hat{x}^{\nu-1}}(\cdot)$ denotes taking the gradient of the loss function $\mathcal{L}(S_l^{\nu-1}, T_l)$ with respect to $\hat{x}^{\nu-1}$;

the formula for the updated adversarial example obtained in step S53 is:

$$\tilde{x}^{\nu} = \min\big(\max\big(\hat{x}^{\nu-1} - \alpha \cdot \mathrm{sign}(\beta_{\nu}),\; x - \epsilon\big),\; x + \epsilon\big)$$

$$\alpha = \epsilon / T$$

where $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, $\epsilon$ is the infinity-norm threshold of the perturbation, $\alpha$ is the step size, $T$ is the maximum number of iteration steps, $\mathrm{sign}(\cdot)$ is the sign function, and $x$ is the original image.
Further, the formula for the normalization processing in step S6 is:

$$\hat{x}^{\nu} = \mathrm{clip}_{[0,1]}\big(\tilde{x}^{\nu}\big)$$

where $\hat{x}^{\nu}$ is the normalized adversarial example of the $\nu$-th iteration, $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, and $\mathrm{clip}_{[0,1]}(\cdot)$ truncates values to the valid pixel range.
The beneficial effects of the above further scheme are: first, through gradient accumulation, this update scheme stabilizes the update direction of the adversarial example and prevents it from falling into a local optimum; second, limiting the perturbation threshold of the adversarial example effectively reduces its recognizability to the human eye; third, normalizing the adversarial example stabilizes the gradient computation. In summary, this scheme can generate adversarial examples with higher transferability and complete more effective attacks.
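The S52-S53 update and the ε-ball projection described above can be sketched as a single step function (the descent sign and the default constants below are our assumptions, not fixed by the text):

```python
import numpy as np

def momentum_step(x, x_adv, beta, grad, mu=1.0, alpha=0.005, eps=0.05):
    """One S52/S53-style update: accumulate the two-norm-normalised
    gradient into the momentum buffer beta, take a signed step, then
    project back into the eps-ball around the original image x."""
    beta = mu * beta + grad / np.linalg.norm(grad)       # step S52
    x_adv = x_adv - alpha * np.sign(beta)                # step S53
    x_adv = np.minimum(np.maximum(x_adv, x - eps), x + eps)  # |x_adv-x|<=eps
    return x_adv, beta

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, 16)
x_adv, beta = x.copy(), np.zeros(16)
for _ in range(20):          # even many steps stay within the eps-ball
    x_adv, beta = momentum_step(x, x_adv, beta, rng.standard_normal(16))
```

The projection is what keeps the perturbation below the infinity-norm threshold no matter how many iterations run.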
In conclusion, the beneficial effects of the invention are as follows:
(1) The prior art measures the difference between the original and target feature maps by the Euclidean distance and reduces that difference to generate adversarial examples. However, since the Euclidean distance inevitably introduces a spatial constraint, the invention introduces higher-order statistics with translation invariance to measure the similarity between feature maps, which further improves the transferability of adversarial examples and achieves a more effective targeted black-box attack.
(2) The invention considers the extra spatial constraint of the traditional measurement when measuring the difference between feature maps, and introduces higher-order statistics with translation invariance to reduce that constraint. The resulting adversarial images have stronger transferability, achieving better targeted black-box attack performance. The method has a certain universality: it can be combined with most targeted intermediate-layer attack methods, and it introduces no extra computational complexity.
Drawings
Fig. 1 is a flowchart of an image processing method for targeting an intermediate layer against an attack.
Detailed Description
The following description of the embodiments of the invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of the embodiments. To those skilled in the art, various changes are apparent within the spirit and scope of the invention as defined by the appended claims, and all matter produced using the inventive concept is protected.
As shown in fig. 1, an image processing method for targeted intermediate-layer adversarial attacks comprises the following steps:
S1, obtaining a target category, inputting an original image into a classifier, and obtaining a target image from a target gallery based on the target category;
The target gallery is built from a common public data set to assist the generation of the target image. In this embodiment, the target gallery holds 20 pictures for each class, and the target image is obtained from the target gallery.
Step S1 comprises the following sub-steps:
S11, obtaining a target category, inputting an original image into the classifier, and obtaining the feature map data $O_l$ output for the original image by intermediate layer $l$ of the classifier, where $O_l \in \mathbb{R}^{N_l \times M_l}$, $N_l$ is the number of channels of intermediate layer $l$, $M_l$ is the length times the width of the feature map data, and $\mathbb{R}$ is the real number space;
There are 2 methods of obtaining the target category in step S11. Method 1: randomly select one of all categories as the target category. Method 2: according to the classifier's predictions for the original sample, select the predicted class whose prediction confidence ranks r-th as the target category.
S12, obtaining all feature maps of the target category's sub-gallery in the target gallery, and selecting from them the feature map farthest from the feature map data $O_l$ as the target feature map;
Specifically, in step S12, the target gallery contains images of all categories, and the images under each category are called a sub-gallery. After the target category is obtained in S11, the target gallery is queried to obtain the target category's sub-gallery and compute all of its feature map data; among these, the feature map farthest from the feature map data $O_l$ is the target feature map.
S13, taking the image in the target gallery corresponding to the target feature map as the target image.
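Steps S11-S13 can be sketched as follows; the distance used to pick the "farthest" feature map is taken as Euclidean here, which is an assumption — the text does not fix the metric for this selection step:

```python
import numpy as np

def select_target(O_l, gallery_feats):
    # Steps S12-S13 sketch: among the sub-gallery's feature maps, pick
    # the one farthest from the original image's feature map O_l and
    # return its index (the corresponding gallery image is the target).
    dists = [np.linalg.norm(F - O_l) for F in gallery_feats]
    return int(np.argmax(dists))

rng = np.random.default_rng(3)
O = rng.standard_normal((4, 9))             # O_l, shape (N_l, M_l)
gallery = [O + 0.1, O + 5.0, O - 1.0]       # toy sub-gallery features
print(select_target(O, gallery))            # -> 1 (the farthest map)
```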
S2, inputting the target image into the classifier to obtain the feature map data output for the target image by the intermediate layer of the classifier;
S3, inputting the normalized adversarial example of the previous iteration into the classifier to obtain the feature map data output by the intermediate layer of the classifier;
S4, constructing a loss function, and calculating the loss between the feature map data of the target image output by the intermediate layer of the classifier and the feature map data of the previous iteration's normalized adversarial example output by the intermediate layer of the classifier;
In this embodiment, with the help of different auxiliary mapping operations, 3 loss functions based on the pairwise alignment attack method (the 1st to 3rd) and 1 loss function based on global alignment (the 4th) are proposed.
The 1st, using the linear kernel in the pairwise loss:

$$\mathcal{L}\big(S_l^{\nu-1}, T_l\big) = \frac{1}{M_l^2} \sum_{i=1}^{M_l} \sum_{j=1}^{M_l} \big[\, k(s_{\cdot i}, s_{\cdot j}) - 2\,k(s_{\cdot i}, t_{\cdot j}) + k(t_{\cdot i}, t_{\cdot j}) \,\big], \qquad k(x, y) = x^{\mathrm{T}} y$$

The 2nd, using the RBF kernel $k(x, y) = \exp\big(-\|x - y\|^2 / (2\sigma^2)\big)$ in the same loss;

The 3rd, using the polynomial kernel $k(x, y) = (x^{\mathrm{T}} y + c)^d$ in the same loss;

where $\mathcal{L}(\cdot,\cdot)$ is the loss function, $S_l^{\nu-1}$ is the feature map data output by intermediate layer $l$ of the classifier for the previous iteration's normalized adversarial example in the $\nu$-th iteration, $T_l$ is the feature map data output by intermediate layer $l$ of the classifier for the target image, $s_{\cdot i}$ and $s_{\cdot j}$ are the data of $S_l^{\nu-1}$ at spatial positions $i$ and $j$, $t_{\cdot i}$ and $t_{\cdot j}$ are the data of $T_l$ at spatial positions $i$ and $j$, $\sigma$, $c$ and $d$ are hyper-parameters, $M_l$ is the length times the width of the feature map data, $\exp(\cdot)$ is the exponential function, $(\cdot)^{\mathrm{T}}$ is the transpose operation, and $\|\cdot\|^2$ is the square of the two-norm.
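The three kernels can be written down directly from their standard forms; the exact constants in the patent's formula images are not recoverable, so the σ, c, d defaults below are illustrative:

```python
import numpy as np

# Standard forms of the three kernels named in the embodiment, for
# 1-D feature vectors at single spatial positions; sigma, c, d are
# the hyper-parameters mentioned in the text (defaults are ours).
def k_linear(x, y):
    return float(x @ y)                       # k(x, y) = x^T y

def k_rbf(x, y, sigma=1.0):
    return float(np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2)))

def k_poly(x, y, c=1.0, d=2.0):
    return float((x @ y + c) ** d)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(k_linear(x, y), k_rbf(x, x), k_poly(x, y))   # 0.0 1.0 1.0
```

All three are symmetric positive-definite kernels, so each induces a complete inner product space as the surrounding text requires.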
The 4th:

$$\mathcal{L}\big(S_l^{\nu-1}, T_l\big) = \frac{1}{N_l M_l} \sum_{n=1}^{N_l} \sum_{m=1}^{M_l} \left\| \frac{(S_l^{\nu-1})_{nm}}{\sqrt{\mathrm{Var}\big((S_l^{\nu-1})_{n\cdot}\big)}} - \frac{(T_l)_{nm}}{\sqrt{\mathrm{Var}\big((T_l)_{n\cdot}\big)}} \right\|^2$$

where $\mathcal{L}(\cdot,\cdot)$ is the loss function, $S_l^{\nu-1}$ is the feature map data output by intermediate layer $l$ of the classifier for the previous iteration's normalized adversarial example in the $\nu$-th iteration, $T_l$ is the feature map data output by intermediate layer $l$ of the classifier for the target image, $\|\cdot\|$ is the two-norm operation, $N_l$ is the number of channels of the feature map data, $M_l$ is the length times the width of the feature map data, $(S_l^{\nu-1})_{nm}$ and $(T_l)_{nm}$ are the data of $S_l^{\nu-1}$ and $T_l$ at position $(n,m)$, and $\mathrm{Var}((S_l^{\nu-1})_{n\cdot})$ and $\mathrm{Var}((T_l)_{n\cdot})$ are the variances of all values of the feature map data $S_l^{\nu-1}$ and $T_l$ on channel $n$.
S5, calculating the gradient of the loss between the feature map data with respect to the previous iteration's normalized adversarial example, calculating the noise of the current iteration based on the gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain an updated adversarial example;
Step S5 comprises the following sub-steps:
S51, calculating the gradient of the loss between the feature map data with respect to the previous iteration's normalized adversarial example;
S52, calculating the accumulated gradient of the current iteration according to the gradient obtained in step S51;
S53, calculating the noise of the current iteration according to the accumulated gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain the updated adversarial example.
The formula for calculating the accumulated gradient of the current iteration in step S52 is:

$$\beta_{\nu} = \mu \cdot \beta_{\nu-1} + \frac{g_{\nu-1}}{\|g_{\nu-1}\|}, \qquad \beta_0 = 0$$

$$g_{\nu-1} = \nabla_{\hat{x}^{\nu-1}} \mathcal{L}\big(S_l^{\nu-1}, T_l\big)$$

where $\beta_{\nu}$ is the accumulated gradient of the $\nu$-th iteration, $\beta_0$ is the initial accumulated gradient, $\mu$ is the decay factor, $\|\cdot\|$ is the two-norm operation, $g_{\nu-1}$ is the gradient of the loss function with respect to the previous iteration's normalized adversarial example $\hat{x}^{\nu-1}$, and $\nabla_{\hat{x}^{\nu-1}}(\cdot)$ denotes taking the gradient of the loss function $\mathcal{L}(S_l^{\nu-1}, T_l)$ with respect to $\hat{x}^{\nu-1}$;

the formula for the updated adversarial example obtained in step S53 is:

$$\tilde{x}^{\nu} = \min\big(\max\big(\hat{x}^{\nu-1} - \alpha \cdot \mathrm{sign}(\beta_{\nu}),\; x - \epsilon\big),\; x + \epsilon\big)$$

$$\alpha = \epsilon / T$$

where $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, $\epsilon$ is the infinity-norm threshold of the perturbation, $\alpha$ is the step size, $T$ is the maximum number of iteration steps, $\mathrm{sign}(\cdot)$ is the sign function, $x$ is the original image, and the min-max operation $\min(\max(\cdot,\cdot),\cdot)$ ensures that the adversarial example $\tilde{x}^{\nu}$ always stays within $\epsilon$ of $x$.
S6, normalizing the updated adversarial example to obtain the normalized adversarial example;
The formula for the normalization processing in step S6 is:

$$\hat{x}^{\nu} = \mathrm{clip}_{[0,1]}\big(\tilde{x}^{\nu}\big)$$

where $\hat{x}^{\nu}$ is the normalized adversarial example of the $\nu$-th iteration, $\tilde{x}^{\nu}$ is the updated adversarial example of the $\nu$-th iteration, and $\mathrm{clip}_{[0,1]}(\cdot)$ truncates values to the valid pixel range.
S7, taking the normalized adversarial example of the current iteration as the previous iteration's normalized adversarial example, repeating steps S3 to S6 until the number of iteration steps reaches a set value, then inputting the final normalized adversarial example into the classifier, which outputs an incorrect classification result, completing the image processing of the adversarial attack.

Claims (8)

1. An image processing method for targeted intermediate-layer adversarial attacks, comprising the following steps:
S1, obtaining a target category, inputting an original image into a classifier, and obtaining a target image from a target gallery based on the target category;
S2, inputting the target image into the classifier to obtain the feature map data output for the target image by an intermediate layer of the classifier;
S3, inputting the normalized adversarial example of the previous iteration into the classifier to obtain the feature map data output by the intermediate layer of the classifier;
S4, constructing a loss function, and calculating the loss between the feature map data of the target image output by the intermediate layer of the classifier and the feature map data of the previous iteration's normalized adversarial example output by the intermediate layer of the classifier;
S5, calculating the gradient of the loss between the feature map data with respect to the previous iteration's normalized adversarial example, calculating the noise of the current iteration based on the gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain an updated adversarial example;
S6, normalizing the updated adversarial example to obtain the normalized adversarial example of the current iteration;
S7, taking the normalized adversarial example of the current iteration as the previous iteration's normalized adversarial example, repeating steps S3 to S6 until the number of iteration steps reaches a set value, and inputting the final normalized adversarial example into the classifier, which outputs an incorrect classification result, completing the image processing of the adversarial attack.
2. The image processing method for targeted intermediate-layer adversarial attacks according to claim 1, characterized in that step S1 comprises the following sub-steps:
S11, obtaining a target category, inputting the original image into the classifier, and obtaining the feature map data $O_l$ output for the original image by intermediate layer $l$ of the classifier, where $O_l \in \mathbb{R}^{N_l \times M_l}$, $N_l$ is the number of channels of intermediate layer $l$, $M_l$ is the length times the width of the feature map data, and $\mathbb{R}$ is the real number space;
S12, obtaining all feature maps of the target category's sub-gallery in the target gallery, and selecting from them the feature map farthest from the feature map data $O_l$ as the target feature map;
S13, taking the image in the target gallery corresponding to the target feature map as the target image.
3. The image processing method for targeted intermediate-layer adversarial attacks according to claim 2, characterized in that there are 2 methods of obtaining the target category in step S11. Method 1: randomly select one of all categories as the target category. Method 2: according to the classifier's predictions for the original sample, select the predicted class whose prediction confidence ranks r-th as the target category.
4. The image processing method for targeted intermediate-layer adversarial attacks according to claim 1, characterized in that the loss function in step S4 is:

$$\mathcal{L}\big(S_l^{\nu-1}, T_l\big) = \frac{1}{M_l^2} \sum_{i=1}^{M_l} \sum_{j=1}^{M_l} \big[\, k(s_{\cdot i}, s_{\cdot j}) - 2\,k(s_{\cdot i}, t_{\cdot j}) + k(t_{\cdot i}, t_{\cdot j}) \,\big]$$

where $\mathcal{L}(\cdot,\cdot)$ is the loss function, $S_l^{\nu-1}$ is the feature map data output by intermediate layer $l$ of the classifier for the previous iteration's normalized adversarial example in the $\nu$-th iteration, $T_l$ is the feature map data output by intermediate layer $l$ of the classifier for the target image, $s_{\cdot i}$ and $s_{\cdot j}$ are the data of $S_l^{\nu-1}$ at spatial positions $i$ and $j$, $t_{\cdot i}$ and $t_{\cdot j}$ are the data of $T_l$ at spatial positions $i$ and $j$, $M_l$ is the length times the width of the feature map data, and $k(\cdot,\cdot)$ is an auxiliary mapping operation that maps the feature map data from the original space to a complete inner product space.
5. The image processing method for targeted intermediate-layer adversarial attacks according to claim 1, characterized in that the loss function in step S4 is:

$$\mathcal{L}\big(S_l^{\nu-1}, T_l\big) = \frac{1}{N_l M_l} \sum_{n=1}^{N_l} \sum_{m=1}^{M_l} \left\| \frac{(S_l^{\nu-1})_{nm}}{\sqrt{\mathrm{Var}\big((S_l^{\nu-1})_{n\cdot}\big)}} - \frac{(T_l)_{nm}}{\sqrt{\mathrm{Var}\big((T_l)_{n\cdot}\big)}} \right\|^2$$

where $\mathcal{L}(\cdot,\cdot)$ is the loss function, $S_l^{\nu-1}$ is the feature map data output by intermediate layer $l$ of the classifier for the previous iteration's normalized adversarial example in the $\nu$-th iteration, $T_l$ is the feature map data output by intermediate layer $l$ of the classifier for the target image, $\|\cdot\|$ is the two-norm operation, $N_l$ is the number of channels of the feature map data, $M_l$ is the length times the width of the feature map data, $(S_l^{\nu-1})_{nm}$ and $(T_l)_{nm}$ are the data of $S_l^{\nu-1}$ and $T_l$ at position $(n,m)$, and $\mathrm{Var}((S_l^{\nu-1})_{n\cdot})$ and $\mathrm{Var}((T_l)_{n\cdot})$ are the variances of all values of the feature map data $S_l^{\nu-1}$ and $T_l$ on channel $n$.
6. The image processing method for targeted intermediate-layer adversarial attacks according to claim 1, characterized in that step S5 comprises the following sub-steps:
S51, calculating the gradient of the loss between the feature map data with respect to the previous iteration's normalized adversarial example;
S52, calculating the accumulated gradient of the current iteration according to the gradient obtained in step S51;
S53, calculating the noise of the current iteration according to the accumulated gradient, and adding the noise to the previous iteration's normalized adversarial example to obtain the updated adversarial example.
7. The image processing method for an intermediate layer targeted against attacks according to claim 6, wherein the formula for calculating the cumulative gradient of the current iteration number in step S52 is as follows:
Figure FDA0003120643480000038
Figure FDA0003120643480000039
wherein beta is v Cumulative gradient, beta, for the ν th iteration 0 μ is the attenuation factor, | is a two-norm operation, g v-1 Normalizing countermeasure samples for a loss function with respect to an upper round of iterations
Figure FDA00031206434800000310
The gradient of (a) of (b) is,
Figure FDA00031206434800000311
according to a loss function
Figure FDA0003120643480000041
Normalized confrontation samples for round-up iterations
Figure FDA0003120643480000042
Gradient operation is solved;
the formula of the updated adversarial sample obtained in step S53 is:

x'_v = x̂^{adv}_{v-1} − α · sign(β_v)

x^{adv}_v = clip(x'_v, x − ε, x + ε)

α = ε / T

wherein x^{adv}_v is the updated adversarial sample of the v-th iteration, ε is the perturbation infinity-norm threshold, α is the step size, T is the maximum number of iteration steps, sign(·) is the sign function, and x is the original image.
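Sub-steps S51–S53 amount to a momentum-style signed-gradient update. The NumPy sketch below is one plausible reading of the claim, under stated assumptions: the descent sign (a targeted attack decreases the feature-space loss) and the projection into the ε-ball around the original image are reconstructed from the symbol definitions, and the function name `momentum_step` and all toy values are illustrative.

```python
import numpy as np

def momentum_step(x, x_hat_prev, beta_prev, grad, mu, alpha, eps):
    """One iteration of the update described in sub-steps S52 and S53.

    x          : original image
    x_hat_prev : normalized adversarial sample of the previous iteration
    beta_prev  : accumulated gradient of the previous iteration
    grad       : gradient g_{v-1} of the loss w.r.t. x_hat_prev (from S51)
    mu         : decay factor; alpha : step size; eps : L-inf threshold
    """
    # S52: accumulate the two-norm-normalized gradient with momentum.
    beta = mu * beta_prev + grad / np.linalg.norm(grad)
    # S53: take a signed step (descending toward the target), then keep
    # the result inside the eps-ball around the original image x.
    x_adv = x_hat_prev - alpha * np.sign(beta)
    x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv, beta

# Illustrative run with toy values.
x = np.zeros(3)                       # "original image"
x_hat = np.array([0.02, -0.01, 0.0])  # previous normalized sample
beta0 = np.zeros(3)                   # initial accumulated gradient
g = np.array([0.3, -0.4, 0.1])        # loss gradient g_{v-1}
x_adv, beta1 = momentum_step(x, x_hat, beta0, g, mu=1.0, alpha=0.01, eps=0.03)
```

With α = ε / T, running T such steps keeps the total perturbation within the ε infinity-norm budget.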
8. The image processing method for an intermediate-layer targeted adversarial attack according to claim 1, wherein the formula of the normalization process in step S6 is:

x̂^{adv}_v = clip(x^{adv}_v, 0, 1)

wherein x̂^{adv}_v is the normalized adversarial sample of the v-th iteration, and x^{adv}_v is the updated adversarial sample of the v-th iteration.
CN202110676108.2A 2021-06-18 2021-06-18 Image processing method for resisting attack by target in middle layer Active CN113344090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110676108.2A CN113344090B (en) 2021-06-18 2021-06-18 Image processing method for resisting attack by target in middle layer


Publications (2)

Publication Number Publication Date
CN113344090A CN113344090A (en) 2021-09-03
CN113344090B true CN113344090B (en) 2022-11-22

Family

ID=77476514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110676108.2A Active CN113344090B (en) 2021-06-18 2021-06-18 Image processing method for resisting attack by target in middle layer

Country Status (1)

Country Link
CN (1) CN113344090B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948658A (en) * 2019-02-25 2019-06-28 浙江工业大学 The confrontation attack defense method of Feature Oriented figure attention mechanism and application
CN111709435A (en) * 2020-05-18 2020-09-25 杭州电子科技大学 Countermeasure sample generation method based on discrete wavelet transform
CN111932646A (en) * 2020-07-16 2020-11-13 电子科技大学 Image processing method for resisting attack
CN112368719A (en) * 2018-05-17 2021-02-12 奇跃公司 Gradient antagonism training of neural networks

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10624558B2 (en) * 2017-08-10 2020-04-21 Siemens Healthcare Gmbh Protocol independent image processing with adversarial networks
US10944767B2 (en) * 2018-02-01 2021-03-09 International Business Machines Corporation Identifying artificial artifacts in input data to detect adversarial attacks
CN112396129B (en) * 2020-12-08 2023-09-05 中山大学 Challenge sample detection method and universal challenge attack defense system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Patch-wise Attack for Fo oling Dee p Neural Network";Lianli Gao等;《http://arxiv.org/abs/2007.06765》;20201231;1-21 *
"适用于图像检索的强化对抗生成哈希方法";施鸿源等;《小型微型计算机系统》;20210531;1039-1043 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant