CN114998707A - Attack method and device for evaluating robustness of target detection model - Google Patents

Attack method and device for evaluating robustness of target detection model Download PDF

Info

Publication number
CN114998707A
CN114998707A CN202210935649.7A CN202210935649A CN114998707A CN 114998707 A CN114998707 A CN 114998707A CN 202210935649 A CN202210935649 A CN 202210935649A CN 114998707 A CN114998707 A CN 114998707A
Authority
CN
China
Prior art keywords
attack
pixel points
disturbance
detection model
key pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210935649.7A
Other languages
Chinese (zh)
Other versions
CN114998707B (en
Inventor
吕洁印
戴涛
刘浩
周受钦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Cimc Technology Co ltd
Shenzhen CIMC Intelligent Technology Co Ltd
Original Assignee
Shenzhen Cimc Technology Co ltd
Shenzhen CIMC Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Cimc Technology Co ltd, Shenzhen CIMC Intelligent Technology Co Ltd filed Critical Shenzhen Cimc Technology Co ltd
Priority to CN202210935649.7A priority Critical patent/CN114998707B/en
Publication of CN114998707A publication Critical patent/CN114998707A/en
Application granted granted Critical
Publication of CN114998707B publication Critical patent/CN114998707B/en
Priority to PCT/CN2022/137578 priority patent/WO2024027068A1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an attack method and device for evaluating robustness of a target detection model, wherein the method comprises the following steps: acquiring at least one target picture input into a target detection model; calculating key pixel points in the initial importance matrix; selecting the key pixel points as disturbance objects according to the importance; carrying out iterative attack on the key pixel points until the successful attack reaches the iteration times; when the attack is successful and the iteration times are not reached, recovering key pixel points with disturbance amplitude smaller than a first preset number of preset values based on the disturbance amplitude; when the attack fails and the disturbance iteration times are reached, adding a new second preset number of key pixel points; and storing the target picture with the least number of disturbed key pixel points and the least number of detected target instances in the attack process. By dynamically increasing and recovering the disturbed key pixel points in the attack process, the method and the device are favorable for finding out a better disturbed key pixel point combination.

Description

Attack method and device for evaluating robustness of target detection model
Technical Field
The application relates to the technical field of artificial intelligence and data security, in particular to an attack method and device for evaluating robustness of a target detection model.
Background
Object detection is an important research task in computer vision and has wide application in real life, so that the robustness and safety of a model are very important. The robustness and safety of the target detection model need to be evaluated to meet the actual requirements. In general, a sparse attack method may be used to evaluate the robustness and security of the target detection model.
However, the conventional sparse attack method has the following problems: (1) the positions of the disturbance pixels are determined before the attack, and the disturbance pixels can be dynamically changed, so that the attack method is difficult to adapt to the dynamically changed disturbance pixels; (2) key pixel points cannot be effectively found for disturbance; (3) the anti-attack method directly using the image recognition model has no pertinence.
Disclosure of Invention
The present application is proposed to solve at least one of the problems described above. According to an aspect of the present application, there is provided an attack method for evaluating robustness of a target detection model, the method including:
acquiring at least one target picture input into a target detection model;
calculating key pixel points of the at least one target picture by adopting an initial importance matrix;
obtaining an intermediate matrix after the initial importance matrix is subjected to maximum pooling operation;
when the elements in the initial importance matrix are positioned in a target instance region and are equal to the elements corresponding to the intermediate matrix, reserving the corresponding elements, otherwise, setting the values of the corresponding elements to be 0, and obtaining a final importance matrix;
selecting the key pixel points as disturbance objects according to the final importance matrix;
carrying out iterative attack on the key pixel points until successful attack and iteration times are reached;
when the attack is successful and the iteration times are not reached, recovering the key pixel points with the disturbance amplitude smaller than a first preset number of preset values based on the disturbance amplitude;
when the attack fails and the disturbance iteration times are reached, adding a new second preset number of key pixel points;
and storing the target picture with the minimum number of disturbed key pixel points and the minimum number of detected target instances in the attack process.
In one embodiment of the present application, wherein the formula of the initial importance matrix is as follows:
Figure DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE004
is shown at
Figure DEST_PATH_IMAGE006
The pixel point of the position is
Figure DEST_PATH_IMAGE008
The value on the channel is then used to determine,
Figure DEST_PATH_IMAGE010
is shown at
Figure 131911DEST_PATH_IMAGE006
The importance of the location pixel point(s),
Figure DEST_PATH_IMAGE012
means that a value, L (x, P, M), is randomly selected from the interval 0 to 1 i ) The loss function is represented.
In one embodiment of the present application, the formula in which the formula of the importance matrix is maximally pooled is as follows;
Figure DEST_PATH_IMAGE014
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE016
representing the maximum pooling operation and s represents the size of the pooling core.
In an embodiment of the present application, a formula for selecting the key pixel point as a disturbance object is as follows:
Figure DEST_PATH_IMAGE018
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE020
denotes the first
Figure DEST_PATH_IMAGE022
The position mask of the perturbed pixel in the round of attack,
Figure DEST_PATH_IMAGE024
the operation of adding the disturbance pixel points is expressed, I represents the importance matrix of the pixel points, which is obtained by the attribution analysis method of the neural network,
Figure DEST_PATH_IMAGE026
indicating an increased number of perturbed pixel points.
In an embodiment of the present application, performing an iterative attack on the key pixel includes: attacking the target detection model by adopting a sparse attack method, wherein a loss function adopted by the sparse attack method is as follows:
Figure DEST_PATH_IMAGE028
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE030
is shown as
Figure DEST_PATH_IMAGE032
An object instance is in
Figure 412630DEST_PATH_IMAGE008
Probability values over the categories.
In an embodiment of the application, when the attack is successful and the iteration number is not reached, based on the size of the disturbance amplitude, recovering the key pixel points of which the disturbance amplitude is smaller than a first preset number of preset values includes: after the attack is successful, when the target detection model cannot detect any target, recovering the disturbed key pixel points, wherein a recovery formula is as follows:
Figure DEST_PATH_IMAGE034
wherein the content of the first and second substances,
Figure 477669DEST_PATH_IMAGE020
is shown as
Figure 49595DEST_PATH_IMAGE022
The position mask of the perturbed pixel in the round-robin attack,
Figure DEST_PATH_IMAGE036
expressing the operation of restoring the disturbed pixel points, I expressing the importance matrix of the pixel points, which is obtained by the attribution analysis method of the neural network,
Figure DEST_PATH_IMAGE038
and representing the proportion of the recovery disturbance pixel points.
In an embodiment of the present application, when the attack is successful and the iteration number is not reached, based on the magnitude of the disturbance amplitude, recovering the key pixel points of which the disturbance amplitude is smaller than the first preset number of the preset values, includes: based on the size of the disturbance amplitude, recovering the pixel points with smaller disturbance amplitude, wherein the recovery formula is as follows:
Figure DEST_PATH_IMAGE040
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE042
is shown in all the disturbed pixels, ordered from small to large in absolute value, located at
Figure 566771DEST_PATH_IMAGE038
A value of a quantile element; at the mask
Figure DEST_PATH_IMAGE044
When the element is 1, the corresponding pixel is a disturbance pixel;
Figure DEST_PATH_IMAGE046
and
Figure DEST_PATH_IMAGE048
respectively representing a disturbance mask and a counterdisturbance
Figure 681489DEST_PATH_IMAGE006
The value of the element.
In an embodiment of the present application, when the attack fails and the number of disturbance iterations is reached, adding a new second preset number of the key pixel points includes: for each element in the perturbation mask, increasing the number of perturbed key pixel points; the addition formula is as follows:
Figure DEST_PATH_IMAGE050
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE052
is shown in descending order in the importance matrix
Figure DEST_PATH_IMAGE054
A value of an element; importance matrix medium ratio
Figure 88068DEST_PATH_IMAGE052
The pixel points corresponding to the large elements are used as newly-added disturbance objects;
Figure 678449DEST_PATH_IMAGE046
represents a disturbance mask in
Figure 750048DEST_PATH_IMAGE006
The value of the element.
According to another aspect of the present application, there is provided an attack apparatus for evaluating robustness of a target detection model, the apparatus including:
a memory and a processor, the memory having stored thereon a computer program for execution by the processor, the computer program, when executed by the processor, causing the processor to perform the aforementioned attack method of assessing robustness of an object detection model.
According to yet another aspect of the present application, a storage medium is provided, on which a computer program is stored, which, when executed by a processor, causes the processor to perform the above-mentioned attack method for evaluating robustness of an object detection model.
According to the attack method for evaluating the robustness of the target detection model, the key pixel points of at least one target picture input into the target detection model are determined firstly, the key pixel points are selected as disturbance objects according to the importance of the key pixel points, and the disturbed key pixel points are dynamically increased and restored in the process of attacking the key pixel points, so that a better disturbed key pixel point combination can be found. In addition, the method determines key pixel points to be disturbed by using the idea of attribution analysis of the neural network, and can obtain higher attack success rate by using fewer disturbed pixel points. In addition, the target detection model predicts a plurality of target instances, each of which is characterized by its corresponding receptive field. In order to influence more target examples by less pixel disturbance, the method and the device use the maximum pooling operation to select the local most important pixel points for disturbance, and the scattered disturbance pixel points can realize better sparse attack, namely, the key pixel points with less disturbance influence the prediction results of more target examples in the picture.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of the embodiments of the present application when taken in conjunction with the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the principles of the application. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 shows a schematic flow diagram of an attack method for assessing robustness of a target detection model according to an embodiment of the application;
FIG. 2 is a schematic diagram illustrating an application scenario of an attack method for evaluating robustness of a target detection model according to an embodiment of the present application;
fig. 3 shows a schematic block diagram of an attack apparatus for evaluating robustness of a target detection model according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and that the present application is not limited by the example embodiments described herein. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the application described in the application without inventive step, shall fall within the scope of protection of the application.
The deep neural network model has been successful in many tasks, but it still has many problems, and its robustness and interpretability are poor. Target detection is an important research task in computer vision and has wide application in real life, so that the robustness and safety of a model are important. Based on the above, a sparse attack framework of the model needs to be designed, and the robustness of the model is evaluated in a low-dimensional space. The method and the device can be mainly used as a robustness assessment tool of the target detection model to verify the robustness and the safety of the model, and are very important for the practical application of the target detection model.
In general, there are many differences between object detection models and image classification models. The target detection model predicts the results of multiple target instances, each affected by the pixel points within its receptive field. In the attack process, a target instance can dominate the loss function, so that the method does not determine all disturbed pixel points at one time, but dynamically and gradually increases the disturbed pixel points in the attack process. The main goal of sparse attack is to minimize the number of disturbed pixels. In order to reduce the number of disturbance pixel points and improve the attack success rate, the application provides a targeted design for a target detection model: (1) finding out pixel points which are important for a model prediction result in the image by using a neural network attribution analysis method for reference; (2) the disturbed pixel points are scattered, so that more characteristics in the characteristic diagram can be influenced, and the prediction results of more target examples can be influenced.
Based on the initial purpose, the method is an attack method for evaluating the robustness of the target detection model based on the iterative addition and deletion of key pixel points, in the attack process, the method calculates the importance matrix of the picture pixels by using the thought of attribution analysis of a neural network, and uses the maximum pooling operation to select the pixel points corresponding to the local maximum values in the importance matrix for disturbance; and after the attack is successful, recovering the values of a part of disturbed pixel points based on the disturbance amplitude. And when the iteration times of the attack are reached, stopping the flow of the attack, and storing the picture with the least number of the disturbance pixels and the least number of the detection targets in the attack process. The picture is the final confrontation sample.
Aiming at the target detection model, the method is an attack method for evaluating the robustness of the target detection model by adopting a sparse attack algorithm, and the robustness of the target detection model is evaluated under low-dimensional space disturbance. The method uses an iterative addition and deletion sparse attack method based on key points for reference of the idea of attribution analysis of the neural network, and successfully attacks the target detection model with few disturbance pixels, so that the target detection model cannot detect any target. Therefore, the method can be used as an evaluation tool for the robustness of the model, and the development of the robustness of the model and a defense method is promoted.
The application provides a sparse attack framework of a target detection model, and the robustness of the model is evaluated in a low-dimensional space. The method and the device can be mainly used as a robustness assessment tool of the target detection model to verify the robustness and the safety of the target detection model, and are very important for the practical application of the target detection model.
Based on the foregoing technical problem, the present application provides an attack method for evaluating robustness of a target detection model, where the method includes: acquiring at least one target picture input into a target detection model; calculating key pixel points of the at least one target picture by adopting an initial importance matrix; obtaining an intermediate matrix after the initial importance matrix is subjected to maximum pooling operation; when the elements in the initial importance matrix are positioned in a target instance region and are equal to the elements corresponding to the intermediate matrix, reserving the corresponding elements, otherwise, setting the values of the corresponding elements to be 0, and obtaining a final importance matrix; selecting the key pixel points as disturbance objects according to the final importance matrix; carrying out iterative attack on the key pixel points until the successful attack reaches the iteration times; when the attack is successful and the iteration times are not reached, recovering the key pixel points with the disturbance amplitude smaller than a first preset number of preset values based on the disturbance amplitude; when the attack fails and the disturbance iteration times are reached, adding a new second preset number of key pixel points; and storing the target picture with the minimum number of disturbed key pixel points and detection target instances in the attack process. The method and the device dynamically increase and recover the disturbed key pixel points in the attack process, and are favorable for finding out a better disturbed key pixel point combination. In addition, the method determines key pixel points to be disturbed by using the idea of attribution analysis of the neural network, so that less disturbed key pixel points can be disturbed to obtain higher attack success rate. In addition, the target detection model predicts a plurality of target instances, each of which is characterized by its corresponding receptive field. In order to influence more target examples by less pixel disturbance, the method and the device use the maximum pooling operation to select the local most important pixel points for disturbance, and the scattered disturbance pixel points can realize better sparse attack, namely, the prediction results of more target examples in the picture are influenced by less disturbance pixels.
The following describes in detail a scheme of an attack method for evaluating robustness of a target detection model according to an embodiment of the present application with reference to the drawings. The features of the various embodiments of the present application may be combined with each other without conflict.
FIG. 1 shows a schematic flow diagram of an attack method for assessing robustness of a target detection model according to an embodiment of the application; as shown in fig. 1, an attack method 100 for evaluating robustness of a target detection model according to an embodiment of the present application may include the following steps S101, S102, S103, S104, S105, S106, S107, S108, and S109.
In step S101, at least one target picture of an input target detection model is acquired.
In step S102, a key pixel point of the at least one target picture is calculated by using the initial importance matrix.
Wherein the formula of the initial importance matrix is as follows:
Figure DEST_PATH_IMAGE056
(1)
wherein, the first and the second end of the pipe are connected with each other,
Figure 47169DEST_PATH_IMAGE004
is shown at
Figure 47486DEST_PATH_IMAGE006
The pixel point of the position is
Figure 859584DEST_PATH_IMAGE008
The value on the channel is then used to determine,
Figure 236339DEST_PATH_IMAGE010
is shown at
Figure 948817DEST_PATH_IMAGE006
The importance of the location pixel point(s),
Figure 385615DEST_PATH_IMAGE012
means that a value, L (x, P, M), is randomly selected from the interval 0 to 1 i ) The loss function is represented.
In step S103, the initial importance matrix is subjected to a maximum pooling operation to obtain an intermediate matrix.
In practical situations, key pixel points important for a target detection model prediction result are often gathered together. If the key pixel points are gathered at one position and all the key pixel points are selected as the disturbance pixel points, the number of the characteristics which can be influenced by the key pixel points is limited because the characteristics of the target detection model have a certain receptive field. In order to reduce the proportion of the number of the disturbance pixel points and keep the number of the influenced features, the attack method only selects the pixel points corresponding to the local maximum value in the importance matrix; and these pixels are located inside the actual target instance. This may reduce the number of disturbing pixels while reducing the probability of all target instances. The formula for performing maximum pooling on the formula of the importance matrix is as follows;
Figure DEST_PATH_IMAGE058
(2)
wherein the content of the first and second substances,
Figure 888272DEST_PATH_IMAGE016
representing the maximum pooling operation and s represents the size of the pooling core.
In step S104, when the elements in the initial importance matrix are located in the target instance region and equal to the corresponding elements of the intermediate matrix, the corresponding elements are retained, otherwise, the values of the corresponding elements are set to 0, and a final importance matrix is obtained.
Wherein the final importance matrix is formulated as follows:
Figure DEST_PATH_IMAGE060
(3)
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE062
a location mask representing the actual target instance,
Figure DEST_PATH_IMAGE064
to represent
Figure 911460DEST_PATH_IMAGE006
Inside the actual target instance.
In step S105, the key pixel point is selected as a disturbance object according to the final importance matrix.
The formula for selecting the key pixel point as the disturbance object is as follows:
Figure DEST_PATH_IMAGE066
(4)
wherein the content of the first and second substances,
Figure 183173DEST_PATH_IMAGE020
is shown as
Figure 525292DEST_PATH_IMAGE022
The position mask of the perturbed pixel in the round of attack,
Figure 810518DEST_PATH_IMAGE024
the operation of adding the disturbance pixel points is expressed, I represents the importance matrix of the pixel points, which is obtained by the attribution analysis method of the neural network,
Figure 529075DEST_PATH_IMAGE026
indicating an increased number of perturbed pixel points.
The above steps S101 to S105 are processes of determining key pixel points in the target picture.
In step S106, iterative attack is performed on the key pixel until the number of iterations is reached and successful attack is performed.
Specifically, the method determines pixel points which have important influence on the model prediction result by using a plurality of attribution analysis methods of the neural network, such as a gradient-based method, a gradient and input product method and an integral gradient method. These neural network attribution analysis methods generate an importance matrix. The size of the image is consistent with that of the original image, and the importance of pixel points in the image is represented.
In an embodiment of the present application, performing an iterative attack on the key pixel includes: attacking the target detection model by adopting a sparse attack method, wherein a loss function adopted by the sparse attack method is as follows:
Figure DEST_PATH_IMAGE068
(5)
wherein the content of the first and second substances,
Figure 389715DEST_PATH_IMAGE030
is shown as
Figure 902736DEST_PATH_IMAGE032
An object instance is in
Figure 409678DEST_PATH_IMAGE008
Probability values over the categories.
In step S107, when the attack is successful and the iteration number is not reached, the key pixel points with the disturbance amplitude smaller than the first preset number of the preset values are recovered based on the disturbance amplitude.
In an embodiment of the present application, when the attack is successful and the iteration number is not reached, based on the magnitude of the disturbance amplitude, recovering the key pixel points of which the disturbance amplitude is smaller than the first preset number of the preset values, includes: after the attack is successful, when the target detection model cannot detect any target, recovering the disturbed key pixel points, wherein a recovery formula is as follows:
Figure DEST_PATH_IMAGE070
(6)
wherein the content of the first and second substances,
Figure 338451DEST_PATH_IMAGE020
is shown as
Figure 381494DEST_PATH_IMAGE022
The position mask of the perturbed pixel in the round of attack,
Figure 563951DEST_PATH_IMAGE036
expressing the operation of restoring the disturbed pixel points, I expressing the importance matrix of the pixel points, which is obtained by the attribution analysis method of the neural network,
Figure 325234DEST_PATH_IMAGE038
and expressing the proportion of the recovered disturbed pixel points.
After the attack is successful, the target detection model cannot detect any target, and the disturbed pixel point begins to be recovered, as shown in formula (6).
In an embodiment of the present application, when the attack is successful and the iteration number is not reached, based on the magnitude of the disturbance amplitude, recovering the key pixel points of which the disturbance amplitude is smaller than the first preset number of the preset values, includes: based on the size of the disturbance amplitude, recovering the pixel points with smaller disturbance amplitude, wherein the recovery formula is as follows:
Figure DEST_PATH_IMAGE072
(7)
wherein the content of the first and second substances,
Figure 588856DEST_PATH_IMAGE042
is shown in all the disturbed pixels, ordered from small to large in absolute value, located at
Figure 486405DEST_PATH_IMAGE038
The value of the fractional element; at the mask
Figure 341228DEST_PATH_IMAGE044
When the element is 1, the corresponding pixel is a disturbance pixel;
Figure 88342DEST_PATH_IMAGE046
and
Figure 686814DEST_PATH_IMAGE048
respectively representing a disturbance mask and a counterdisturbance
Figure 704448DEST_PATH_IMAGE006
The value of the element.
In the embodiment of the application, the pixel point with smaller disturbance amplitude can be recovered based on the disturbance amplitude, and the disturbance amplitude can be determined by specifically referring to the size of P.
In step S108, when the attack fails and the number of disturbance iterations is reached, a new second preset number of the key pixel points is added.
In an embodiment of the present application, when the attack fails and the number of disturbance iterations is reached, adding a new second preset number of the key pixel points includes: for each element in the perturbation mask, increasing the number of perturbed key pixel points; the addition formula is as follows:
Figure DEST_PATH_IMAGE074
wherein the content of the first and second substances,
Figure 667856DEST_PATH_IMAGE052
is shown in descending order in the importance matrix
Figure 636687DEST_PATH_IMAGE054
A value of an element; importance matrix medium ratio
Figure 38849DEST_PATH_IMAGE052
The pixel points corresponding to the large elements are used as newly-added disturbance objects;
Figure 645411DEST_PATH_IMAGE046
represents a disturbance mask in
Figure 107617DEST_PATH_IMAGE006
The value of the element.
As can be seen, the ratio in the importance matrix
Figure 330787DEST_PATH_IMAGE052
The pixel points corresponding to the large elements become disturbance objects;
Figure 769597DEST_PATH_IMAGE046
represents a disturbance mask in
Figure 496244DEST_PATH_IMAGE006
The value of the element.
Generally, in the attack process, if the attack fails and the number of iterations of one turn is increased, new disturbance pixel points are added.
Step S109, saving the target picture with the minimum number of the disturbed key pixel points and the minimum number of the detection target instances in the attack process.
As shown in fig. 2, in the attack process, the key pixel points that have important influence on the target detection model prediction result will change continuously. In order to effectively determine key pixel points which have larger influence on the target detection model prediction result in the picture, the neural network attribution analysis method is used for reference. In the iterative process of attack, key pixel points important for the prediction result of the target detection model are gradually added; and after the attack is successful, in order to reduce the proportion of the number of disturbed key pixel points, recovering the low-importance pixel points in the disturbed pixel point set.
For example, for a picture, the target detection model tends to predict a result that contains multiple target instances. In the attack process, a target instance may occupy a dominant position in the loss function, and the associated pixel points have important influence on the loss function. Thus, pixels selected based on attribution analysis may be primarily concerned with one target instance of the model prediction, while ignoring other target instances. Therefore, the attack method of the application does not determine all key pixel points at one time, but optimizes the disturbed key pixel points added last time to a stable state and then adds new key pixel points. And after the attack is successful, recovering pixels which are not important for the model prediction result, and reducing the proportion of the number of key pixels. In the attack process, the position mask of the disturbance pixel point is changed, and the equations (4) and (6) can be referred to.
The application relates to an iterative addition and deletion method based on key pixel points, which comprises the steps of firstly selecting key pixel points which have important influence on a target detection model prediction result based on neural network attribution analysis to carry out disturbance. After the attack is successful, based on the disturbance amplitude, the pixel points with small disturbance amplitude are recovered, and the proportion of the number of disturbance key pixel points is reduced. The following code shows the flow of the iterative addition and deletion method based on key pixel points. Wherein
Figure DEST_PATH_IMAGE076
Representing a maximum number of iterations of the attack;
Figure DEST_PATH_IMAGE078
representing the number of iterations of increasing the disturbance pixel in each round;
Figure DEST_PATH_IMAGE080
a probability threshold representing a target instance;
Figure DEST_PATH_IMAGE082
representing the number of target examples detected by the target detection model, wherein the probability value of the category of each target example is greater than a set probability threshold value
Figure 706515DEST_PATH_IMAGE080
Figure DEST_PATH_IMAGE084
Representing the opposition to the disturbance;
Figure DEST_PATH_IMAGE086
representing the update step against the perturbation. In line 11 of the following code
Figure DEST_PATH_IMAGE088
Indicating a function, when the input condition is satisfied, the function value is 1, otherwise, the function value is 0. And when the maximum iteration times are reached, stopping the iteration process.
The code of the target detection model is evaluated based on the key points by adopting a sparse attack method as follows:
inputting: probability output function of target detection model
Figure DEST_PATH_IMAGE090
(ii) a Inputting pictures
Figure DEST_PATH_IMAGE092
(ii) a Location mask of actual target instance
Figure DEST_PATH_IMAGE094
And (3) outputting: confrontation sample
Figure DEST_PATH_IMAGE096
1:
Figure DEST_PATH_IMAGE098
2: while
Figure DEST_PATH_IMAGE100
do
3:if
Figure DEST_PATH_IMAGE102
then
4:
Figure DEST_PATH_IMAGE104
5:elseif i%
Figure DEST_PATH_IMAGE106
=0 then
Assigning the values calculated by the formulas (3), (4) and (5) to
Figure DEST_PATH_IMAGE108
7:
Figure DEST_PATH_IMAGE110
= Inc(M,I,k)
8:endif
9:
Figure DEST_PATH_IMAGE112
10:
Figure DEST_PATH_IMAGE114
11:
Figure DEST_PATH_IMAGE116
12:
Figure DEST_PATH_IMAGE118
13:if
Figure DEST_PATH_IMAGE120
or((
Figure DEST_PATH_IMAGE122
) and
Figure DEST_PATH_IMAGE124
<
Figure DEST_PATH_IMAGE126
) then
14:
Figure DEST_PATH_IMAGE128
15:
Figure DEST_PATH_IMAGE130
16:
Figure DEST_PATH_IMAGE132
17:end if
18:end while
19:
Figure DEST_PATH_IMAGE134
20:return
Figure DEST_PATH_IMAGE136
The method and the device dynamically increase and recover the disturbance pixel points in the attack process, and are favorable for finding out a better disturbance pixel combination. In addition, the method determines key pixel points to be disturbed by using the idea of attribution analysis of the neural network, and can obtain higher attack success rate by using fewer disturbed pixel points. In addition, the target detection model predicts a plurality of target instances, each of which is characterized by its corresponding receptive field. In order to influence more target examples by less pixel disturbance, the method and the device use the maximum pooling operation to select the local most important pixel points for disturbance, and the scattered disturbance pixel points can realize better sparse attack, namely, the prediction results of more target examples in the picture are influenced by less disturbance pixels.
The attack apparatus for evaluating robustness of a target detection model according to the present application is described below with reference to fig. 3, where fig. 3 shows a schematic block diagram of the attack apparatus for evaluating robustness of a target detection model according to an embodiment of the present application.
As shown in fig. 3, the attack apparatus 300 for evaluating robustness of an object detection model includes: one or more memories 301 and one or more processors 302, the memory 301 having stored thereon a computer program for execution by the processor 302, the computer program, when executed by the processor 302, causing the processor 302 to perform the attack method for assessing robustness of an object detection model as described above.
The apparatus 300 may be part or all of a computer device that may implement an attack method for evaluating robustness of a target detection model by software, hardware, or a combination of software and hardware.
As shown in fig. 3, the apparatus 300 includes one or more memories 301, one or more processors 302, a display (not shown), a communication interface, and the like, which are interconnected via a bus system and/or other form of connection mechanism (not shown). It should be noted that the components and configuration of apparatus 300 shown in FIG. 3 are exemplary only, and not limiting, and that apparatus 300 may have other components and configurations as desired.
The memory 301 is used for storing various data and executable program instructions generated during the operation of the above-described method, such as algorithms for storing various application programs or implementing various specific functions. May include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc.
The processor 302 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may perform desired functions using other components in the apparatus 300.
In one example, the apparatus 300 further includes an output device that may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display device, a speaker, and the like.
The communication interface may be any interface of any presently known communication protocol, such as a wired interface or a wireless interface, wherein the communication interface may include one or more serial ports, USB interfaces, ethernet ports, WiFi, wired network, DVI interfaces, device integrated interconnect modules, or other suitable various ports, interfaces, or connections.
Furthermore, according to the embodiment of the present application, there is also provided a storage medium on which program instructions are stored, and the program instructions are used for executing the corresponding steps of the attack method for evaluating robustness of the target detection model according to the embodiment of the present application when the program instructions are executed by a computer or a processor. The storage medium may include, for example, a memory card of a smart phone, a storage component of a tablet computer, a hard disk of a personal computer, a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a portable compact disc read only memory (CD-ROM), a USB memory, or any combination of the above storage media.
The attack apparatus and the storage medium for evaluating the robustness of the target detection model according to the embodiments of the present application have the same advantages as the foregoing method because the foregoing method can be implemented.
Furthermore, according to an embodiment of the present application, there is provided a computer program that, when executed by a computer or a processor, implements an attack method for evaluating robustness of a target detection model.
Although the example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above-described example embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present application. All such changes and modifications are intended to be included within the scope of the present application as claimed in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the units is only one logical functional division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another device, or some features may be omitted, or not executed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the description of exemplary embodiments of the present application, various features of the present application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, the method of the present application should not be construed to reflect the intent: this application is intended to cover such departures from the present disclosure as come within known or customary practice in the art to which this invention pertains. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where such features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some of the modules according to embodiments of the present application. The present application may also be embodied as apparatus programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present application may be stored on a computer readable medium or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
The above description is only for the specific embodiments of the present application or the description thereof, and the protection scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope disclosed in the present application, and shall be covered by the protection scope of the present application. The protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An attack method for evaluating robustness of an object detection model, the method comprising:
acquiring at least one target picture input into a target detection model;
calculating key pixel points of the at least one target picture by adopting an initial importance matrix;
obtaining an intermediate matrix after the initial importance matrix is subjected to maximum pooling operation;
when the elements in the initial importance matrix are positioned in the target instance area and are equal to the elements corresponding to the intermediate matrix, reserving the corresponding elements, otherwise, setting the values of the corresponding elements to be 0 to obtain a final importance matrix;
selecting the key pixel points as disturbance objects according to the final importance matrix;
carrying out iterative attack on the key pixel points until the successful attack reaches the iteration times;
when the attack is successful and the iteration times are not reached, recovering the key pixel points with the disturbance amplitude smaller than a first preset number of preset values based on the disturbance amplitude;
when the attack fails and the disturbance iteration times are reached, adding a new second preset number of key pixel points;
and storing the target picture with the minimum number of disturbed key pixel points and the minimum number of detected target instances in the attack process.
2. The method of claim 1, wherein the initial importance matrix is formulated as follows:
Figure 920551DEST_PATH_IMAGE002
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE003
is shown at
Figure 297044DEST_PATH_IMAGE004
The pixel point of the position is
Figure DEST_PATH_IMAGE005
The value on the channel is then used to determine,
Figure 226953DEST_PATH_IMAGE006
is shown at
Figure 968382DEST_PATH_IMAGE004
The importance of the location pixel point(s),
Figure DEST_PATH_IMAGE007
indicates that a value, L (x, P, M), is randomly selected from 0 to 1 i ) The loss function is represented.
3. The method of claim 1, wherein the formula for the importance matrix is maximally pooled as follows;
Figure DEST_PATH_IMAGE009
wherein the content of the first and second substances,
Figure 819795DEST_PATH_IMAGE010
representing the maximum pooling operation and s represents the size of the pooling core.
4. The method of claim 1, wherein the formula for selecting the key pixel point as a perturbation object is as follows:
Figure 422814DEST_PATH_IMAGE012
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE013
is shown as
Figure 870149DEST_PATH_IMAGE014
The position mask of the perturbed pixel in the round of attack,
Figure DEST_PATH_IMAGE015
the operation of adding the disturbance pixel points is expressed, I represents the importance matrix of the pixel points, which is obtained by the attribution analysis method of the neural network,
Figure 611840DEST_PATH_IMAGE016
indicating an increased number of perturbed pixel points.
5. The method of claim 1, wherein iteratively attacking the key pixel point comprises: attacking the target detection model by adopting a sparse attack method, wherein a loss function adopted by the sparse attack method is as follows:
Figure 544024DEST_PATH_IMAGE018
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE019
() Is shown as
Figure 527898DEST_PATH_IMAGE020
An object instance is in
Figure 353772DEST_PATH_IMAGE005
Probability values over the categories.
6. The method of claim 1, wherein when the attack is successful and the iteration number is not reached, based on the magnitude of the disturbance amplitude, recovering a first preset number of the key pixel points whose disturbance amplitude is smaller than a preset value comprises: after the attack is successful, when the target detection model cannot detect any target, recovering the disturbed key pixel points, wherein a recovery formula is as follows:
Figure 938468DEST_PATH_IMAGE022
wherein the content of the first and second substances,
Figure 92369DEST_PATH_IMAGE013
denotes the first
Figure 302770DEST_PATH_IMAGE014
The position mask of the perturbed pixel in the round of attack,
Figure DEST_PATH_IMAGE023
expressing the operation of restoring the disturbed pixel points, I expressing the importance matrix of the pixel points, which is obtained by the attribution analysis method of the neural network,
Figure 560314DEST_PATH_IMAGE024
and expressing the proportion of the recovered disturbed pixel points.
7. The method of claim 1, wherein when the attack is successful and the iteration number is not reached, based on the magnitude of the disturbance amplitude, recovering a first preset number of the key pixel points whose disturbance amplitude is smaller than a preset value comprises: based on the size of the disturbance amplitude, recovering the pixel points with smaller disturbance amplitude, wherein the recovery formula is as follows:
Figure 706124DEST_PATH_IMAGE026
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE027
is shown in all the disturbed pixels, ordered from small to large in absolute value, located at
Figure 550584DEST_PATH_IMAGE024
The value of the fractional element; in the mask
Figure 548364DEST_PATH_IMAGE028
When the element is 1, the corresponding pixel is a disturbance pixel;
Figure DEST_PATH_IMAGE029
and
Figure 817672DEST_PATH_IMAGE030
respectively representing a disturbance mask and a counterdisturbance
Figure 9750DEST_PATH_IMAGE004
The value of the element.
8. The method of claim 1, wherein adding a new second predetermined number of the key pixel points when the attack fails and the number of perturbation iterations is reached comprises: for each element in the perturbation mask, increasing the number of perturbed key pixel points; the addition formula is as follows:
Figure 138243DEST_PATH_IMAGE032
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE033
expressed in descending order in the importance matrix
Figure 2031DEST_PATH_IMAGE016
A value of an element; importance matrix medium ratio
Figure 657004DEST_PATH_IMAGE033
The pixel points corresponding to the large elements are used as newly-added disturbance objects;
Figure 144617DEST_PATH_IMAGE029
represents a disturbance mask in
Figure 635772DEST_PATH_IMAGE004
The value of the element.
9. An attack apparatus for evaluating robustness of an object detection model, the apparatus comprising:
a memory and a processor, the memory having stored thereon a computer program for execution by the processor, the computer program, when executed by the processor, causing the processor to perform the attack method for assessing robustness of an object detection model as claimed in any one of claims 1 to 8.
10. A storage medium having stored thereon a computer program which, when executed by a processor, causes the processor to execute the attack method of assessing robustness of an object detection model according to any one of claims 1 to 8.
CN202210935649.7A 2022-08-05 2022-08-05 Attack method and device for evaluating robustness of target detection model Active CN114998707B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210935649.7A CN114998707B (en) 2022-08-05 2022-08-05 Attack method and device for evaluating robustness of target detection model
PCT/CN2022/137578 WO2024027068A1 (en) 2022-08-05 2022-12-08 Attack method and device for evaluating robustness of object detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210935649.7A CN114998707B (en) 2022-08-05 2022-08-05 Attack method and device for evaluating robustness of target detection model

Publications (2)

Publication Number Publication Date
CN114998707A true CN114998707A (en) 2022-09-02
CN114998707B CN114998707B (en) 2022-11-04

Family

ID=83023008

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210935649.7A Active CN114998707B (en) 2022-08-05 2022-08-05 Attack method and device for evaluating robustness of target detection model

Country Status (2)

Country Link
CN (1) CN114998707B (en)
WO (1) WO2024027068A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027068A1 (en) * 2022-08-05 2024-02-08 深圳中集智能科技有限公司 Attack method and device for evaluating robustness of object detection model

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242166A (en) * 2019-12-30 2020-06-05 南京航空航天大学 Universal countermeasure disturbance generation method
CN111539916A (en) * 2020-04-08 2020-08-14 中山大学 Image significance detection method and system for resisting robustness
CN111931707A (en) * 2020-09-16 2020-11-13 平安国际智慧城市科技股份有限公司 Face image prediction method, device, equipment and medium based on countercheck patch
CN112087420A (en) * 2020-07-24 2020-12-15 西安电子科技大学 Network killing chain detection method, prediction method and system
CN113569234A (en) * 2021-06-17 2021-10-29 南京大学 Visual evidence obtaining system for android attack scene reconstruction and implementation method
CN113780123A (en) * 2021-08-27 2021-12-10 广州大学 Countermeasure sample generation method, system, computer device and storage medium
CN113869152A (en) * 2021-09-14 2021-12-31 武汉大学 Anti-face recognition method and system based on adversarial attack
CN113979367A (en) * 2021-10-12 2022-01-28 深圳中集智能科技有限公司 Automatic identification system and method for container position
CN114270373A (en) * 2019-09-24 2022-04-01 赫尔实验室有限公司 Deep reinforcement learning based method for implicitly generating signals to spoof a recurrent neural network
CN114298190A (en) * 2021-12-20 2022-04-08 润联软件系统(深圳)有限公司 Target positioning-based attack resisting method, device, equipment and storage medium
CN114419358A (en) * 2021-10-19 2022-04-29 南京邮电大学 Confrontation sample generation method
US20220172000A1 (en) * 2020-02-25 2022-06-02 Zhejiang University Of Technology Defense method and an application against adversarial examples based on feature remapping
US20220180242A1 (en) * 2020-12-08 2022-06-09 International Business Machines Corporation Dynamic Gradient Deception Against Adversarial Examples in Machine Learning Models

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10936910B2 (en) * 2019-02-15 2021-03-02 Baidu Usa Llc Systems and methods for joint adversarial training by incorporating both spatial and pixel attacks
WO2021257817A1 (en) * 2020-06-17 2021-12-23 The Trustees Of Princeton University System and method for secure and robust distributed deep learning
CN114220097B (en) * 2021-12-17 2024-04-12 中国人民解放军国防科技大学 Screening method, application method and system of image semantic information sensitive pixel domain based on attack resistance
CN114332569B (en) * 2022-03-17 2022-05-27 南京理工大学 Low-disturbance attack resisting method based on attention mechanism
CN114998707B (en) * 2022-08-05 2022-11-04 深圳中集智能科技有限公司 Attack method and device for evaluating robustness of target detection model

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114270373A (en) * 2019-09-24 2022-04-01 赫尔实验室有限公司 Deep reinforcement learning based method for implicitly generating signals to spoof a recurrent neural network
CN111242166A (en) * 2019-12-30 2020-06-05 南京航空航天大学 Universal countermeasure disturbance generation method
US20220172000A1 (en) * 2020-02-25 2022-06-02 Zhejiang University Of Technology Defense method and an application against adversarial examples based on feature remapping
CN111539916A (en) * 2020-04-08 2020-08-14 中山大学 Image significance detection method and system for resisting robustness
CN112087420A (en) * 2020-07-24 2020-12-15 西安电子科技大学 Network killing chain detection method, prediction method and system
CN111931707A (en) * 2020-09-16 2020-11-13 平安国际智慧城市科技股份有限公司 Face image prediction method, device, equipment and medium based on countercheck patch
US20220180242A1 (en) * 2020-12-08 2022-06-09 International Business Machines Corporation Dynamic Gradient Deception Against Adversarial Examples in Machine Learning Models
CN113569234A (en) * 2021-06-17 2021-10-29 南京大学 Visual evidence obtaining system for android attack scene reconstruction and implementation method
CN113780123A (en) * 2021-08-27 2021-12-10 广州大学 Countermeasure sample generation method, system, computer device and storage medium
CN113869152A (en) * 2021-09-14 2021-12-31 武汉大学 Anti-face recognition method and system based on adversarial attack
CN113979367A (en) * 2021-10-12 2022-01-28 深圳中集智能科技有限公司 Automatic identification system and method for container position
CN114419358A (en) * 2021-10-19 2022-04-29 南京邮电大学 Confrontation sample generation method
CN114298190A (en) * 2021-12-20 2022-04-08 润联软件系统(深圳)有限公司 Target positioning-based attack resisting method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FRANCESCO CROCE等: "Sparse and imperceivable adversarial attacks", 《2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV)》 *
蒋兴浩等: "基于视觉的飞行器智能目标检测对抗攻击技术", 《空天防御》 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024027068A1 (en) * 2022-08-05 2024-02-08 深圳中集智能科技有限公司 Attack method and device for evaluating robustness of object detection model

Also Published As

Publication number Publication date
CN114998707B (en) 2022-11-04
WO2024027068A1 (en) 2024-02-08

Similar Documents

Publication Publication Date Title
CN109858461B (en) Method, device, equipment and storage medium for counting dense population
CN107220618B (en) Face detection method and device, computer readable storage medium and equipment
CN112052787B (en) Target detection method and device based on artificial intelligence and electronic equipment
CN109671020B (en) Image processing method, device, electronic equipment and computer storage medium
CN108171663B (en) Image filling system of convolutional neural network based on feature map nearest neighbor replacement
CN109800682B (en) Driver attribute identification method and related product
CN108875533B (en) Face recognition method, device, system and computer storage medium
CN110598717B (en) Image feature extraction method and device and electronic equipment
CN110263628B (en) Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN115797670A (en) Bucket wheel performance monitoring method and system based on convolutional neural network
CN114998707B (en) Attack method and device for evaluating robustness of target detection model
CN111357034A (en) Point cloud generation method, system and computer storage medium
CN110751021A (en) Image processing method, image processing device, electronic equipment and computer readable medium
JP2020504383A (en) Image foreground detection device, detection method, and electronic apparatus
CN110490058B (en) Training method, device and system of pedestrian detection model and computer readable medium
CN108875538B (en) Attribute detection method, device and system and storage medium
CN112348808A (en) Screen perspective detection method and device
CN110689496B (en) Method and device for determining noise reduction model, electronic equipment and computer storage medium
CN112364846A (en) Face living body identification method and device, terminal equipment and storage medium
CN115346125B (en) Target detection method based on deep learning
CN108764206B (en) Target image identification method and system and computer equipment
JP2021527859A (en) Irregular shape segmentation in an image using deep region expansion
CN116258643A (en) Image shadow eliminating method, device, equipment and storage medium
CN115147296A (en) Hyperspectral image correction method, device, computer equipment and storage medium
CN109587248A (en) User identification method, device, server and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant