CN115761310A - Method and system for generating customizable countermeasure patch - Google Patents

Method and system for generating customizable countermeasure patch

Info

Publication number
CN115761310A
Authority
CN
China
Prior art keywords
patch
image
countermeasure
loss function
countermeasure patch
Prior art date
Legal status
Pending
Application number
CN202211351791.3A
Other languages
Chinese (zh)
Inventor
王正
卫慧
于瀚勋
常元和
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN202211351791.3A
Publication of CN115761310A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

A method and a system for generating a customizable countermeasure patch belong to the field of image processing and adversarial attack and comprise the following steps: inputting a content target image to an image conversion network; the image conversion network converts the content target image into a countermeasure patch, wherein content features and style features are introduced into the countermeasure patch through a content loss function and a style loss function respectively, and the aggressiveness of the countermeasure patch is enhanced through a target loss function; the image conversion network outputs the countermeasure patch. The countermeasure patch is customizable, and its visual concealment is greatly improved while a good physical attack effect is obtained.

Description

Method and system for generating customizable countermeasure patch
Technical Field
The invention belongs to the field of image processing and adversarial attack, and particularly relates to a method and a system for generating a customizable countermeasure patch.
Background
An adversarial attack makes a model based on a deep neural network give a wrong judgment with high confidence by designing a special perturbation. Unlike other attacks, an adversarial attack mainly occurs in the inference stage of the model: the model itself is not changed, only its input is modified, and a high attack success rate can still be obtained.
In the current field of artificial intelligence, deep neural networks have achieved great success in many application scenarios such as image classification, object detection and image segmentation; however, they are susceptible to adversarial examples, i.e., they are vulnerable in the face of adversarial attacks. Moreover, adversarial attacks not only exist in the digital space but also occur in many real-world tasks based on deep neural networks, raising concerns about the safety of artificial intelligence.
Countermeasure patches are often used to launch adversarial attacks in the physical world. They no longer restrict the perturbation to changes that are imperceptible to humans, but instead generate a special patch in a small, local region without a perturbation constraint. Countermeasure patches are robust and universal and are widely applied to the task of attacking pedestrian detectors. However, the appearance and color distribution of current countermeasure patches are very abrupt, producing a strong visual impact; in practical applications, when an attacker launches an attack with a countermeasure patch in the physical world, the patch attracts the attention of defenders. That is, current countermeasure patches lack visual concealment: they enable an attacker to effectively evade recognition by the detector, but not recognition by humans.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a method and a system for generating a customizable countermeasure patch, wherein the countermeasure patch is customizable and its visual concealment is greatly improved while a good physical attack effect is obtained.
To achieve the above object, in one aspect, a method for generating a customizable countermeasure patch is provided, comprising the following steps:
inputting the content target image to an image conversion network;
the image conversion network converts the content target image into the countermeasure patch, wherein content features and style features are introduced into the countermeasure patch through a content loss function and a style loss function respectively, and the aggressiveness of the countermeasure patch is enhanced through a target loss function;
the image conversion network outputs a countermeasure patch.
Preferably, the image conversion network has a U-Net network structure, and the image conversion network is trained with stochastic gradient descent by minimizing a weighted combination of loss functions.
Preferably, the content target image is a cartoon image; for training the image conversion network, a cartoon image dataset is prepared in advance, and training and testing are performed on the pedestrian image dataset INRIAPerson.
Preferably, the content loss function $\mathcal{L}_{content}$ is:

$\mathcal{L}_{content}=\frac{1}{C_c H_c W_c}\left\|f_c(p)-f_c(I_{con})\right\|_2^2$

where p is the countermeasure patch, $f_c$ is the three-channel feature map of size $C_c \times H_c \times W_c$ extracted from layer c of a pre-trained VGG-16 neural network, and $I_{con}$ represents the content target image.
Preferably, style features of an artistic image are introduced into the countermeasure patch through a style loss function $\mathcal{L}_{style}$, which is:

$\mathcal{L}_{style}=\sum_{s}\frac{1}{C_s H_s W_s}\left\|G\left(f_s(p)\right)-G\left(f_s(I_{sty})\right)\right\|_2^2$

where $f_s$ is the three-channel feature map of size $C_s \times H_s \times W_s$ extracted from layer s of the pre-trained VGG-16 neural network, G denotes the Gram matrix of the depth features extracted from a series of style layers, and $I_{sty}$ represents the style target image.
Preferably, the YOLOv2 model attacked by the countermeasure patch is a single-stage object detector; each anchor box of the YOLOv2 model contains a vector $[x, y, w, h, p_{obj}, p_{class1}, p_{class2}, \ldots, p_{classn}]$, forming a triplet $[B, P_{obj}, P_{class}]$ to represent the output of the detector, and an object loss function $\mathcal{L}_{obj}$, subject to $P_{class}=0$, is defined to attack the detector.
Preferably, the target loss function is defined as:

$\mathcal{L}_{total}=\mathcal{L}_{content}+\mathcal{L}_{style}+\lambda_1\mathcal{L}_{obj}+\lambda_2\mathcal{L}_{nps}+\lambda_3\mathcal{L}_{tv}$

wherein λ₁ to λ₃ are weights of the loss units used to balance multiple objectives; $\mathcal{L}_{obj}$ denotes the target loss, $\mathcal{L}_{nps}$ denotes the non-printability-score loss, and $\mathcal{L}_{tv}$ denotes the total variation loss; $\mathcal{L}_{nps}$ and $\mathcal{L}_{tv}$ ensure that the countermeasure patch is applicable in the real world; $\mathcal{L}_{content}$, $\mathcal{L}_{style}$ and $\mathcal{L}_{tv}$ are minimized through an optimizer and control the semantic content, the style features and the texture structure respectively; $\mathcal{L}_{obj}$, $\mathcal{L}_{nps}$, $\mathcal{L}_{tv}$, $\mathcal{L}_{content}$ and $\mathcal{L}_{style}$ all denote loss units.
Preferably, during training, the image conversion network randomly scales the image onto which the countermeasure patch is pasted in each iteration.
Preferably, INRIAPerson is preprocessed to screen samples whose pedestrian height is greater than 100 pixels, and the screened images are divided into a training set and a test set.
In another aspect, a system for generating a customizable countermeasure patch is provided, including an image conversion network for converting a content target image into a countermeasure patch;
the image conversion network includes:
a content loss function module for introducing content characteristics into the countermeasure patch;
a style loss function module for introducing style features into the countermeasure patch;
a target loss function module for enhancing the aggressiveness of the countermeasure patch.
One of the above technical solutions has the following beneficial effects:
the invention can generate the self-defined and stylized countermark patch with diversified contents and good physical attack effect through the image conversion network. The image translation network balances customizability and aggressivity.
In order to improve the customizability of the countermeasure patch, the method extracts content features from the cartoon image and introduces artistic style features into the countermeasure patch. Unlike previous work that only considers the structural and textural features of the patch, the present invention makes full use of the stylized features.
A large number of experimental evaluations show that the customizable countermeasure patch provided by the invention successfully realizes effective attack on a human body detection model in digital and physical spaces, and has more aesthetic feeling in the human perception aspect compared with other countermeasure patches.
By analyzing the difference between the attack effects of patches of different styles, the invention finds and summarizes a rule: the style with higher color saturation and richness has stronger attack effect, otherwise, the style has poorer attack effect. This discovery provides a new clue for researchers to explore the mechanism of attack against patches.
Drawings
FIG. 1 is a schematic diagram of an implementation scenario of the present invention;
FIG. 2 is a schematic diagram of generating a countermeasure patch in the image conversion network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the scaling operation used when training the image conversion network according to an embodiment of the present invention;
FIG. 4 compares the visual quality and the attack effect under different weights λ₁ according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating the digital-space attack effect of the countermeasure patch according to an embodiment of the present invention;
FIG. 6 is a schematic diagram comparing the attack effect of the countermeasure patch of the embodiment of the present invention with that of other existing methods;
FIG. 7 is a schematic diagram illustrating the physical-space attack effect of the countermeasure patch according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to FIG. 1, the implementation scenario of the present invention is a countermeasure patch attack. An object detection model recognizes an input image; the initial image belongs to class A, and after the countermeasure patch is added to the initial image and the image is input into the object detection model, the output recognition result is class B or the object cannot be recognized. This type of attack, which deliberately adds interference to the input samples so that the model gives a wrong output with high confidence, is known as an adversarial attack.
The invention provides an embodiment of a method for generating a customizable countermeasure patch, comprising the steps of:
inputting the content target image to an image conversion network; the image conversion network converts the content target image into a countermeasure patch, wherein content features and style features are introduced into the countermeasure patch through a content loss function and a style loss function respectively, and the aggressiveness of the countermeasure patch is enhanced through a target loss function; the image conversion network outputs the countermeasure patch.
The image conversion network has a U-Net network structure, and training and testing are performed on the public pedestrian image dataset INRIAPerson. An embodiment of the process of generating the image conversion network is provided below; referring to FIG. 2, the specific steps are as follows:
S1, preparing the datasets. The content target image can be a cartoon image; for training the image conversion network, a cartoon image dataset can be downloaded from the Internet in advance and used for training and testing the image conversion network that generates the countermeasure patch. In this embodiment, the cartoon image dataset contains 105 images with a resolution of 3 × 256 × 256. The INRIAPerson dataset is a set of images containing standing or walking pedestrians, and it is preprocessed with a filtering condition; specifically, samples whose pedestrian height is larger than 100 pixels are screened using the bounding-box information of the annotation files. In this embodiment, 902 images are used in total, with 614 images as the training set and 288 images as the test set, to evaluate the performance of the countermeasure patch attack on the YOLOv2 model.
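As a purely illustrative sketch of this preprocessing step (the annotation format, directory layout and helper names below are assumptions, not fixed by the patent), the screening and splitting could be written as:

```python
import random
import xml.etree.ElementTree as ET
from pathlib import Path

MIN_PERSON_HEIGHT = 100  # pixels, as stated in the embodiment

def has_tall_pedestrian(xml_path: Path) -> bool:
    """Return True if any annotated person box is taller than MIN_PERSON_HEIGHT."""
    root = ET.parse(xml_path).getroot()
    for obj in root.iter("object"):
        if obj.findtext("name") != "person":
            continue
        box = obj.find("bndbox")
        height = float(box.findtext("ymax")) - float(box.findtext("ymin"))
        if height > MIN_PERSON_HEIGHT:
            return True
    return False

def split_dataset(annotation_dir: str, n_train: int = 614, seed: int = 0):
    """Screen INRIAPerson samples and split them into training and test sets (614 / 288)."""
    kept = [p for p in sorted(Path(annotation_dir).glob("*.xml")) if has_tall_pedestrian(p)]
    random.Random(seed).shuffle(kept)
    return kept[:n_train], kept[n_train:]
```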
S2, training an end-to-end image conversion network TF. A U-Net network structure is adopted so that both low-level and high-level information in the image can be used by the network; parameterized by weights W₁, the network maps the input image to the output countermeasure patch. The image conversion network is trained with stochastic gradient descent by minimizing a weighted combination of loss functions.
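A minimal, illustrative PyTorch sketch of such a transformation network is given below; the exact depth and layer widths are assumptions, since the patent only specifies a U-Net-style structure with skip connections between low-level and high-level features:

```python
import torch
import torch.nn as nn

class PatchTransformer(nn.Module):
    """Tiny U-Net-style encoder-decoder mapping a content image to a countermeasure patch."""

    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        # The decoder output is concatenated with the first encoder stage (skip connection).
        self.out = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, content_image: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(content_image)   # B x 32 x H x W   (low-level features)
        e2 = self.enc2(e1)              # B x 64 x H/2 x W/2 (high-level features)
        d1 = self.dec1(e2)              # B x 32 x H x W
        patch = torch.sigmoid(self.out(torch.cat([d1, e1], dim=1)))
        return patch                    # pixel values in [0, 1]
```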
The training process of step S2 further includes the following steps:
S201, introducing semantic content into the countermeasure patch. To address the abrupt and discordant appearance of countermeasure patches, and considering that an image with visual semantics looks more harmonious to people than meaningless graffiti, this embodiment not only improves the structural and texture features of the patch from the three angles of total image variation, patch plausibility and utilization of a seed patch, but also introduces content features into the generated countermeasure patch through the content loss function $\mathcal{L}_{content}$ (see the bottom of FIG. 2), which is defined as:

$\mathcal{L}_{content}=\frac{1}{C_c H_c W_c}\left\|f_c(p)-f_c(I_{con})\right\|_2^2$

where p is the countermeasure patch generated in the present invention, $f_c$ is the three-channel feature map of size $C_c \times H_c \times W_c$ extracted from layer c of a pre-trained VGG-16 neural network, and $I_{con}$ represents the content target image.
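A minimal sketch of this content loss with torchvision's pre-trained VGG-16 is shown below; the choice of content layer and the use of a mean-squared error (which plays the role of the 1/(C_c·H_c·W_c) normalization) are assumptions:

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor up to an assumed content layer of VGG-16 (roughly relu3_3).
_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for param in _vgg.parameters():
    param.requires_grad_(False)

def content_loss(patch: torch.Tensor, content_img: torch.Tensor) -> torch.Tensor:
    """Distance between VGG-16 features of the countermeasure patch and the content image."""
    return F.mse_loss(_vgg(patch), _vgg(content_img))
```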
S202, introducing an artistic style into the countermeasure patch to disguise it. Existing countermeasure-patch generation methods only consider adding structural and texture features, while style, an important feature of an image, is left unused. The invention combines the style transfer technique: style features of an artistic image are introduced into the countermeasure patch through the style loss function $\mathcal{L}_{style}$, which on the one hand adds aesthetic elements to the patch and on the other hand lets the artistic style serve as camouflage for the adversarial perturbation. See the bottom of FIG. 2; the style loss function is defined as:

$\mathcal{L}_{style}=\sum_{s}\frac{1}{C_s H_s W_s}\left\|G\left(f_s(p)\right)-G\left(f_s(I_{sty})\right)\right\|_2^2$

where $f_s$ is the three-channel feature map of size $C_s \times H_s \times W_s$ extracted from layer s of the pre-trained VGG-16 neural network, G denotes the Gram matrix of the depth features extracted from a series of style layers, and $I_{sty}$ represents the style target image, see the camouflage-style image in FIG. 2.
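A hedged sketch of the Gram-matrix style loss follows; the set of style layers and the normalization constant are assumptions:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a B x C x H x W feature map, normalized by C*H*W."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def style_loss(patch_feats, style_feats) -> torch.Tensor:
    """Sum of Gram-matrix distances over a list of style-layer feature maps."""
    loss = patch_feats[0].new_zeros(())
    for f_p, f_s in zip(patch_feats, style_feats):
        loss = loss + F.mse_loss(gram_matrix(f_p), gram_matrix(f_s))
    return loss
```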
S203, to increase the physical attack capability in real space, the YOLOv2 model attacked by the countermeasure patch is a single-stage object detector; each anchor box contains a vector $[x, y, w, h, p_{obj}, p_{class1}, p_{class2}, \ldots, p_{classn}]$, forming a triplet $[B, P_{obj}, P_{class}]$ to represent the output of the detector, and an object loss function $\mathcal{L}_{obj}$, subject to $P_{class}=0$, is defined to attack the detector.
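One common way to realize such an objectness-based loss against a YOLO-style detector, shown here only as an assumed sketch because the patent does not reproduce the formula, is to suppress the highest objectness score the detector still assigns to the patched image:

```python
import torch

def object_loss(detector_output: torch.Tensor) -> torch.Tensor:
    """
    detector_output: B x A x (5 + n_classes) raw YOLOv2 predictions per anchor box,
    laid out as [x, y, w, h, p_obj, p_class1, ..., p_classn] (an assumed layout).
    Minimizing the maximum objectness score pushes the detector to miss the person.
    """
    p_obj = torch.sigmoid(detector_output[..., 4])  # objectness score per anchor
    return p_obj.amax(dim=1).mean()                 # highest score per image, averaged over the batch
```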
The aggressiveness of the countermeasure patch is enhanced through the target loss function, which is defined as:

$\mathcal{L}_{total}=\mathcal{L}_{content}+\mathcal{L}_{style}+\lambda_1\mathcal{L}_{obj}+\lambda_2\mathcal{L}_{nps}+\lambda_3\mathcal{L}_{tv}$

wherein the stylized-patch terms ($\mathcal{L}_{content}$, $\mathcal{L}_{style}$) and the physical-attack terms ($\lambda_1\mathcal{L}_{obj}+\lambda_2\mathcal{L}_{nps}+\lambda_3\mathcal{L}_{tv}$) each correspond to a linear combination of loss units. λ₁ to λ₃ are weights of the loss units, used to balance the multiple objectives; $\mathcal{L}_{obj}$ denotes the target loss, $\mathcal{L}_{nps}$ denotes the non-printability-score loss, and $\mathcal{L}_{tv}$ denotes the total variation loss. $\mathcal{L}_{nps}$ and $\mathcal{L}_{tv}$ ensure that the countermeasure patch is applicable in the real world; $\mathcal{L}_{content}$, $\mathcal{L}_{style}$ and $\mathcal{L}_{tv}$, minimized through the optimizer, control the semantic content, the style features and the texture structure respectively. These loss functions ensure that the image conversion network can generate stylized countermeasure patches. The different effects on visual aesthetics and attack effect can be seen in FIG. 4.
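Putting the loss units together, one possible stochastic-gradient-descent training step might look like the sketch below; it reuses the content_loss, style_loss and object_loss sketches above, and the λ values, the paste_patch, non_printability_score and total_variation helpers, the vgg_feats multi-layer extractor and the detector wrapper are all assumptions:

```python
def train_step(tf_net, detector, vgg_feats, content_img, style_img, person_imgs,
               optimizer, lambdas=(1.0, 0.01, 0.1)):
    """One SGD step minimizing the weighted combination of loss units."""
    optimizer.zero_grad()
    patch = tf_net(content_img)                    # image conversion network output
    patched = paste_patch(person_imgs, patch)      # assumed helper: paste the patch onto pedestrians

    l_content = content_loss(patch, content_img)
    l_style = style_loss(vgg_feats(patch), vgg_feats(style_img))
    l_obj = object_loss(detector(patched))
    l_nps = non_printability_score(patch)          # assumed helper
    l_tv = total_variation(patch)                  # assumed helper

    l1, l2, l3 = lambdas
    loss = l_content + l_style + l1 * l_obj + l2 * l_nps + l3 * l_tv
    loss.backward()
    optimizer.step()
    return loss.item()
```

The optimizer here would typically be torch.optim.SGD over the parameters of the image conversion network, matching the stochastic gradient descent training described above.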
In previous studies, the size of the image input to the detector was fixed. However, in physical-world applications, the object detector may receive images of different sizes, so the images need to be scaled to an appropriate size to match the detector. Because the countermeasure patch is scaled together with the target image, this reduces its attack effectiveness.
Therefore, to enhance the robustness of the countermeasure patch attack, the image to which the patch is pasted is randomly scaled in each iteration of the training process (see FIG. 3). The scaling operation gives the generated countermeasure patch a scale-invariant attack characteristic, which is more favorable for physical attacks.
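A minimal sketch of this per-iteration scaling augmentation (the scale range is an assumption) could be:

```python
import random
import torch.nn.functional as F

def random_scale(patched_img, scale_range=(0.5, 1.5)):
    """Randomly rescale a batch of patched images in each training iteration."""
    s = random.uniform(*scale_range)
    return F.interpolate(patched_img, scale_factor=s, mode="bilinear", align_corners=False)
```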
By inputting a content target image, the image conversion network trained according to the above method can output a customized countermeasure patch. Referring to FIG. 5, combinations of different content target images and style target images can generate a variety of customizable countermeasure patches. Compared with the original image, an image with a pasted cartoon image and an image with a pasted noise image, the generated countermeasure patch effectively attacks the object detector.
The present invention also provides a system for generating a customizable countermeasure patch that can be used to implement the above method embodiments, including an image conversion network for converting a content target image into a countermeasure patch. The image conversion network comprises:
a content loss function module for introducing content characteristics into the countermeasure patch;
a style loss function module for introducing style features into the countermeasure patch;
a target loss function module for enhancing the aggressiveness of the countermeasure patch.
Table 1 below compares the factors considered by the present invention with those of existing methods for generating countermeasure patches. As can be seen from Table 1, the method for generating customizable countermeasure patches based on the style transfer technique considers significantly more factors than the existing methods; a √ in the table means the factor is considered, and a blank means it is not.
TABLE 1 (factor comparison; reproduced as an image in the original publication)
Referring to FIG. 6, average precision (AP) is reported; the lower the AP, the better the attack effect. The countermeasure patch generated by the invention is compared in attack effect with patches generated by other existing methods (including GoogleAP, DPATCH, TextureAP and LAP). The performance of TextureAP and LAP is not shown in the PR curves; their average precisions (AP) are 25.53% and 43.07%, respectively. The experimental results show that GoogleAP and DPATCH perform poorly, reducing the average precision (AP) only to 83.39% and 94.08%. In contrast, the countermeasure patch generated by the invention achieves an attack capability similar to LAP, reducing the average precision (AP) to 45.92%. In summary, from the viewpoint of attack capability, the proposed method can generate countermeasure patches that effectively attack the target detector.
Referring to FIG. 7, three countermeasure patches generated by the present invention were printed on white T-shirts and worn by a person. For comparison, the T-shirt worn by another person carried the corresponding original cartoon image. The physical-space attack results are presented in three different scenes for a comprehensive comparison. It can be observed that in all frames with different lighting and backgrounds, the object detector recognizes the person wearing the original cartoon image, which means the YOLOv2 model has a good detection effect. In contrast, the person wearing the countermeasure patch cannot be recognized by the object detector. The experimental results demonstrate the strong attack capability of the countermeasure patch generated by the method in the physical world.
Unlike common countermeasure-patch generation methods that only consider the image structure and texture features, the method of the invention makes full use of the style features of the image and greatly improves the visual aesthetics of the countermeasure patch. The image conversion network, which can convert any input image into a countermeasure patch, can generate customizable, stylized countermeasure patches with diverse content and a good physical attack effect, balancing visual attractiveness and physical attack.
The present invention is not limited to the above embodiments, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention are included in the scope of the claims of the present invention as filed.

Claims (10)

1. A method of generating a customizable countermeasure patch, comprising the steps of:
inputting the content target image to an image conversion network;
the image conversion network converts the content target image into the countermeasure patch, wherein content features and style features are introduced into the countermeasure patch through a content loss function and a style loss function respectively, and the aggressiveness of the countermeasure patch is enhanced through the target loss function;
the image conversion network outputs a countermeasure patch.
2. The method of generating a customizable countermeasure patch according to claim 1, wherein the image conversion network has a U-Net network structure and is trained with stochastic gradient descent by minimizing a weighted combination of loss functions.
3. The method of generating a customizable countermeasure patch according to claim 2, wherein the content target image is a cartoon image; a cartoon image dataset is prepared in advance for training the image conversion network, and training and testing are performed on the pedestrian image dataset INRIAPerson.
4. The method of generating a customizable countermeasure patch of claim 3, wherein the content loss function $\mathcal{L}_{content}$ is:

$\mathcal{L}_{content}=\frac{1}{C_c H_c W_c}\left\|f_c(p)-f_c(I_{con})\right\|_2^2$

where p is the countermeasure patch, $f_c$ is the three-channel feature map of size $C_c \times H_c \times W_c$ extracted from layer c of a pre-trained VGG-16 neural network, and $I_{con}$ represents the content target image.
5. The method of generating a customizable countermeasure patch according to claim 4, wherein style features of an artistic image are introduced into the countermeasure patch through a style loss function $\mathcal{L}_{style}$, which is:

$\mathcal{L}_{style}=\sum_{s}\frac{1}{C_s H_s W_s}\left\|G\left(f_s(p)\right)-G\left(f_s(I_{sty})\right)\right\|_2^2$

where $f_s$ is the three-channel feature map of size $C_s \times H_s \times W_s$ extracted from layer s of the pre-trained VGG-16 neural network, G denotes the Gram matrix of the depth features extracted from a series of style layers, and $I_{sty}$ represents the style target image.
6. The method of generating a customizable countermeasure patch as claimed in claim 5, wherein the YOLOv2 model attacked by the countermeasure patch is a single-stage object detector; each anchor box contains a vector $[x, y, w, h, p_{obj}, p_{class1}, p_{class2}, \ldots, p_{classn}]$, forming a triplet $[B, P_{obj}, P_{class}]$ to represent the output of the detector, and an object loss function $\mathcal{L}_{obj}$, subject to $P_{class}=0$, is defined to attack the detector.
7. The method of generating a customizable countermeasure patch according to claim 6, wherein the target loss function is defined as:

$\mathcal{L}_{total}=\mathcal{L}_{content}+\mathcal{L}_{style}+\lambda_1\mathcal{L}_{obj}+\lambda_2\mathcal{L}_{nps}+\lambda_3\mathcal{L}_{tv}$

wherein λ₁ to λ₃ are weights of the loss units used to balance multiple objectives; $\mathcal{L}_{obj}$ denotes the target loss, $\mathcal{L}_{nps}$ denotes the non-printability-score loss, and $\mathcal{L}_{tv}$ denotes the total variation loss; $\mathcal{L}_{nps}$ and $\mathcal{L}_{tv}$ ensure that the countermeasure patch is applicable in the real world; $\mathcal{L}_{content}$, $\mathcal{L}_{style}$ and $\mathcal{L}_{tv}$ are minimized through an optimizer and control the semantic content, the style features and the texture structure respectively; $\mathcal{L}_{obj}$, $\mathcal{L}_{nps}$, $\mathcal{L}_{tv}$, $\mathcal{L}_{content}$ and $\mathcal{L}_{style}$ all denote loss units.
8. The method of generating a customizable countermeasure patch according to claim 7, wherein during training the image conversion network randomly scales the image to which the countermeasure patch is pasted in each iteration.
9. The method of generating a customizable countermeasure patch according to claim 3, wherein INRIAPerson is preprocessed to screen samples whose pedestrian height is greater than 100 pixels, and the screened images are divided into a training set and a test set.
10. A system for generating a customizable countermeasure patch, comprising an image conversion network for converting a content target image into a countermeasure patch;
the image conversion network includes:
a content loss function module for introducing content features into the countermeasure patch;
a style loss function module for introducing style features into the countermeasure patch;
a target loss function module for enhancing the aggressiveness of the countermeasure patch.
CN202211351791.3A 2022-10-31 2022-10-31 Method and system for generating customizable countermeasure patch Pending CN115761310A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211351791.3A CN115761310A (en) 2022-10-31 2022-10-31 Method and system for generating customizable countermeasure patch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211351791.3A CN115761310A (en) 2022-10-31 2022-10-31 Method and system for generating customizable countermeasure patch

Publications (1)

Publication Number Publication Date
CN115761310A true CN115761310A (en) 2023-03-07

Family

ID=85354795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211351791.3A Pending CN115761310A (en) 2022-10-31 2022-10-31 Method and system for generating customizable countermeasure patch

Country Status (1)

Country Link
CN (1) CN115761310A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116702634A (en) * 2023-08-08 2023-09-05 南京理工大学 Full-coverage concealed directional anti-attack method
CN116702634B (en) * 2023-08-08 2023-11-21 南京理工大学 Full-coverage concealed directional anti-attack method
CN116883520A (en) * 2023-09-05 2023-10-13 武汉大学 Color quantization-based multi-detector physical domain anti-patch generation method
CN116883520B (en) * 2023-09-05 2023-11-28 武汉大学 Color quantization-based multi-detector physical domain anti-patch generation method

Similar Documents

Publication Publication Date Title
Wang et al. Fca: Learning a 3d full-coverage vehicle camouflage for multi-view physical adversarial attack
CN115761310A (en) Method and system for generating customizable countermeasure patch
Tan et al. Legitimate adversarial patches: Evading human eyes and detection models in the physical world
Zhang et al. Cd-uap: Class discriminative universal adversarial perturbation
CN108182657A (en) A kind of face-image conversion method that confrontation network is generated based on cycle
CN106504064A (en) Clothes classification based on depth convolutional neural networks recommends method and system with collocation
CN109766822B (en) Gesture recognition method and system based on neural network
CN112883874B (en) Active defense method aiming at deep face tampering
Li et al. Adaptive momentum variance for attention-guided sparse adversarial attacks
Chen et al. Query-efficient decision-based black-box patch attack
Chindaudom et al. AdversarialQR: An adversarial patch in QR code format
CN112597993A (en) Confrontation defense model training method based on patch detection
CN113378949A (en) Dual-generation confrontation learning method based on capsule network and mixed attention
CN114724189A (en) Method, system and application for training confrontation sample defense model for target recognition
KR20200094938A (en) Data imbalance solution method using Generative adversarial network
CN115640609A (en) Feature privacy protection method and device
CN113627543A (en) Anti-attack detection method
Hu et al. Deep learning for distinguishing computer generated images and natural images: A survey
Hu et al. Adversarial color film: Effective physical-world attack to dnns
CN112215151B (en) Method for enhancing anti-interference capability of target detection system by using 3D (three-dimensional) countermeasure sample
Lapid et al. Patch of invisibility: Naturalistic black-box adversarial attacks on object detectors
CN113469965A (en) Countermeasure sample generation method for limiting disturbance noise by using mask
Li et al. Backdoor Attacks to Deep Learning Models and Countermeasures: A Survey
CN116824485A (en) Deep learning-based small target detection method for camouflage personnel in open scene
CN113743231B (en) Video target detection avoidance system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination