CN112927211A - Universal adversarial attack method based on a deep 3D detector, storage medium and terminal - Google Patents

Universal adversarial attack method based on a deep 3D detector, storage medium and terminal

Info

Publication number: CN112927211A
Application number: CN202110255222.8A
Authority: CN (China)
Prior art keywords: depth, three-dimensional detector, adversarial, voxel, loss function
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN112927211B (en)
Inventors: 蔡木目心, 桑楠, 张静玉, 成日冉, 周慧, 王旭鹏
Current and original assignee: University of Electronic Science and Technology of China (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by University of Electronic Science and Technology of China
Priority to CN202110255222.8A
Publication of CN112927211A; application granted; publication of CN112927211B
Current legal status: Active

Classifications

    • G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06F18/241: Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06V10/757: Image or video recognition; matching configurations of points or features
    • G06T2207/10028: Image acquisition modality; range image, depth image, 3D point clouds
    • G06T2207/20081: Special algorithmic details; training, learning
    • G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • Y02T10/40: Climate change mitigation technologies related to transportation; engine management systems


Abstract

The invention discloses a universal adversarial attack method based on a deep three-dimensional (3D) detector, together with a storage medium and a terminal. The method comprises the following steps: acquiring the structure of the deep 3D detector to be attacked and retrieving the pre-stored adversarial voxels corresponding to that structure; at attack time, superimposing the adversarial voxels onto the lidar scene input to the deep 3D detector under attack. Storing scene modifications as adversarial voxels makes them convenient to compute and store; a modification stored in this form can be applied to any point-cloud scene, giving the method universality; and the adversarial-voxel approach largely preserves the gradient through to the detection result, so the modification can be optimized.

Description

Universal adversarial attack method based on a deep 3D detector, storage medium and terminal
Technical Field
The invention relates to the field of point-cloud adversarial attacks, and in particular to a universal adversarial attack method based on a deep 3D detector, a storage medium and a terminal.
Background
Perception techniques based on 2D images and 3D lidar data benefit from recent advances in deep neural networks and achieve encouraging performance on a variety of real-world tasks. In particular, vision-based object detection plays an important role in many safety-critical applications, such as autonomous driving. Nonetheless, Szegedy et al. found that deep models are susceptible to adversarial attacks: well-designed adversarial samples can lead a deep model to unpredictable results. The unreliability of existing deep models poses a great threat to practical applications of object detection.
In general, adversarial samples are generated by modifying pixels or points, a well-studied approach that has proven effective. Szegedy and Moosavi first studied methods for generating adversarial samples on two-dimensional images. Later, these methods were extended to 3D; many algorithms generate adversarial point-cloud data by point-by-point modification and achieve good attack success rates. However, such adversarial attacks are optimized for a single sample, which is computationally expensive and inflexible for new samples. LG-GAN designs a generative adversarial network that derives an adversarial point cloud from the geometric features of an object augmented by its label. This approach generates adversarial samples flexibly, but its effectiveness at fooling deep models is limited.
Meanwhile, 3D object detection differs from object recognition: its point-cloud data is huge, its output includes both localization and classification, point-by-point modification is hard to apply, and the computation cost is too high. Moreover, a point-by-point modification applies only to a single point-cloud sample and therefore lacks universality. Deep 3D detection networks also preprocess the point-cloud data, so conventional modification methods lose the gradient and cannot optimize the modification.
Therefore, an urgent problem in the art is to provide a universal adversarial attack method, storage medium and terminal based on a deep 3D detector that launches an instance-independent adversarial attack using generated adversarial voxels and applies it to 3D detectors with various point-cloud representations.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a universal adversarial attack method based on a deep 3D detector, a storage medium and a terminal.
The purpose of the invention is achieved through the following technical solution:
The invention provides a universal adversarial attack method based on a deep 3D detector, comprising the following steps:
acquiring the structure of the deep 3D detector to be attacked, and retrieving the pre-stored adversarial voxels corresponding to that structure;
at attack time, superimposing the adversarial voxels onto the lidar scene input to the deep 3D detector under attack.
Further, acquiring the structure of the deep 3D detector to be attacked and retrieving the pre-stored adversarial voxels corresponding to that structure includes:
acquiring the structure of the deep 3D detector to be attacked, and matching it against the pre-stored deep 3D detector structures;
after a successful match, retrieving the adversarial voxels corresponding to the matched pre-stored deep 3D detector, thereby obtaining the pre-stored adversarial voxels corresponding to the structure.
Further, acquiring the structure of the deep 3D detector to be attacked and retrieving the pre-stored adversarial voxels corresponding to that structure also includes:
when the matching fails, training with the structure of the deep 3D detector to be attacked to obtain trained adversarial voxels.
Further, the pre-stored adversarial voxels corresponding to the structure are obtained by training against the deep 3D detector to be attacked, specifically:
given an initialized adversarial voxel;
performing multiple operations on each lidar scene sample, and cyclically updating the adversarial voxel across different samples; the operations include:
adding the adversarial voxel to a lidar scene sample and inputting it into the deep 3D detector to be attacked for detection, obtaining a detection result;
from the detection result and the ground truth, using a loss function to obtain a voxel optimization vector that makes the detector fail on the current sample;
adding the voxel optimization vector to the adversarial voxel to complete the update.
Further, the deep 3D detector generates a large number of proposals using an anchor-based method and produces the final result using non-maximum suppression (NMS).
Further, the loss function includes an adversarial loss function representing the adversarial loss between the detection result of the deep 3D detector and the ground truth; the adversarial loss function involves the calculation of intersection-over-union (IoU) and confidence scores.
Further, when calculating the IoU, the adversarial loss function selects, for each prediction box, the highest IoU value computed against all ground-truth boxes.
Further, the loss function also includes a distance loss function, combined in proportion with the adversarial loss function, for limiting the overall magnitude of the adversarial voxel values; the distance loss function uses the L∞ norm.
In a second aspect of the present invention, a storage medium is provided, on which computer instructions are stored; when the instructions run, the steps of the universal adversarial attack method based on a deep 3D detector are executed.
In a third aspect of the present invention, a terminal is provided, including a memory and a processor, the memory storing computer instructions executable on the processor; when the processor runs the instructions, the steps of the universal adversarial attack method based on a deep 3D detector are executed.
The beneficial effects of the invention are:
(1) In an exemplary embodiment of the invention, the advantages of using adversarial voxels against object detection include:
(1-1) 3D object detection differs from object recognition: its point-cloud data is huge, its output includes localization and classification, point-by-point modification is hard to apply, and the computation cost is too high; by contrast, computing and saving scene modifications with the adversarial-voxel method is convenient.
(1-2) A modification stored in the form of adversarial voxels can be applied to any point-cloud scene and is therefore universal, whereas a point-by-point modification applies only to a single point-cloud sample and lacks universality.
(1-3) Deep 3D detection networks preprocess the point-cloud data, so conventional modification methods lose the gradient and cannot be optimized; the adversarial-voxel method avoids gradient loss as much as possible and preserves the gradient through to the detection result, so the adversarial voxel can be optimized.
(2) In a further exemplary embodiment of the present invention, the adversarial voxel is updated cyclically during training, which has the advantage that for each sample the adversarial voxel is optimized a sufficient number of times to ensure the adversarial effect, and the sample is only replaced when the adversarial effect meets the requirement or the iteration cap is reached.
(3) In yet another exemplary embodiment of the present invention, during training of the adversarial voxel, the adversarial loss function reduces the IoU and confidence score of the prediction, so a voxel optimization vector v_i that disables the deep 3D detector on the current sample can be found. Both the IoU and the confidence score are included in the adversarial loss function. Suppressing the IoU of the prediction shifts the result away from the correct position and turns the prediction into an erroneous result at evaluation time. At the same time, since the object detector finally outputs only those predictions whose confidence scores exceed a given threshold, suppressing the confidence score inhibits the prediction process and the number of final results.
(4) In a further exemplary embodiment of the present invention, to ensure that the final modification is small and not recognizable to the human eye, the L∞ norm is also adopted in the loss function as the distance loss, because the L∞ norm suppresses the maximum value of the disturbance, rather than its overall magnitude, better than the L2 norm does, which keeps the perturbation visually imperceptible to humans.
Drawings
FIG. 1 is a flow chart of the method provided in an exemplary embodiment of the invention;
FIG. 2 is a flow chart of the method provided in yet another exemplary embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings. It should be understood that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "upon", "when", or "in response to determining".
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Referring to FIG. 1, FIG. 1 illustrates a universal adversarial attack method based on a deep 3D detector according to an exemplary embodiment of the present invention, comprising the following steps:
acquiring the structure of the deep 3D detector to be attacked, and retrieving the pre-stored adversarial voxels corresponding to that structure;
at attack time, superimposing the adversarial voxels onto the lidar scene input to the deep 3D detector under attack.
In particular, in this exemplary embodiment, corresponding adversarial voxels are stored for various deep 3D detectors; the pre-stored adversarial voxels corresponding to the deep 3D detector to be attacked are retrieved; finally, at attack time, the adversarial voxels are superimposed onto the lidar scene input to the deep 3D detector under attack, so that the predicted positions of the attacked detector differ from the actual positions.
The advantages of employing adversarial voxels against object detection in this exemplary embodiment include: (1) 3D object detection differs from object recognition: its point-cloud data is huge, its output includes localization and classification, point-by-point modification is hard to apply, and the computation cost is too high; by contrast, computing and saving scene modifications with the adversarial-voxel method is convenient. (2) A modification stored in the form of adversarial voxels can be applied to any point-cloud scene and is therefore universal, whereas a point-by-point modification applies only to a single point-cloud sample and lacks universality. (3) Deep 3D detection networks preprocess the point-cloud data, and conventional modification methods lose the gradient and cannot be optimized: for example, a point-by-point adversarial method changes the data form during the voxelization or gridding of the 3D preprocessing stage, so the perturbation gradient is lost; the adversarial-voxel method avoids gradient loss as far as possible in the data-processing stage, and the gradient propagates through to the detection result, so the adversarial voxel can be optimized.
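The superposition step described above can be sketched as follows. This is a minimal illustration, assuming the adversarial voxel is stored as a sparse mapping from cell indices to per-point offsets; `apply_adversarial_voxels`, `origin`, and `resolution` are illustrative names, not from the patent.

```python
def apply_adversarial_voxels(points, V, origin, resolution):
    """Offset each point by the perturbation stored in the voxel cell
    it falls into; cells absent from V leave their points unchanged."""
    perturbed = []
    for (x, y, z) in points:
        cell = (int((x - origin[0]) / resolution),
                int((y - origin[1]) / resolution),
                int((z - origin[2]) / resolution))
        dx, dy, dz = V.get(cell, (0.0, 0.0, 0.0))
        perturbed.append((x + dx, y + dy, z + dz))
    return perturbed
```

Because the perturbation is indexed by spatial cell rather than by point identity, the same stored V can be applied to any scene, which is the universality claim above.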
In the general case, the deep 3D detectors to be attacked come in various point-cloud representation forms, so the structure is acquired and matched, which keeps the implementation simple.
Preferably, in an exemplary embodiment, acquiring the structure of the deep 3D detector to be attacked and retrieving the pre-stored adversarial voxels corresponding to that structure includes:
acquiring the structure of the deep 3D detector to be attacked, and matching it against the pre-stored deep 3D detector structures;
after a successful match, retrieving the adversarial voxels corresponding to the matched pre-stored deep 3D detector, thereby obtaining the pre-stored adversarial voxels corresponding to the structure.
Specifically, in this exemplary embodiment, when an attack is to be made on the detection process of a certain deep 3D detector, the structure of that detector is first acquired so as to match it against the pre-stored deep 3D detectors.
Preferably, in an exemplary embodiment, acquiring the structure of the deep 3D detector to be attacked and retrieving the pre-stored adversarial voxels corresponding to that structure also includes:
when the matching fails, training with the structure of the deep 3D detector to be attacked to obtain trained adversarial voxels.
Specifically, in this exemplary embodiment, when the deep 3D detector to be attacked is not among the stored detectors, training is performed against it to obtain an adversarial voxel, after which the adversarial attack proceeds.
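The match-or-train flow above amounts to a keyed lookup with a training fallback. A minimal sketch, assuming the detector structure can be reduced to a hashable signature; `voxel_store` and `train_fn` are illustrative placeholders, not names from the patent.

```python
def get_adversarial_voxels(structure, voxel_store, train_fn):
    """Return stored adversarial voxels for a detector structure;
    on a failed match, train new ones and cache them."""
    key = tuple(structure)          # hashable signature of the structure
    if key in voxel_store:          # matching succeeded
        return voxel_store[key]
    voxels = train_fn(structure)    # matching failed: fall back to training
    voxel_store[key] = voxels
    return voxels
```

Caching the freshly trained voxels means each detector structure only pays the training cost once, after which attacks reuse the stored result.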
Preferably, in an exemplary embodiment, the pre-stored adversarial voxels corresponding to the structure are obtained by training against the deep 3D detector to be attacked, as shown in the upper part of FIG. 2, specifically:
given an initialized adversarial voxel;
performing multiple operations on each lidar scene sample, and cyclically updating the adversarial voxel across different samples; the operations include:
adding the adversarial voxel to a lidar scene sample and inputting it into the deep 3D detector to be attacked for detection, obtaining a detection result;
from the detection result and the ground truth, using a loss function to obtain a voxel optimization vector that makes the detector fail on the current sample;
adding the voxel optimization vector to the adversarial voxel to complete the update.
In particular, in this exemplary embodiment, let S be a set of lidar scenes and P_i a point-cloud scene in S containing N points. P_i is input to a deep 3D detector D, and V(P_i) denotes the point-wise perturbation produced by applying the adversarial voxel V to scene P_i. Thus, the adversarial sample generated from P_i can be represented as P_i + V(P_i), and D(P_i) and D(P_i + V(P_i)) denote the detection results on P_i and on the adversarial sample, respectively. The main goal of this exemplary embodiment is to find an adversarial voxel V that disables the deep 3D detector D on most scenes in the point-cloud scene set S while remaining imperceptible to humans.
The objective is defined as:

D(P_i) ≠ D(P_i + V(P_i)),  subject to ||V||_p < ξ,

where ξ denotes an upper bound on the modification in the adversarial voxel V.
The adversarial voxel V has W × H × L cells, each corresponding to the adversarial value of one block of real space. Each point-cloud scene P_i is voxelized into W × H × L parts at resolution m, so that each point in space belongs to a particular voxel. In one exemplary embodiment, measured from the lidar emission point, the space of length 0-80 m, width -35 m to 35 m and height 4 m is taken as the adversarial space, and the resolution is set to 0.1 m, corresponding to a voxel grid of size 800 × 700 × 35.
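The voxelization just described maps each point to one of the W × H × L cells. A sketch under the stated extent and 0.1 m resolution; the function names are illustrative, and `grid_shape` uses rounding on the assumption that each axis extent is a whole multiple of the resolution.

```python
def grid_shape(extent, resolution):
    """Cells per axis; assumes each axis extent is a multiple of the resolution."""
    return tuple(round((hi - lo) / resolution) for (lo, hi) in extent)

def cell_of(point, extent, resolution):
    """Cell index of a point, or None if it lies outside the adversarial space."""
    idx = []
    for coord, (lo, hi) in zip(point, extent):
        if not (lo <= coord < hi):
            return None
        idx.append(int((coord - lo) / resolution))
    return tuple(idx)
```

For the stated x range (0-80 m) and y range (-35 m to 35 m) this gives 800 and 700 cells per axis, matching the first two grid dimensions given in the text.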
After initializing the adversarial voxel V, V is applied to a scene in S, and the loss function is used to make the prediction fail, generating a voxel optimization vector v_i. The adversarial voxel is updated as V = V + v_i, and the updated V is then used on the next sample, updating the adversarial voxel in this cycle.
Specifically, the adversarial voxel V is initialized from a truncated normal distribution (which has the advantage that no initialization value differs too much from the mean). The adversarial voxel V is then added to the i-th sample, and the voxel optimization vector v_i is obtained by optimizing the loss function over the detector's prediction and the ground truth. This process is repeated, generating v_i, until the deep 3D detector D mis-detects or the iteration cap (i.e., the aforementioned preset condition) is reached, at which point v_i is used to update V. The updated V is then added to the (i+1)-th sample to update V again, until all samples are used, producing the final adversarial voxel V.
Updating the adversarial voxel V cyclically by adding the voxel optimization vector v_i has the advantage of making the attack effective on more samples: for each sample, the adversarial voxel is optimized enough times to ensure a good adversarial effect, and the sample is replaced only when the adversarial effect meets the requirement or the iteration cap is reached.
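The cyclic update can be sketched as follows, with `still_detects` and `optimization_vector` standing in for the detector check and the loss-driven step that the patent's loss-function embodiments provide; all names here are illustrative assumptions.

```python
def train_adversarial_voxels(scenes, still_detects, optimization_vector,
                             init_voxels, max_iters=10):
    """For each sample, keep optimizing V until the detector fails on it
    or the iteration cap is hit, then move to the next sample (V <- V + v_i)."""
    V = list(init_voxels)
    for scene in scenes:
        for _ in range(max_iters):
            if not still_detects(scene, V):   # adversarial effect reached
                break
            v_i = optimization_vector(scene, V)
            V = [a + b for a, b in zip(V, v_i)]   # update V by adding v_i
    return V
```

Because V is carried over from sample to sample rather than reset, later samples refine a perturbation that already works on earlier ones, which is what makes the result sample-independent.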
Preferably, in an exemplary embodiment, the deep 3D detector generates a number of proposals using an anchor-based method and produces the final result using NMS, where the anchor method employs prediction boxes/bounding boxes.
More preferably, in an exemplary embodiment, the loss function comprises an adversarial loss function L_adv, which represents the adversarial loss between the detection result of the deep 3D detector and the ground truth and involves the calculation of IoU and confidence scores.
Specifically, since the deep 3D detector D generates a large number of proposals using the anchor method and produces the final result using NMS, the adversarial loss function should suppress the IoU and the confidence score at the same time. The calculation includes all proposals exceeding the IoU or confidence-score threshold. (Many proposals are generated during the detection phase, each with its own IoU and confidence score; the loss function sets an IoU threshold and a confidence-score threshold, considers only proposals whose IoU exceeds the IoU threshold or whose confidence score exceeds the confidence-score threshold, and then sums the loss over them by formula.) By reducing the IoU and confidence score of the prediction, a voxel optimization vector v_i that invalidates the deep 3D detector D on the current sample can be found. Adding v_i to the adversarial voxel V completes the cyclic update of the voxel, which makes the attack effective on more samples.
Here IoU stands for Intersection over Union: the ratio of the intersection to the union of the predicted bounding box and the ground-truth bounding box.
The adversarial loss is expressed as:

L_adv = Σ_{p_i ∈ D(P)} ( max_{p̂ ∈ p} IoU(p_i, p̂) + s_i )

where p_i denotes the i-th bounding box among all detection results of the deep 3D detector D, s_i denotes the confidence score of that bounding box, and p denotes all ground-truth bounding boxes of the sample. More preferably, in an exemplary embodiment, when calculating the adversarial loss function, the IoU term for each prediction box takes the highest value computed against all ground-truth boxes.
Both the IoU and the confidence score are included in the adversarial loss function. Suppressing the IoU of the prediction shifts the result away from the correct position and turns the prediction into an erroneous result at evaluation time. Since the object detector finally outputs only those predictions whose confidence scores exceed a given threshold, suppressing the confidence score inhibits the prediction process and the number of final results.
Preferably, in an exemplary embodiment, the loss function also comprises a distance loss function L_dis, combined in proportion with the adversarial loss function L_adv, used to limit the overall magnitude of the adversarial voxel values; the distance loss function uses the L∞ norm.
In particular, in this exemplary embodiment, to ensure that the final modification is small and not recognizable to the human eye, the size of the overall change must be limited.
L_dis limits the overall magnitude of the adversarial modification. Here the L∞ norm is chosen as L_dis, because the L∞ norm suppresses the maximum value of the adversarial modification, rather than its overall magnitude, better than the L2 norm does, which keeps the perturbation visually imperceptible to humans. Thus the distance loss L_dis is:

L_dis = ||V||_∞

where ||V||_∞ denotes the maximum perturbation value in the whole voxel.
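The L∞ distance loss reduces to taking the single largest absolute perturbation value, in contrast with the L2 norm, which accumulates the overall energy. A minimal sketch of the two, for comparison:

```python
import math

def l_inf(values):
    """L-infinity norm: the largest absolute perturbation in the voxel."""
    return max((abs(v) for v in values), default=0.0)

def l_2(values):
    """L2 norm, shown for contrast: penalizes the summed energy instead."""
    return math.sqrt(sum(v * v for v in values))
```

Minimizing `l_inf` drives down the worst single cell, so no one voxel carries a large, visible spike, which is the visual-imperceptibility argument made above.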
Thus, the overall loss function can be described as:

L = L_adv + λ L_dis

where L_adv represents the adversarial loss between the detector's prediction and the ground truth, L_dis represents the overall magnitude of the adversarial modification, and λ is the coefficient weighting the two terms. Each sample is cycled in this manner to minimize the loss function until the deep 3D detector D fails to detect it.
In a second aspect of the present invention, a storage medium is provided, on which computer instructions are stored; when the instructions run, the steps of the universal adversarial attack method based on a deep 3D detector are executed.
In a third aspect of the present invention, a terminal is provided, including a memory and a processor, the memory storing computer instructions executable on the processor; when the processor runs the instructions, the steps of the universal adversarial attack method based on a deep 3D detector are executed.
Based on this understanding, the technical solution of the embodiments, in essence or in the part that contributes to the prior art, may be embodied in the form of a software product; the software product is stored in a storage medium and includes several instructions for causing a device to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be understood that the above-described embodiments are illustrative only and do not limit the scope of the invention; in light of the above teachings, persons skilled in the art may make various other modifications and variations, which need not and cannot be exhaustively enumerated here. Any obvious variation or modification that does not depart from the spirit of the invention falls within the scope of protection of the invention.

Claims (10)

1. A universal adversarial attack method based on a depth three-dimensional detector, characterized in that the method comprises the following steps:
acquiring the structure of the depth three-dimensional detector to be attacked, and obtaining the pre-stored adversarial voxel corresponding to that structure;
when carrying out the attack, superimposing the adversarial voxel on the lidar scene input to the depth three-dimensional detector to be attacked.
2. The universal adversarial attack method based on a depth three-dimensional detector according to claim 1, characterized in that acquiring the structure of the depth three-dimensional detector to be attacked and obtaining the pre-stored adversarial voxel corresponding to the structure comprises:
acquiring the structure of the depth three-dimensional detector to be attacked, and matching it against pre-stored depth three-dimensional detector structures;
and, upon a successful match, obtaining the adversarial voxel corresponding to the matched pre-stored depth three-dimensional detector, thereby obtaining the pre-stored adversarial voxel corresponding to the structure.
3. The universal adversarial attack method based on a depth three-dimensional detector according to claim 2, characterized in that acquiring the structure of the depth three-dimensional detector to be attacked and obtaining the pre-stored adversarial voxel corresponding to the structure further comprises:
when the matching fails, training with the structure of the depth three-dimensional detector to be attacked to obtain a trained adversarial voxel.
4. The universal adversarial attack method based on a depth three-dimensional detector according to claim 1 or 3, characterized in that the pre-stored adversarial voxel corresponding to the structure is obtained by training against the depth three-dimensional detector to be attacked, specifically comprising:
given an initialized adversarial voxel;
performing the following operations multiple times on each lidar scene sample, cycling through different samples to complete the update of the adversarial voxel; the operations comprise:
adding the adversarial voxel to a lidar scene sample, and inputting it into the depth three-dimensional detector to be attacked for detection to obtain a detection result;
using a loss function on the detection result and the ground truth to obtain a voxel optimization vector that makes the current sample fail;
and adding the voxel optimization vector to the adversarial voxel to complete the update of the adversarial voxel.
5. The universal adversarial attack method based on a depth three-dimensional detector, characterized in that the depth three-dimensional detector generates a number of proposals using an anchor-based method and produces the final results using NMS (non-maximum suppression).
6. The universal adversarial attack method based on a depth three-dimensional detector, characterized in that the loss function includes an adversarial loss function representing the adversarial loss between the detection result of the depth three-dimensional detector and the ground truth, the adversarial loss function including the computation of an intersection-over-union (IoU) and of a confidence score.
7. The universal adversarial attack method based on a depth three-dimensional detector according to claim 6, characterized in that, when computing the intersection-over-union, the adversarial loss function selects, for each prediction, the highest value among the intersection-over-union values computed against all ground-truth boxes.
8. The universal adversarial attack method based on a depth three-dimensional detector, characterized in that the loss function further includes a distance loss function combined in proportion with the adversarial loss function; the distance loss function limits the overall size of the adversarial voxel values and employs the L∞ norm.
9. A storage medium having computer instructions stored thereon, characterized in that the computer instructions, when run, execute the steps of the universal adversarial attack method based on a depth three-dimensional detector according to any one of claims 1 to 8.
10. A terminal, comprising a memory and a processor, the memory storing computer instructions executable on the processor, characterized in that the processor, when executing the computer instructions, performs the steps of the universal adversarial attack method based on a depth three-dimensional detector according to any one of claims 1 to 8.
CN202110255222.8A 2021-03-09 2021-03-09 Universal attack countermeasure method based on depth three-dimensional detector, storage medium and terminal Active CN112927211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110255222.8A CN112927211B (en) 2021-03-09 2021-03-09 Universal attack countermeasure method based on depth three-dimensional detector, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110255222.8A CN112927211B (en) 2021-03-09 2021-03-09 Universal attack countermeasure method based on depth three-dimensional detector, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN112927211A true CN112927211A (en) 2021-06-08
CN112927211B CN112927211B (en) 2023-08-25

Family

ID=76172178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110255222.8A Active CN112927211B (en) 2021-03-09 2021-03-09 Universal attack countermeasure method based on depth three-dimensional detector, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112927211B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610904A (en) * 2021-07-19 2021-11-05 广州大学 Method, system, computer and medium for generating three-dimensional (3D) local point cloud countermeasure sample
CN113808165A (en) * 2021-09-14 2021-12-17 电子科技大学 Point disturbance attack resisting method facing three-dimensional target tracking model
CN114282437A (en) * 2021-12-23 2022-04-05 浙江大学 Physically-realizable laser radar 3D point cloud countersample generation method and system

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160248796A1 (en) * 2013-08-23 2016-08-25 The Boeing Company System and method for discovering optimal network attack paths
CN108229682A (en) * 2018-02-07 2018-06-29 深圳市唯特视科技有限公司 A kind of image detection countercheck based on backpropagation attack
CN108734728A (en) * 2018-04-25 2018-11-02 西北工业大学 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image
US20190021677A1 (en) * 2017-07-18 2019-01-24 Siemens Healthcare Gmbh Methods and systems for classification and assessment using machine learning
US20190238568A1 (en) * 2018-02-01 2019-08-01 International Business Machines Corporation Identifying Artificial Artifacts in Input Data to Detect Adversarial Attacks
CA3033014A1 (en) * 2018-02-07 2019-08-07 Royal Bank Of Canada Robust pruned neural networks via adversarial training
CN110598400A (en) * 2019-08-29 2019-12-20 浙江工业大学 Defense method for high hidden poisoning attack based on generation countermeasure network and application
CN110866287A (en) * 2019-10-31 2020-03-06 大连理工大学 Point attack method for generating countercheck sample based on weight spectrum
CN111080659A (en) * 2019-12-19 2020-04-28 哈尔滨工业大学 Environmental semantic perception method based on visual information
US20200265271A1 (en) * 2019-02-15 2020-08-20 Baidu Usa Llc Systems and methods for joint adversarial training by incorporating both spatial and pixel attacks
CN111627044A (en) * 2020-04-26 2020-09-04 上海交通大学 Target tracking attack and defense method based on deep network
US20200294257A1 (en) * 2019-03-16 2020-09-17 Nvidia Corporation Leveraging multidimensional sensor data for computationally efficient object detection for autonomous machine applications
WO2020199577A1 (en) * 2019-03-29 2020-10-08 北京市商汤科技开发有限公司 Method and device for living body detection, equipment, and storage medium
CN111914946A (en) * 2020-08-19 2020-11-10 中国科学院自动化研究所 Countermeasure sample generation method, system and device for outlier removal method
US20200410228A1 (en) * 2019-06-28 2020-12-31 Baidu Usa Llc Systems and methods for fast training of more robust models against adversarial attacks
CN112307881A (en) * 2019-07-30 2021-02-02 拉皮斯坎实验室股份有限公司 Multi-model detection of objects
CN112348908A (en) * 2019-08-07 2021-02-09 西门子医疗有限公司 Shape-based generative countermeasure network for segmentation in medical imaging

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160248796A1 (en) * 2013-08-23 2016-08-25 The Boeing Company System and method for discovering optimal network attack paths
US20190021677A1 (en) * 2017-07-18 2019-01-24 Siemens Healthcare Gmbh Methods and systems for classification and assessment using machine learning
US20190238568A1 (en) * 2018-02-01 2019-08-01 International Business Machines Corporation Identifying Artificial Artifacts in Input Data to Detect Adversarial Attacks
CN108229682A (en) * 2018-02-07 2018-06-29 深圳市唯特视科技有限公司 A kind of image detection countercheck based on backpropagation attack
CA3033014A1 (en) * 2018-02-07 2019-08-07 Royal Bank Of Canada Robust pruned neural networks via adversarial training
CN108734728A (en) * 2018-04-25 2018-11-02 西北工业大学 A kind of extraterrestrial target three-dimensional reconstruction method based on high-resolution sequence image
US20200265271A1 (en) * 2019-02-15 2020-08-20 Baidu Usa Llc Systems and methods for joint adversarial training by incorporating both spatial and pixel attacks
US20200294257A1 (en) * 2019-03-16 2020-09-17 Nvidia Corporation Leveraging multidimensional sensor data for computationally efficient object detection for autonomous machine applications
WO2020199577A1 (en) * 2019-03-29 2020-10-08 北京市商汤科技开发有限公司 Method and device for living body detection, equipment, and storage medium
US20200410228A1 (en) * 2019-06-28 2020-12-31 Baidu Usa Llc Systems and methods for fast training of more robust models against adversarial attacks
CN112307881A (en) * 2019-07-30 2021-02-02 拉皮斯坎实验室股份有限公司 Multi-model detection of objects
US20210034865A1 (en) * 2019-07-30 2021-02-04 Rapiscan Laboratories, Inc. Multi-Model Detection of Objects
CN112348908A (en) * 2019-08-07 2021-02-09 西门子医疗有限公司 Shape-based generative countermeasure network for segmentation in medical imaging
US20210038198A1 (en) * 2019-08-07 2021-02-11 Siemens Healthcare Gmbh Shape-based generative adversarial network for segmentation in medical imaging
CN110598400A (en) * 2019-08-29 2019-12-20 浙江工业大学 Defense method for high hidden poisoning attack based on generation countermeasure network and application
CN110866287A (en) * 2019-10-31 2020-03-06 大连理工大学 Point attack method for generating countercheck sample based on weight spectrum
CN111080659A (en) * 2019-12-19 2020-04-28 哈尔滨工业大学 Environmental semantic perception method based on visual information
CN111627044A (en) * 2020-04-26 2020-09-04 上海交通大学 Target tracking attack and defense method based on deep network
CN111914946A (en) * 2020-08-19 2020-11-10 中国科学院自动化研究所 Countermeasure sample generation method, system and device for outlier removal method

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
J. TU et al.: "Physically realizable adversarial examples for lidar object detection", IEEE/CVF Conference on Computer Vision and Pattern Recognition, p. 13713 *
CAO Yudong et al.: "No-reference image quality assessment algorithm based on enhanced adversarial learning", Journal of Computer Applications, vol. 40, no. 11, 30 November 2020 (2020-11-30), pp. 3167-3168 *
CAO Yuhong et al.: "Survey of medical image segmentation based on deep learning", Journal of Computer Applications, vol. 41, no. 8, pp. 2273-2287 *
LUO Yushan: "Research and design of key technologies for CNN-based vehicle object detection", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 12, 15 December 2019 (2019-12-15), pp. 42-43 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610904A (en) * 2021-07-19 2021-11-05 广州大学 Method, system, computer and medium for generating three-dimensional (3D) local point cloud countermeasure sample
CN113610904B (en) * 2021-07-19 2023-10-20 广州大学 3D local point cloud countermeasure sample generation method, system, computer and medium
CN113808165A (en) * 2021-09-14 2021-12-17 电子科技大学 Point disturbance attack resisting method facing three-dimensional target tracking model
CN113808165B (en) * 2021-09-14 2023-06-13 电子科技大学 Point disturbance anti-attack method for three-dimensional target tracking model
CN114282437A (en) * 2021-12-23 2022-04-05 浙江大学 Physically-realizable laser radar 3D point cloud countersample generation method and system
CN114282437B (en) * 2021-12-23 2024-05-17 浙江大学 Physical-realizable laser radar 3D point cloud countermeasure sample generation method and system

Also Published As

Publication number Publication date
CN112927211B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN112927211A (en) Universal anti-attack method based on depth three-dimensional detector, storage medium and terminal
CN113110509B (en) Warehousing system multi-robot path planning method based on deep reinforcement learning
CN110222831A (en) Robustness appraisal procedure, device and the storage medium of deep learning model
CN111523422B (en) Key point detection model training method, key point detection method and device
WO2020062911A1 (en) Actor ensemble for continuous control
Lee et al. Gaussianmask: Uncertainty-aware instance segmentation based on gaussian modeling
CN111507369B (en) Space learning method and device for automatic driving vehicle, and testing method and device
CN110503113B (en) Image saliency target detection method based on low-rank matrix recovery
CN110647992A (en) Training method of convolutional neural network, image recognition method and corresponding devices thereof
Liu et al. Slowlidar: Increasing the latency of lidar-based detection using adversarial examples
CN112365582B (en) Countermeasure point cloud generation method, storage medium and terminal
CN106600613B (en) Improvement LBP infrared target detection method based on embedded gpu
CN112232434A (en) Attack-resisting cooperative defense method and device based on correlation analysis
CN113569611A (en) Image processing method, image processing device, computer equipment and storage medium
CN115984439A (en) Three-dimensional countertexture generation method and device for disguised target
CN113486871B (en) Unmanned vehicle local autonomous control method, device and equipment based on depth map
CN114973235A (en) Method for generating countermeasure point cloud based on disturbance added in geometric feature field
Badrloo et al. A novel region-based expansion rate obstacle detection method for MAVs using a fisheye camera
CN116659516B (en) Depth three-dimensional attention visual navigation method and device based on binocular parallax mechanism
CN110749325B (en) Flight path planning method and device
CN117456391A (en) Combined detection method for ground military target and key parts of ground military target through unmanned aerial vehicle
Nalpantidis et al. Stereovision-based algorithm for obstacle avoidance
CN115909027B (en) Situation estimation method and device
CN116310681A (en) Unmanned vehicle passable area prediction method and system based on multi-frame point cloud fusion
Gao et al. More robust object tracking via shape and motion cue integration

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant