CN113838211B - 3D point cloud classification attack defense method, device, equipment and storage medium - Google Patents

3D point cloud classification attack defense method, device, equipment and storage medium

Info

Publication number
CN113838211B
Authority
CN
China
Prior art keywords
point cloud
sample
point
cloud sample
distance
Prior art date
Legal status: Active
Application number
CN202111081192.XA
Other languages
Chinese (zh)
Other versions
CN113838211A (en)
Inventor
唐可可
吴坚鹏
史亚文
苗丁锐博
娄添瑞
顾钊铨
李默涵
李树栋
仇晶
韩伟红
田志宏
殷丽华
Current Assignee
Guangzhou University
Original Assignee
Guangzhou University
Application filed by Guangzhou University
Priority to CN202111081192.XA
Publication of CN113838211A
Application granted
Publication of CN113838211B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20Finite element generation, e.g. wire-frame surface description, tesselation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The application relates to a 3D point cloud classification attack defense method, device, equipment and storage medium. The method comprises: obtaining an original point cloud sample input to a classification model, and performing point deletion and point perturbation on it to obtain a preprocessed point cloud sample; inputting the preprocessed point cloud sample into an encoder, and learning its geometric features based on a DGCNN network structure; inputting the geometric features of the resulting feature point cloud sample into a decoder, which reconstructs a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid; and iteratively training the reconstructed point cloud sample while constraining both the distance between the original and reconstructed point cloud samples and the distance between each point in the reconstructed sample and a preset number of its nearest neighbor points, until the trained output sample approaches the original point cloud sample, whereupon the output sample is output to replace the original point cloud sample. This addresses the problem that 3D point cloud neural networks are easily attacked by adversarial samples, with the effect of improving the defense performance of the 3D point cloud neural network.

Description

3D point cloud classification attack defense method, device, equipment and storage medium
Technical Field
The application relates to the technical field of deep learning defense, in particular to a 3D point cloud classification attack defense method, device, equipment and storage medium.
Background
With the popularization of 3D sensors such as lidar and depth cameras, research on 3D vision has attracted growing attention, and deep learning methods applied to the point cloud data acquired by 3D sensors are widely used across many fields. Deep learning on 3D point cloud data shows superior performance on many machine learning problems, particularly classification.
Currently, attacks on depth models in the image field have been widely studied, but attacks on 3D point cloud depth models in fields such as autonomous driving, robotics, and graphics have rarely been studied. 3D point cloud deep neural networks in these fields are easily attacked by adversarial samples: an attacker can cause a deep neural network to misidentify 3D point cloud data by applying imperceptible perturbations to the input sample.
With respect to the related art, the inventors consider that 3D point cloud neural networks in fields such as autonomous driving, robotics, and graphics are easily attacked by adversarial samples and suffer from poor defensive performance.
Disclosure of Invention
In order to improve the defending performance of the 3D point cloud neural network, the application provides a 3D point cloud classification attack defending method, device, equipment and storage medium.
In a first aspect, the present application provides a 3D point cloud classification attack defense method, which has a feature of improving the defense performance of a 3D point cloud neural network.
The application is realized by the following technical scheme:
A 3D point cloud classification attack defense method comprises the following steps:
acquiring an original point cloud sample input into a classification model, and deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
inputting the preprocessed point cloud sample into an encoder, and learning geometric features of the preprocessed point cloud sample based on a DGCNN network structure to obtain a feature point cloud sample containing the geometric features;
inputting the geometric features of the feature point cloud samples into a decoder, reconstructing three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating a reconstructed point cloud sample;
iteratively training the reconstructed point cloud sample, and constraining the distance between the original point cloud sample and the reconstructed point cloud sample as well as the distance between each point in the reconstructed point cloud sample and a preset number of its nearest neighbor points, until the trained output sample approaches the original point cloud sample and is then output;
causing the output sample to replace the original point cloud sample so as to defend against attacks by adversarial samples.
The present application may be further configured in a preferred example to: the step of learning the geometric features of the preprocessed point cloud sample based on the DGCNN network structure comprises the following steps:
each point in the preprocessed point cloud sample is used as a vertex of the DGCNN network structure, a point-to-point relationship is established, and a distance between the points is obtained;
calculating the distance between each point and a preset number of its nearest neighbor points, taking the calculation result as the distance feature of each point, performing a dimension-raising treatment on the distance feature of each point, and obtaining the local feature of each point through a dimension-reduction operation;
performing iterative operation on the local features to obtain multi-scale local features;
and fusing the local features of each layer of the multi-scale local features, and performing a dimension-reduction operation after fusing to obtain global features containing the local features of each point.
The present application may be further configured in a preferred example to: the step of reconstructing the three-dimensional point cloud from the two-dimensional manifold space based on the two-dimensional regular grid comprises the following steps:
fusing the dimension of each point of the two-dimensional regular grid with the global feature containing the local feature, wherein the fusion result is used as the fusion feature of the global feature containing the local feature and the two-dimensional regular grid;
and performing dimension reduction operation based on the fusion characteristics to reduce dimension to three dimensions and reconstruct a three-dimensional reconstruction point cloud sample.
The present application may be further configured in a preferred example to: the step of limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and the preset number of nearest neighbor points thereof comprises the following steps:
setting the expression of a chamfer distance loss function to constrain the distance between the original point cloud sample and the reconstructed point cloud sample;
setting the expression of a nearest neighbor distance loss function to constrain the distance between each point in the reconstructed point cloud sample and a preset number of its nearest neighbor points;
summing the chamfer distance loss function and the nearest neighbor distance loss function to obtain the relational expression of their sum, so as to constrain the iteratively trained reconstructed point cloud sample.
The present application may be further configured in a preferred example to: when the trained output sample approaches the original point cloud sample, the sum of the chamfer distance loss function and the nearest neighbor distance loss function converges.
The present application may be further configured in a preferred example to: the step of deleting the points of the original point cloud sample comprises the following steps:
determining a plurality of center points of the original point cloud sample through random sampling;
and deleting the center point and a preset number of points nearest to it.
The present application may be further configured in a preferred example to: the step of performing point perturbation on the original point cloud sample comprises the following steps:
and presetting a motion range, and enabling the original point cloud sample to randomly move in the motion range.
In a second aspect, the application provides a 3D point cloud classification attack defense device, which has the characteristic of improving the defense performance of a 3D point cloud neural network.
The application is realized by the following technical scheme:
a 3D point cloud classification attack defense device, comprising:
the preprocessing module is used for acquiring an original point cloud sample input into the classification model, deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
the feature module is used for inputting the preprocessed point cloud sample into the encoder, learning the geometric features of the preprocessed point cloud sample based on the DGCNN network structure, and obtaining a feature point cloud sample containing the geometric features;
the reconstruction module is used for inputting the geometric features of the feature point cloud sample into a decoder, reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating a reconstructed point cloud sample;
the training module is used for iteratively training the reconstructed point cloud sample, constraining the distance between the original point cloud sample and the reconstructed point cloud sample as well as the distance between each point in the reconstructed point cloud sample and a preset number of its nearest neighbor points, until the trained output sample approaches the original point cloud sample and is then output;
and the defense module is used for causing the output sample to replace the original point cloud sample so as to defend against attacks by adversarial samples.
In a third aspect, the present application provides a computer device, which has a feature of improving defense performance of a 3D point cloud neural network.
The application is realized by the following technical scheme:
a computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method described above when the computer program is executed.
In a fourth aspect, the present application provides a computer readable storage medium, which has a characteristic of improving the defense performance of a 3D point cloud neural network.
The application is realized by the following technical scheme:
a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method described above.
By adopting the above technical solution, the original point cloud sample input to the classification model is subjected to point deletion and perturbation: for example, several center points are determined by random sampling, each center point and the 5 points nearest to it are deleted, and every remaining point then performs a small-range random movement. This preprocessing simulates the point perturbation produced by gradient-based or optimization-based attacks, and thus simulates adversarial samples, so that the defense network learns to reconstruct point cloud adversarial samples subjected to point-deletion or point-perturbation attacks into point clouds close to the original sample.
The preprocessed point cloud sample is input into the encoder, which learns its geometric features based on the DGCNN network structure. Applying the idea of graph convolutional networks, local features are obtained, multi-scale local point cloud features are obtained through iterative operations, and the local features of each layer are then fused and dimension-reduced to obtain global features containing the local features of the point cloud, yielding a feature point cloud sample. The learned geometric features are therefore richer, which is conducive to reconstructing a more detailed point cloud.
The feature point cloud sample is input into the decoder, which reconstructs a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, converting the spatially random and uneven three-dimensional point set into an ordered, uniformly distributed regular parameterization on a two-dimensional manifold, accurately establishing a shape feature description of the three-dimensional point cloud and generating a reconstructed point cloud sample, which is conducive to reconstructing a sample close to the original point cloud.
The reconstructed point cloud sample is trained iteratively, with a purpose-designed chamfer distance loss function and nearest neighbor distance loss function constraining the distance between the original and reconstructed samples and the distance between each reconstructed point and a preset number of its nearest neighbors, so that the output sample becomes closer to the original point cloud sample, outlier points are reduced, the three-dimensional surface of the output point cloud is smoother, and the classification accuracy of the classifier model is improved. When the output sample approaches the original point cloud sample, it replaces the original point cloud sample so as to defend against adversarial attacks.
In this way, the application fully learns from point cloud samples that may have undergone point-deletion or point-perturbation attacks by encoding them, extracts global point cloud features containing local features, reconstructs the extracted features from a two-dimensional manifold space through the decoder and the two-dimensional regular grid, and trains point clouds close to clean samples based on the loss function, thereby defending against adversarial samples. It achieves the effect of defending against both point-deletion and point-perturbation attacks without affecting the recognition accuracy of the model on defended point cloud samples, can be used with any classification model, and has very strong universality.
In summary, the present application includes at least one of the following beneficial technical effects:
1. the 3D point cloud classification attack defense method encodes and learns from point cloud samples that may have undergone point-deletion or point-perturbation attacks to obtain a feature point cloud sample containing geometric features, reconstructs the sample from those geometric features, and trains point clouds close to clean samples based on the reconstruction, achieving the effect of defending against point-deletion and point-perturbation attacks without affecting the recognition accuracy of the model on defended point cloud samples, and with strong universality;
2. the encoder learns the geometric features of the preprocessed point cloud sample based on the DGCNN network structure and obtains global features containing the local features of the point cloud, so the learned geometric features are richer, which is conducive to reconstructing a more detailed point cloud;
3. the decoder reconstructs the feature point cloud sample based on the two-dimensional regular grid, reconstructing a three-dimensional point cloud from the two-dimensional manifold space so as to convert it into an ordered, spatially uniform regular parameterization on the two-dimensional manifold, accurately establishing the shape feature description of the three-dimensional point cloud, which is conducive to reconstructing a point cloud sample close to the original;
4. the reconstructed point cloud sample is trained with the purpose-designed chamfer distance loss function and nearest neighbor distance loss function, so the output sample is closer to the original point cloud sample, outlier points are reduced, and the output point cloud has a smoother three-dimensional surface, improving the classification accuracy of the classifier model without affecting its recognition accuracy on defended point cloud samples.
Drawings
Fig. 1 is a general flowchart of a 3D point cloud classification attack defense method according to one embodiment of the present application.
Fig. 2 is a flow chart for learning geometric features of a preprocessed point cloud sample.
FIG. 3 is a flow chart for reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid.
Fig. 4 is a flow chart of constraining the iteratively trained reconstructed point cloud sample.
Fig. 5 is a block diagram of a 3D point cloud classification attack defense device according to an embodiment of the present application.
Detailed Description
The present embodiment is merely illustrative of the present application and is not intended to be limiting; those skilled in the art, after having read the present specification, may make modifications to the present embodiment without creative contribution as required, and such modifications are protected by patent law insofar as they fall within the scope of the claims of the present application.
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
In addition, the term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. In this context, unless otherwise specified, the term "/" generally indicates that the associated objects are in an "or" relationship.
The existing three-dimensional point cloud classification attack methods can be roughly divided into three categories: optimization-based, gradient-based, and generation-based. Optimization-based attack methods, such as the C&W attack algorithm, generate an adversarial point cloud through point perturbation or point addition; gradient-based attack methods solve for the perturbed coordinates of each point through fast-gradient or iterative-gradient algorithms; generation-based 3D attack methods, such as the LG-GAN algorithm, use a generative adversarial network to produce an adversarial point cloud guided by an input target label.
Meanwhile, since the publication of PointNet, 3D point cloud deep learning models with successively better performance have appeared, such as PointNet++ and DGCNN, which overcome some of PointNet's shortcomings to a certain extent.
After studying point cloud classification attack methods, it was found that the adversarial samples generated by these attack methods pose serious safety problems for 3D point cloud deep learning models, such as those applied in the fields of autonomous driving, robotics, or graphics.
However, existing 3D point cloud classification attack defense methods, such as random sampling, outlier removal, and up-sampling after noise reduction, all have certain problems. Random sampling and outlier removal have a defensive effect against point perturbation or point addition, but they keep the number of points in an adversarial point cloud sample unchanged or reduce it, and therefore cannot defend against point-deletion attacks; up-sampling after noise reduction has a defensive effect against point-deletion algorithms as well as point perturbation and addition, but it reduces the classification accuracy on original clean samples. Existing 3D point cloud defense techniques can thus only defend against individual attack methods, fail to cover the main adversarial attack methods, and reduce the classification accuracy of undisturbed samples after defense processing, affecting the classification accuracy of the depth model.
Therefore, the research of optimizing the 3D point cloud classification attack defense method and defending the 3D point cloud resistance sample is urgent.
The application provides a defense method that reconstructs 3D point cloud adversarial samples based on a codec: the original sample input to the classification model is preprocessed, then encoded and reconstructed, the reconstructed sample is trained, and it is input into the classification model in place of the original sample. The method has a remarkable defensive effect against the main 3D point cloud classification attack methods, while the classification of clean point cloud samples is unaffected, and the recognition accuracy of undisturbed original samples is preserved after defense processing. The model is simple to train, the original classification model structure does not need to be modified, and the main types of point cloud classification attack can be effectively defended, so the method has good applicability.
Embodiments of the present application are described in further detail below with reference to the drawings attached hereto.
Referring to fig. 1, an embodiment of the present application provides a 3D point cloud classification attack defense method, and main steps of the method are described below.
S1, acquiring an original point cloud sample input into a classification model, and deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
s2, inputting the preprocessed point cloud sample into an encoder, and learning geometric features of the preprocessed point cloud sample based on a DGCNN network structure to obtain a feature point cloud sample containing the geometric features;
s3, inputting geometric features of the feature point cloud samples into a decoder, reconstructing three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional rule grid, and generating a reconstructed point cloud sample;
s4, iteratively training the reconstructed point cloud sample, and limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and the nearest neighbor points of the preset number of the points until the trained output sample approaches the original point cloud sample and then is output;
s5, enabling the output sample to replace the original point cloud sample so as to defend against the attack of the challenge sample.
Specifically, the step S1 of deleting and disturbing the original point cloud sample comprises the following steps:
and determining a plurality of center points of the original point cloud sample through random sampling, and determining a deleting range.
Deleting the center point and a preset number of points nearest to it determines the deleted objects. In this embodiment, each center point and the 5 points nearest to it are deleted in turn.
A motion range is then preset, and the point-deleted original point cloud sample is moved randomly within that range. In this embodiment, the point-deleted original point cloud sample is randomly jittered: each remaining point performs a small-range random movement to simulate the point perturbation of gradient-based or optimization-based attacks, so that the defense network can reconstruct a point cloud close to the original sample from an input adversarial sample.
Through point deletion and random jittering of the points, the defense network can reconstruct point cloud adversarial samples subjected to point-deletion or point-perturbation attacks, and the reconstructed point cloud sample is close to the original point cloud sample.
Further, S2, inputting the preprocessed point cloud sample into an encoder, and learning geometric features of the preprocessed point cloud sample based on the DGCNN network structure to learn more abundant geometric features from the input point cloud, so as to obtain a feature point cloud sample containing the geometric features.
Referring to fig. 2, the step of learning the geometric features of the preprocessed point cloud sample based on the DGCNN network structure includes:
s21, enabling each point cloud in the preprocessed point cloud sample to serve as a vertex of the DGCNN network structure, establishing a point-to-point relationship, and obtaining a point-to-point distance;
s22, calculating the distance between each point and the preset number of nearest neighbor points, taking the calculation result as the distance characteristic of the point, carrying out dimension increasing treatment on the distance characteristic of each point, and then obtaining the local characteristic of each point through dimension decreasing operation;
s23, carrying out iterative operation on the local features of each point to obtain multi-scale local features;
s24, fusing the local features of each layer of the multi-scale local features, and performing dimension reduction operation after fusing to obtain global features containing the local features of each point.
Specifically, the DGCNN (Dynamic Graph CNN for learning on point clouds) network structure uses the idea of graph convolutional networks: it regards each point in the preprocessed point cloud sample as a vertex of the graph and establishes point-to-point relationships, such as edge relationships between points, calculates the distance between each point and its K nearest neighbors, and uses the calculation result as the distance feature of that point, so as to classify the preprocessed point cloud sample, learn richer geometric features, and extract the local and global features of each point.
And performing dimension lifting operation on the distance characteristics of each point of the preprocessed point cloud sample based on MLP (multi-Layer Perception), so that the characteristic information is redundant, and more characteristic information of the point cloud is acquired.
A Pooling operation is then applied to each point's dimension-raised distance features, reducing their dimensionality in a manner analogous to the human visual system to obtain that point's local features. Pooling reduces the feature dimension of the convolutional layer output, which cuts network parameters and computation cost, mitigates over-fitting, and retains the salient geometric information contained in the redundant features.
The MLP then iterates the above operations on the pooled local point cloud features to obtain multi-scale local point features, e.g. 64, then 128, then 256 dimensions, and so on.
Finally, the local features of each iteration layer of the point cloud sample are fused; after fusion, a Pooling operation reduces the dimensionality, yielding a global feature that contains the local features of the point cloud and forming the feature point cloud sample.
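A minimal sketch of this fusion step, assuming concatenation of the per-scale features followed by max pooling over points (a common DGCNN-style choice); the embodiment only specifies fusing and then reducing dimensionality:

```python
import numpy as np

def fuse_multiscale(features):
    """Fuse per-point features from several scales into one global feature.

    features: list of (N, d_i) arrays, e.g. the 64-, 128- and 256-dimensional
    per-point features from successive MLP stages.
    """
    fused = np.concatenate(features, axis=1)   # (N, sum d_i) per-point features
    return fused.max(axis=0)                   # (sum d_i,) pooled global feature
```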
In this embodiment, the encoder may also learn richer geometric features using other network structures similar to DGCNN to obtain a feature point cloud sample containing the geometric features.
And S3, inputting the geometric features of the feature point cloud sample into a decoder, reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating a reconstructed point cloud sample, which helps the neural network reconstruct the point cloud with better detail.
The two-dimensional regular grid is obtained by equidistant sampling along the x-axis and y-axis directions of a two-dimensional space over the interval [-1, 1].
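Such a grid can be generated, for example, as follows; with 45 samples per axis this yields a 45 × 45 grid of 2025 two-dimensional points:

```python
import numpy as np

def regular_grid_2d(side=45, low=-1.0, high=1.0):
    """Equidistant side x side grid over [low, high]^2.

    Returns a (side*side, 2) array of grid points.
    """
    ticks = np.linspace(low, high, side)       # equidistant samples per axis
    gx, gy = np.meshgrid(ticks, ticks)         # (side, side) coordinate grids
    return np.stack([gx.ravel(), gy.ravel()], axis=1)
```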
Referring to fig. 3, the step of reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid includes:
s31, fusing the dimensions of each point of the two-dimensional regular grid with the global feature containing the local features, the fusion result serving as the fused feature of the global feature containing the local features and the two-dimensional regular grid;
and S32, performing a dimension-reduction operation on the fused features down to three dimensions to reconstruct a three-dimensional reconstructed point cloud sample.
Specifically, the feature point cloud sample and the two-dimensional regular grid are input into the decoder, and deep learning is performed through two multi-layer perceptrons with similar network structures. The two-dimensional regular grid is 45 × 45, 2025 points in total, each point being two-dimensional; the global feature containing the local features is 512-dimensional. The global feature is replicated across the 2025 points and then concatenated with the preset two-dimensional regular grid, giving each point a 514-dimensional feature (512 + 2 = 514).
The 514-dimensional feature of each point then serves as the fused feature of the global feature containing the local features and the two-dimensional regular grid. MLP operations are applied to the 514-dimensional features until they are reduced to 3 dimensions, i.e. the X, Y and Z axes of the three-dimensional coordinates. This process is equivalent to reconstructing the three-dimensional point cloud shape from the two-dimensional manifold space using the point cloud's global feature containing the local features, generating the reconstructed point cloud sample.
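The replication-and-fusion step can be sketched as follows; the folding MLP that subsequently maps the 514-dimensional features down to 3 dimensions is omitted here.

```python
import numpy as np

def fuse_grid_with_global(grid, global_feat):
    """Attach a global feature to every grid point.

    grid: (M, 2) regular grid; global_feat: (D,) global feature.
    With M = 2025 and D = 512 each fused point has 514 dimensions,
    matching the description above.
    """
    tiled = np.tile(global_feat, (grid.shape[0], 1))  # replicate to all M points
    return np.concatenate([grid, tiled], axis=1)      # (M, 2 + D) fused features
```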
S4, iteratively training the reconstructed point cloud sample, constraining the distance between the original point cloud sample and the reconstructed point cloud sample as well as the distance between each point in the reconstructed point cloud sample and its preset number of nearest neighbor points, so that the trained output sample is closer to the original point cloud sample, outlier points are reduced, the output sample has a smoother surface, and the influence on the classifier model's classification is reduced.
Referring to fig. 4, further, the step of limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and its preset number of nearest neighbor points includes:
And S41, setting a chamfer distance loss function expression to constrain the distance between the original point cloud sample and the reconstructed point cloud sample. The smaller the distance between the original point cloud sample and the reconstructed point cloud sample, the closer the output sample is to the original point cloud sample. The Chamfer loss (chamfer distance loss function) is expressed as:

$$L_{cd}(S,\hat{S})=\frac{1}{|S|}\sum_{x\in S}\min_{\hat{x}\in\hat{S}}\lVert x-\hat{x}\rVert_2^2+\frac{1}{|\hat{S}|}\sum_{\hat{x}\in\hat{S}}\min_{x\in S}\lVert\hat{x}-x\rVert_2^2$$

where x is any point in the original point cloud S and x̂ is any point in the reconstructed point cloud Ŝ.
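A direct numpy transcription of the Chamfer distance between the original point cloud S and the reconstruction Ŝ (mean squared nearest-neighbor distance in both directions):

```python
import numpy as np

def chamfer_loss(S, S_hat):
    """Symmetric Chamfer distance between S (N, 3) and S_hat (M, 3):
    for each point, the squared distance to its nearest neighbor in the
    other set, averaged over each set and summed."""
    d2 = np.sum((S[:, None, :] - S_hat[None, :, :]) ** 2, axis=-1)  # (N, M)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```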
S42, setting a nearest neighbor distance loss function expression to constrain the distance between each point in the reconstructed point cloud sample and its preset number of nearest neighbor points. The K-Nearest Neighbor (KNN) term is used to avoid outliers in the process of reconstructing the point cloud and to generate a smoother surface. During surface reconstruction, points far from the surface become outliers that easily mislead the classifier model; therefore, to reconstruct a better three-dimensional surface and reduce the generation of outliers, the application adds a KNN distance constraint. Suppose x̂ is any point in the reconstructed point cloud Ŝ and its k nearest neighbor points are {x'₁, x'₂, ..., x'ₖ}. To keep the distance between each reconstructed point x̂ and its k nearest neighbors as small as possible during three-dimensional reconstruction, the KNN loss is defined as:

$$L_{knn}(\hat{S})=\frac{1}{|\hat{S}|}\sum_{\hat{x}\in\hat{S}}\frac{1}{k}\sum_{i=1}^{k}\lVert\hat{x}-x'_i\rVert_2^2$$
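The KNN distance term can be sketched as the mean squared distance from each reconstructed point to its k nearest neighbors within the reconstruction; k = 5 below is a placeholder, since the embodiment only speaks of a preset number:

```python
import numpy as np

def knn_loss(S_hat, k=5):
    """Mean squared distance from each point of the reconstruction to its
    k nearest neighbors inside the reconstruction; penalizes outliers
    that drift away from the reconstructed surface."""
    d2 = np.sum((S_hat[:, None, :] - S_hat[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)            # a point is not its own neighbor
    k = min(k, len(S_hat) - 1)              # guard for small point sets
    return np.sort(d2, axis=1)[:, :k].mean()
```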
The two loss functions are optimized simultaneously during training, so that the generated output sample approaches the original point cloud sample while containing no outliers.
And S43, summing the chamfer distance loss function and the nearest neighbor distance loss function to obtain an expression for their sum, and reducing it by gradient descent until the sum converges, i.e. reaches its minimum, so as to constrain the iteratively trained reconstructed point cloud sample and bring the generated sample close to the original sample. The loss of the whole training model is designed as:

$$L = L_{cd} + \alpha L_{knn}$$

where α is a user-defined parameter for balancing the chamfer distance loss and the KNN distance loss during training.
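The overall training loss can then be sketched as follows; the values α = 1.0 and k = 5 are placeholders:

```python
import numpy as np

def total_loss(S, S_hat, alpha=1.0, k=5):
    """L = L_chamfer + alpha * L_knn, per the sum described above."""
    # Chamfer term: mean squared nearest-neighbor distance, both directions
    d2 = np.sum((S[:, None, :] - S_hat[None, :, :]) ** 2, axis=-1)
    chamfer = d2.min(axis=1).mean() + d2.min(axis=0).mean()
    # KNN term: mean squared distance to the k nearest reconstructed points
    dh = np.sum((S_hat[:, None, :] - S_hat[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(dh, np.inf)
    k = min(k, len(S_hat) - 1)              # guard for small point sets
    knn = np.sort(dh, axis=1)[:, :k].mean()
    return chamfer + alpha * knn
```

In training, this scalar would be minimized by gradient descent over the decoder's parameters; the numpy version here only evaluates the loss.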
S5, the distance is continuously reduced through gradient descent, making the generated output sample approach the original point cloud sample, until the trained output sample approaches the original point cloud sample, i.e. the sum of the chamfer distance loss function and the nearest neighbor distance loss function converges.
According to the method, the generated sample is visually compared with the original point cloud sample, and whether the generated output sample is close to the original point cloud sample is judged by the Chamfer distance between the output sample and the original point cloud sample.
When the sum of the Chamfer loss and the KNN loss converges, i.e. the visually compared generated sample is close to a normal sample, training ends, and the output sample replaces the original point cloud sample to defend against adversarial-sample attacks.
In this embodiment, three-dimensional point cloud classification datasets, such as the training sets of ModelNet40 and ShapeNet, are used for training. During training, point deletion and point jitter preprocessing are applied to the original point cloud samples, the codec then reconstructs the samples, and finally the reconstructed point cloud samples are iterated and trained several times under the constraint of the two distance loss functions.
The method filters outliers in the point cloud through a statistical method, first uses the encoder to encode and obtain the feature point cloud sample, inputs it into the decoder, reconstructs it based on the two-dimensional grid to obtain the reconstructed point cloud sample, and trains the reconstructed point cloud sample until the sum of the chamfer distance loss function and the nearest neighbor distance loss function converges; when the result generated by the visualized output sample is close to the original point cloud sample, the output sample is input into the classifier for recognition.
Comparing the accuracy of the output samples with that of point cloud samples without defensive processing shows that the output samples achieve a better defense effect.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
Furthermore, the 3D point cloud classification attack defense method encodes and learns a point cloud sample that may have suffered a point-deletion attack or point-perturbation attack to obtain the feature point cloud sample, then reconstructs the feature point cloud sample, and trains the reconstructed point cloud sample toward a point cloud close to a clean sample, thereby defending against point-deletion and point-perturbation attacks without affecting the model's recognition accuracy on the defended point cloud samples, and with strong generality.
Referring to fig. 5, the embodiment of the present application further provides a codec-based 3D point cloud classification attack defense device, which corresponds one-to-one with the 3D point cloud classification attack defense method in the foregoing embodiments. The codec-based 3D point cloud classification attack defense device comprises:
the preprocessing module is used for acquiring an original point cloud sample input into the classification model, deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
the feature module is used for inputting the preprocessed point cloud sample into the encoder, learning the geometric features of the preprocessed point cloud sample based on the DGCNN network structure, and obtaining a feature point cloud sample containing the geometric features;
the reconstruction module is used for inputting the geometric features of the feature point cloud samples into the decoder, reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating reconstructed point cloud samples;
the training module is used for iteratively training the reconstructed point cloud sample, limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and the nearest neighbor points of the preset number of the points until the trained output sample approaches the original point cloud sample and then is output;
and the defense module is used for enabling the output sample to replace the original point cloud sample so as to defend against the attack of the challenge sample.
For specific limitations of the codec-based 3D point cloud classification attack defense device, reference may be made to the limitations of the 3D point cloud classification attack defense method above, which are not repeated here. The modules in the codec-based 3D point cloud classification attack defense device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, the processor of the computer device, or stored as software in the memory of the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement the above-described 3D point cloud classification attack defense method.
In one embodiment, a computer readable storage medium is provided which stores a computer program that, when executed by a processor, implements the following steps:
s1, acquiring an original point cloud sample input into a classification model, and deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
s2, inputting the preprocessed point cloud sample into an encoder, and learning geometric features of the preprocessed point cloud sample based on a DGCNN network structure to obtain a feature point cloud sample containing the geometric features;
s3, inputting the geometric features of the feature point cloud samples into a decoder, reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating a reconstructed point cloud sample;
s4, iteratively training the reconstructed point cloud sample, and limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and the nearest neighbor points of the preset number of the points until the trained output sample approaches the original point cloud sample and then is output;
s5, enabling the output sample to replace the original point cloud sample so as to defend against the attack of the challenge sample.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be implemented by a computer program stored on a non-volatile computer readable storage medium; when executed, the program may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the various embodiments provided herein may include non-volatile and/or volatile memory.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be distributed among different functional units and modules as needed, i.e. the internal structure of the system may be divided into different functional units or modules to perform all or part of the functions described above.

Claims (8)

1. The 3D point cloud classification attack defense method is characterized by comprising the following steps of:
acquiring an original point cloud sample input into a classification model, and deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
inputting the preprocessed point cloud sample into an encoder, and learning geometric features of the preprocessed point cloud sample based on a DGCNN network structure to obtain a feature point cloud sample containing the geometric features;
inputting the geometric features of the feature point cloud samples into a decoder, reconstructing three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating a reconstructed point cloud sample;
iteratively training the reconstructed point cloud sample, and limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and the nearest neighbor points of the preset number of the points until the trained output sample is close to the original point cloud sample and then output;
causing the output sample to replace an original point cloud sample to defend against attacks against the sample;
the step of learning the geometric features of the preprocessing point cloud sample based on the DGCNN network structure comprises the following steps:
each point in the preprocessed point cloud sample is used as a vertex of the DGCNN network structure, a point-to-point relationship is established, and a distance between the points is obtained;
calculating the distance between each point and the preset number of nearest neighbor points, taking the calculation result as the distance characteristic of the point, carrying out dimension increasing treatment on the distance characteristic of each point, and then obtaining the local characteristic of each point through dimension decreasing operation;
performing iterative operation on the local features to obtain multi-scale local features;
fusing the local features of each layer of the multi-scale local features, and performing dimension reduction operation after fusing to obtain global features containing the local features of each point;
the step of reconstructing the three-dimensional point cloud from the two-dimensional manifold space based on the two-dimensional regular grid comprises the following steps:
fusing the dimension of each point of the two-dimensional regular grid with the global feature containing the local feature, wherein the fusion result is used as the fusion feature of the global feature containing the local feature and the two-dimensional regular grid;
and performing dimension reduction operation based on the fusion characteristics to reduce dimension to three dimensions and reconstruct a three-dimensional reconstruction point cloud sample.
2. The 3D point cloud classification attack defense method according to claim 1, wherein the step of limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and its preset number of nearest neighbor points comprises:
setting a representation of a chamfer distance loss function to constrain a distance between the original point cloud sample and the reconstructed point cloud sample;
setting the expression of the nearest neighbor distance loss function to restrict the distance between each point in the reconstructed point cloud sample and the preset number of nearest neighbor points;
summing the chamfer distance loss function and the nearest neighbor distance loss function to obtain a relational expression of the sum of the chamfer distance loss function and the nearest neighbor distance loss function so as to limit the reconstructed point cloud sample of iterative training.
3. The 3D point cloud classification attack defense method according to claim 2, wherein the sum of the chamfer distance loss function and the nearest neighbor distance loss function converges when the trained output sample approaches the original point cloud sample.
4. A method of defending against a 3D point cloud classification attack according to any of claims 1-3, wherein the step of performing point deletion on the original point cloud sample comprises:
determining a plurality of center points of the original point cloud sample through random sampling;
and deleting the center point and the point clouds with the preset quantity nearest to the center point.
5. A method of defending against a 3D point cloud classification attack according to any of claims 1-3, wherein the step of performing a point perturbation on the original point cloud sample comprises:
and presetting a motion range, and enabling the original point cloud sample to randomly move in the motion range.
6. A 3D point cloud classification attack defense device, comprising:
the preprocessing module is used for acquiring an original point cloud sample input into the classification model, deleting and disturbing points of the original point cloud sample to acquire a preprocessed point cloud sample;
the feature module is used for inputting the preprocessed point cloud sample into the encoder, learning the geometric features of the preprocessed point cloud sample based on the DGCNN network structure, and obtaining a feature point cloud sample containing the geometric features; the method comprises the steps of learning geometric features of the preprocessed point cloud samples based on a DGCNN network structure, wherein each point in the preprocessed point cloud samples is used as a vertex of the DGCNN network structure, establishing a point-to-point relationship, and obtaining a distance between the points; calculating the distance between each point and the preset number of nearest neighbor points, taking the calculation result as the distance characteristic of the point, carrying out dimension increasing treatment on the distance characteristic of each point, and then obtaining the local characteristic of each point through dimension decreasing operation; performing iterative operation on the local features to obtain multi-scale local features; fusing the local features of each layer of the multi-scale local features, and performing dimension reduction operation after fusing to obtain global features containing the local features of each point;
the reconstruction module is used for inputting the geometric features of the feature point cloud samples into a decoder, reconstructing a three-dimensional point cloud from a two-dimensional manifold space based on a two-dimensional regular grid, and generating a reconstructed point cloud sample; reconstructing the three-dimensional point cloud from the two-dimensional manifold space based on the two-dimensional regular grid comprises fusing the dimensions of each point of the two-dimensional regular grid with the global feature containing the local features, the fusion result serving as the fused feature of the global feature containing the local features and the two-dimensional regular grid; and performing a dimension-reduction operation on the fused features down to three dimensions to reconstruct a three-dimensional reconstructed point cloud sample;
the training module is used for iteratively training the reconstructed point cloud sample, limiting the distance between the original point cloud sample and the reconstructed point cloud sample and the distance between each point in the reconstructed point cloud sample and the nearest neighbor points of the preset number of the points until the trained output sample approaches the original point cloud sample and then outputs the obtained sample;
and the defending module is used for enabling the output sample to replace the original point cloud sample so as to defend against the attack of the countering sample.
7. A computer device comprising a memory, a processor and a computer program stored on the memory, the processor executing the computer program to perform the steps of the method of any of claims 1-5.
8. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1-5.
CN202111081192.XA 2021-09-15 2021-09-15 3D point cloud classification attack defense method, device, equipment and storage medium Active CN113838211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111081192.XA CN113838211B (en) 2021-09-15 2021-09-15 3D point cloud classification attack defense method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113838211A CN113838211A (en) 2021-12-24
CN113838211B true CN113838211B (en) 2023-07-11

Family

ID=78959423


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115100503B (en) * 2022-07-29 2024-05-07 电子科技大学 Method, system, storage medium and terminal for generating countermeasure point cloud based on curvature distance and hard concrete distribution
CN115834857B (en) * 2022-11-24 2024-03-19 腾讯科技(深圳)有限公司 Point cloud data processing method, device, equipment and storage medium
CN115937638B (en) * 2022-12-30 2023-07-25 北京瑞莱智慧科技有限公司 Model training method, image processing method, related device and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112435331A (en) * 2020-12-07 2021-03-02 上海眼控科技股份有限公司 Model training method, point cloud generating method, device, equipment and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Dynamic Graph CNN for Learning on Point Clouds; Yue Wang et al.; ACM Transactions on Graphics, Vol. 38, No. 5, Article 146 *
Supervised intrusion detection based on active learning and TCM-KNN; Li Yang et al.; Chinese Journal of Computers, No. 8, pp. 1464-1473 *
Interpolation-based adversarial attack defense algorithm; Fan Yuhao et al.; Cyberspace Security, No. 4, pp. 74-77 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant